Research

Joining the ethics board of Machine Intelligence for People

A research note on applied AI ethics, written from the practical side of reviewing projects at the intersection of healthcare, sustainability, transparency, and public benefit.

In April 2023, I joined the ethics board of Machine Intelligence for People, a nonprofit led by Dr. Paul Springer. The role is less about abstract ethics and more about project judgment: looking at applied AI ideas before they become systems that affect people.

That matters especially in healthcare, sustainability, and environmental contexts. A model can be technically interesting and still be incomplete as a real intervention. The work is to ask what the system changes, who depends on it, where failure would matter, and what has to be transparent before it can be trusted.

Working question

The practical question is not only whether an AI system works. It is whether the system is understandable, accountable, and appropriate for the context in which people are expected to use it.

What I look for

  • Whether the problem is clearly defined before AI is introduced as the solution.
  • Whether the people affected by the system can understand, challenge, or recover from its outputs.
  • Whether performance, incentives, access, and failure modes are considered together.

Why this belongs here

This is not a publication in the traditional sense, but it relies on the same research habits: separating signal from noise, making assumptions explicit, and testing whether a claim survives contact with real-world constraints.

AI ethics becomes useful when it is tied to decisions. It has to show up in product scope, data choices, evaluation, deployment, documentation, and the institutional setting around the tool.

Two levels of review

Model quality

The system needs to perform well enough for the task, with clearly stated limits and evidence that its output can be relied on.

Use context

The system also needs to fit the people, workflows, risks, incentives, and institutions around it.