Bias, Diversity and Truth in AI: NJIT Alum Sanmi Koyejo Advocates for Change
Artificial intelligence is transforming our world in ways that spark curiosity, excitement and concern. Sanmi Koyejo ’05, assistant professor of computer science at Stanford University and founder and president of Black in AI, explored its profound impact across various domains, addressing pressing questions about the technology’s societal implications.
The conversation delved into the rise of AI in the public consciousness, the challenge of countering biases in automated systems, its potential to revolutionize health care and even its capacity to unlock mysteries of the human brain. Koyejo also shared personal insights on advocating for inclusivity in AI through the founding of Black in AI, an international organization championing diversity in the field. This dialogue sheds light on the opportunities and challenges of AI, highlighting a vision for harnessing its potential to create a more equitable and beneficial future.
Q: Should we be alarmed about the rise of AI?
A: AI has been around for a long time and its deployment is ubiquitous. Almost every online platform involves some automated decision making: Google web searches, fraud detection and even hiring decisions rely on automated systems. What has changed recently, in the public consciousness at least, is that OpenAI released ChatGPT, and people saw that these tools seemed to work well, and in a way they could engage with easily. I’m trying to make sure that the impact on society is more beneficial than not.
Q: How can you counter bias?
A: The way these tools are built depends strongly on the data that exists, so behavior in the past is used to make predictions about behavior in the future. This can be problematic, as when decisions correlate with past histories of discrimination. It shows up in a benign way in how differently video cameras work for people with different skin tones, but it also shows up in police applications that make more mistakes on people with darker skin tones and can result in people being imprisoned incorrectly. Having people in the room who have different lived experiences, not just diversity of thought, has a direct impact on building tools that work for everyone.
Q: How can AI improve our health?
A: You often get a diagnosis for a disease because there are symptoms. But we built tools that take existing information from people, including a tool that looks at X-ray images and can diagnose conditions that are not necessarily tied to a medical visit. This means there are now possibilities to find diseases early, before symptoms appear and before it’s too late for significant interventions. We started with diabetes, but I’m quite excited about this kind of technology for lots of diseases that are pressing in the U.S. and across the world.
Q: What can AI teach us about the brain?
A: My lab is building tools that help better estimate, from observing signals, how different regions of the brain are connected to each other; how those connections relate to brain function, behavior and brain disorders; and how we can use this as a diagnostic tool.
Q: Can AI help us detect biases in health care that affect outcomes?
A: Unfortunately, a lot of the history of medical care has systemic biases of different care being administered to different groups, sometimes by demographics and sometimes by wealth status. It’s sometimes hard to tell which differences are biological and which are due to the biases of the decision makers. We were looking at care around COVID and found systematic differences in care and outcomes that were tied to demographics. An interesting finding was that language seemed to be a really good predictor of who got better or worse care. If people came in and they didn’t speak English sufficiently well, their care was systematically worse, and so were their outcomes.
Q: What prompted you to be a founding member (and current president) of Black in AI?
A: Circa 2016, a few of us came together with the shared experience of being the only Black person we knew working in the field. At the time, some impacts of technology could be explained by the fact that there was no one in the room with the lived experience to speak to a tool being built that would have a worse outcome for certain demographics. We’re about 5,000 people now and international. It’s rewarding to hear stories from people who tell us they would not be in the field if the organization did not exist.