April 16, 2021

Check the Math: If We Want an Equitable Society, We Need to Police AI

Jason Brinkley, Ph.D.
Data Scientist Principal

If America is going to truly, finally reckon with systemic racial—and gender, and income—injustice, then we have to address the data and analytics that prop up those systems.

As President Biden’s recent Executive Order calling for more accurate census data collection reminds us, data can be manipulated in ways that lead to poorer outcomes for families of color. Technologies such as artificial intelligence (AI) and machine learning (ML) now analyze enormous amounts of data, enabling governments and businesses to make decisions that affect the entire nation. But what happens when that data is biased in ways that harm whole demographic groups, from women to people of color to low-income families?

For example, housing choice, which can affect educational, health, and ultimately employment outcomes, is determined at least in part by financing, which requires access to credit. Credit scoring is essentially a statistical analysis of data, and it favors whites over people of color because that analysis captures, and builds on, decades of biased data created through policies that surreptitiously and illegally supported housing segregation. That corrupted data is determining the future for the next generation of families who have already been discriminated against.

As a data scientist, I rely on technologies such as AI and ML every day. Collectively, we’ll need such tools to get in front of the next health epidemic before it becomes a pandemic. But if we’re going to seize this moment to address inequity in our society, we need to ensure our tools are properly calibrated. Algorithms aren’t inherently biased. They work, and work well, with the data we give them. But as we continue to expand the reach of the information age, we’re relying on, and building, algorithms that use data packed with systemic biases based on race, ethnicity, gender and sexual identity, disability, and more. The uncomfortable truth is that the systemic “isms” (sexism, racism, ageism) have been made worse, at times multiplied, by machine-driven decision-making.

So, how do we fix it?  First, we have to stop seeking out simple solutions for complex problems.

As we’ve found better ways to put numbers to work, we’ve let the algorithms do the heavy lifting, abdicating human judgment on the front end. For example, initial attempts to eliminate bias from sentencing in the justice system have been made worse by the use of AI. Not because the algorithms are bad, but because, as with housing, the data that feeds them (who gets what sentences, who recidivates) is built on decades of bias. Similarly, we haven’t had time to assess bias in the nation’s COVID vaccination strategy, but we know there are systemic issues that lead to whites living longer than Blacks. So an algorithm that targets the people at highest risk of dying, say, older individuals (who are mostly white), would be less racially equitable, leaving front-line workers (a racially diverse group) at higher risk of exposure while waiting for their COVID vaccine. Deploying AI to address these problems may result in more uniform sentencing or fewer COVID deaths, as intended, but those results may still be inequitable, which is absolutely counter to the overall intent.

Whatever the process we’re trying to automate (criminal sentencing, vaccine deployment, etc.), we need to look upstream in that process to first identify the disparities we’re trying to prevent. We need to collect higher-quality data to ensure we can identify those disparities. We need AI/ML algorithms that are designed to measure and account for these biases. Finally, we need better metrics for evaluating algorithms so we can ensure they perform as intended without increasing existing disparities.
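To make that last step concrete, here is a minimal sketch, in Python, of the kind of group-level audit I have in mind: compare a model’s selection rates and error rates across demographic groups and flag large gaps. The column names, metrics, and usage line are illustrative assumptions, not a prescribed method; a real evaluation would need far more care than this.

```python
import pandas as pd

def disparity_report(df, group_col, decision_col, outcome_col):
    """Compare simple group-level metrics for an algorithm's binary decisions.

    df           : one row per person
    group_col    : demographic attribute (e.g., race/ethnicity)
    decision_col : the algorithm's binary decision (1 = selected/flagged)
    outcome_col  : the observed binary outcome (1 = event occurred)
    """
    rows = []
    for group, g in df.groupby(group_col):
        negatives = g[g[outcome_col] == 0]
        positives = g[g[outcome_col] == 1]
        rows.append({
            group_col: group,
            # Demographic parity: how often each group is selected/flagged
            "selection_rate": g[decision_col].mean(),
            # Error-rate balance: how often each group is wrongly flagged, or correctly identified
            "false_positive_rate": negatives[decision_col].mean() if len(negatives) else float("nan"),
            "true_positive_rate": positives[decision_col].mean() if len(positives) else float("nan"),
        })
    report = pd.DataFrame(rows).set_index(group_col)
    gaps = report.max() - report.min()  # large gaps between groups flag potential disparities
    return report, gaps

# Hypothetical usage with made-up column names:
# report, gaps = disparity_report(scored_data, "race", "flagged", "recidivated")
```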

There are steps the research community is taking now to tackle bias from different perspectives. One is the development of interpretable AI, which seeks to have algorithms developed by computers but present their decision rules in a way humans can list, diagram, and track. Fair AI seeks to force algorithms to consider something like race during development so that the probability of good or poor outcomes is equitable across groups. Humble AI aims to define when machines can make decisions and when humans have to step in. This delineation of AI types reflects the research community’s efforts to respond to the rising challenge of equity and equality in AI and ML algorithms.
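To give a flavor of the humble AI idea, here is a minimal sketch, in Python, of an abstention rule that lets a model act only when it is confident and otherwise hands the case to a human reviewer. The function name, the 0.9 threshold, and the usage line are illustrative assumptions, not a standard implementation.

```python
import numpy as np

def humble_decision(probabilities, confidence_threshold=0.9):
    """Automate a decision only when the model is confident.

    probabilities        : array of predicted class probabilities, shape (n, k)
    confidence_threshold : minimum top-class probability required to automate
                           (0.9 is an assumed, illustrative value)
    """
    top_prob = probabilities.max(axis=1)                    # model's confidence per case
    decision = probabilities.argmax(axis=1).astype(object)  # default: automated choice
    decision[top_prob < confidence_threshold] = "refer_to_human"  # defer low-confidence cases
    return decision

# Hypothetical usage with a scikit-learn-style classifier:
# decisions = humble_decision(model.predict_proba(X), confidence_threshold=0.9)
```

The point isn’t the specific threshold; it’s that the system is designed up front to know the limits of its own judgment.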

The history of race and fear of the other in this country is shameful, so it’s not surprising that our historical databases reflect unconscionable biases. But once we identify our goals, data can help us identify solutions. We just have to do the work of selecting the right analytic tool, checking our math, and being committed to supporting the dignity and well-being of our neighbors.
