

Why Bias in AI is More than Bias in Datasets

In honor of Juneteenth 2022, I wrote about how bias in AI goes deeper than many people assume: bias in AI is more than bias in datasets.


➤ How bias in datasets can cause bias in AI decisions


AI systems learn from the data they are fed. Therefore, the data we feed the algorithm is extremely important. Garbage in, garbage out.


For example, imagine a hiring algorithm. If all the top-performing resumes in its training data have male names, the algorithm may learn to recommend only male candidates for jobs.
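
To make the mechanism concrete, here is a minimal sketch (with made-up toy data, not any real hiring system) of how a classifier trained on gender-skewed labels learns to score identical resumes differently:

```python
# Toy illustration with made-up data: every "hire" label in the training
# set happens to belong to a male candidate, so the model learns to use
# gender as a predictive signal.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, is_male]. In real pipelines gender
# often enters indirectly, e.g. through names on resumes.
X_train = [
    [5, 1], [6, 1], [7, 1], [8, 1],  # male candidates, labeled "hire"
    [5, 0], [6, 0], [7, 0], [8, 0],  # equally qualified women, labeled "reject"
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two identical resumes that differ only in gender get very different scores:
print(model.predict_proba([[6, 1]])[0, 1])  # male candidate: high "hire" probability
print(model.predict_proba([[6, 0]])[0, 1])  # female candidate: low "hire" probability
```

Because experience is identical across the two groups in this toy data, the only feature the model can use to separate "hire" from "reject" is the gender column. Garbage in, garbage out.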


Because of the importance of the training dataset, people often reduce problems of bias in AI to bias in datasets. That is a misconception.



➤ The COMPAS scandal


COMPAS is a recidivism algorithm: it calculates the likelihood that defendants will reoffend. It was widely used in the US criminal justice system to inform decisions about defendants.


In 2016, ProPublica exposed that COMPAS's predictions were biased against black people: black defendants who did not go on to reoffend were twice as likely as white defendants to be labeled high risk. White defendants who did go on to reoffend were 1.67 times more likely to be labeled low risk (link in the comments).
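
The kind of error-rate audit ProPublica ran can be sketched in a few lines. The records below are synthetic, chosen only so the black false-positive rate comes out double the white one; they are not the actual COMPAS data:

```python
# Each record: (group, labeled_high_risk, actually_reoffended).
# Synthetic numbers, chosen to mirror the direction of ProPublica's finding.
records = [
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),
    ("white", False, False), ("white", False, False), ("white", True,  False),
    ("white", False, True),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the share labeled high risk."""
    innocent = [high for g, high, reoffended in records
                if g == group and not reoffended]
    return sum(innocent) / len(innocent)

for group in ("black", "white"):
    print(f"{group}: FPR = {false_positive_rate(group):.2f}")
# black: FPR = 0.67, white: FPR = 0.33 -- same model, asymmetric errors
```

The point of auditing error rates per group, rather than overall accuracy, is exactly this: a model can look similarly accurate for everyone while the direction of its mistakes differs sharply by race.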


➤ A potential source of bias: the COMPAS questionnaire


COMPAS's decisions were based on a questionnaire defendants filled out (link in the comments). Some of the questions seem to be biased against black people.


For example:

-->"How many of your friends/acquaintances have been arrested?"


Discrimination in arrest patterns in the US may lead black people to have more friends who have been arrested, regardless of their inclination to commit crimes.


-->"Which of the following best describes who raised you? Both natural parents/ Natural mother only…."


In the US, as of 2017, 74% of white children lived with both their parents, while only 40% of black children did (link in the comments).


--> How much do you agree with the statement: “The law doesn’t help average people”


Who counts as an "average person," and whether the law helps them or is perceived to help them, depends on one's background. Systems like COMPAS demonstrate that the law can indeed be less helpful to black people. Therefore, black defendants might accurately assess that the law is not as helpful to those they see as average people.


--> How much do you agree with the statement: “Some people see me as a violent person”


In the US, black people are often perceived as more violent. Therefore, black defendants might accurately assess that they are perceived as violent, even if they are not actually violent.
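
The common thread in these questions is that they can act as statistical proxies for race. A minimal sketch with synthetic data (the correlation strengths are assumptions for illustration, not COMPAS's real statistics) shows how a risk rule that never sees race can still produce racially skewed labels:

```python
import random

random.seed(0)

# Assumption for illustration: both groups have the SAME underlying
# reoffense rate, but discriminatory arrest patterns mean black
# defendants' friends are arrested more often, inflating the proxy answer.
def friends_arrested(group):
    return random.gauss(4.0 if group == "black" else 1.0, 1.0)

# A "race-blind" risk rule: it never sees race, only the questionnaire answer.
def risk_label(n_friends_arrested):
    return "high" if n_friends_arrested > 2.5 else "low"

for group in ("black", "white"):
    high = sum(risk_label(friends_arrested(group)) == "high" for _ in range(1000))
    print(f"{group}: labeled high risk {high / 10:.0f}% of the time")
```

By construction, nothing about actual criminal inclination differs between the two groups here; the large gap in "high risk" labels comes entirely from the proxy question. Removing race from the inputs does not remove bias from the output.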


➤ Join the discussion about this topic on my LinkedIn page here
