
How digital tools reinforce racial and ethnic bias

Research at UC Davis shows the dangers of algorithmic bias

By: Kriti Varghese — SCIENCE@THEAGGIE.ORG

June 4, 2019

Digital tools are omnipresent in everyday life, and while they have a reputation for being objective and unbiased, that reputation isn’t entirely deserved. If left unaddressed and unregulated, digital tools could inadvertently reinforce existing racial inequalities. Without formal regulation, public awareness and further research, algorithms could keep propagating consequences that fall along racial lines.

“There’s an old saying in the computer business: ‘Garbage in, garbage out,’” said Steven M. Bellovin, a professor in the Department of Computer Science at Columbia University who wrote about the effects of algorithmic bias on artificial intelligence. “Unfortunately, most people blindly believe the output of computers — and if the inputs are bad, the outputs will be bad. In other words, ‘garbage in, gospel out.’”

A recent UC Davis study maps out where bias can be located in an algorithm and how it can be addressed depending on where it occurs. According to the paper, there are five phases in its model of algorithmic decision-making: input, algorithmic operations, output, users and feedback.

Within these five phases, nine types of bias can occur: training data bias, algorithmic focus bias, algorithmic processing bias, transfer context bias, misinterpretation bias, automation bias, non-transparency bias, consumer bias and feedback loop bias. In other words, bias can originate in the data, in the algorithm itself or with the people who use the algorithm or its output.
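To make the taxonomy easier to picture, here is a minimal Python sketch that encodes the five phases and nine bias types as a simple lookup table. The mapping of each bias type to a phase is an illustrative guess for this article, not an assignment taken from the paper itself.

PHASES = ["input", "algorithmic operations", "output", "users", "feedback"]

# Which phase each bias type arises in; this mapping is illustrative,
# not taken from the paper.
BIAS_TYPES = {
    "training data bias": "input",
    "algorithmic focus bias": "algorithmic operations",
    "algorithmic processing bias": "algorithmic operations",
    "transfer context bias": "output",
    "non-transparency bias": "output",
    "misinterpretation bias": "users",
    "automation bias": "users",
    "consumer bias": "users",
    "feedback loop bias": "feedback",
}

def biases_in_phase(phase):
    """Return the bias types that can arise in a given phase."""
    return [b for b, p in BIAS_TYPES.items() if p == phase]

for phase in PHASES:
    print(phase, "->", ", ".join(biases_in_phase(phase)))

The practical point of such a table is that each phase calls for a different remedy: data problems are fixed upstream, while user-side biases require changes to how outputs are presented and reviewed.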

“So the data and algorithm may not be biased, but the user interacts with the platform in a biased way,” said Martin Kenney, a professor in the Department of Community and Regional Development and co-author of the paper. “For example, say an Uber rider or driver rates someone negatively due to their ethnicity. Here, the data and algorithms may be completely unbiased, but the human being making decisions expresses their bias.”
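Kenney’s example can be simulated in a few lines: two groups of drivers deliver identical service, but a fraction of riders dock one group’s ratings. All of the numbers below are invented for illustration; only the pattern matters.

import random

random.seed(0)

def average_rating(group, n_rides=10_000, biased_rider_share=0.15):
    """Simulate ride ratings; true service quality is identical for both groups."""
    total = 0
    for _ in range(n_rides):
        rating = 5 if random.random() < 0.8 else 4  # same underlying quality
        if group == "B" and random.random() < biased_rider_share:
            rating -= 1  # a biased rider docks one star
        total += rating
    return total / n_rides

print("group A:", round(average_rating("A"), 2))
print("group B:", round(average_rating("B"), 2))
# The platform's math is neutral, yet group B's average drifts lower,
# and any downstream system that relies on ratings inherits the gap.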

The more widespread technology becomes, the greater the risk of bias. The impacts of algorithmic bias are already being felt and could affect anyone. Consider a company using an algorithm to read resumes and select the best candidate for a job opening.

“The algorithm may be trained based on the qualities that current employees of the company already have, such as education, location and specific skills,” said Selena Silva, a fourth-year community and regional development major and co-author of the paper. “If I apply to this job position and do not match up with the type of employee who already works there, I will be denied the job. A short-term impact would be me getting denied access to an interview. A long-term impact from the bias would be that the company continuously hires people who match the status quo, which prevents diversity.”
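The hiring scenario Silva describes can be sketched as a toy screener that scores applicants by how closely they resemble the current workforce. The features and threshold below are invented; the point is simply that a model trained on incumbents reproduces the status quo.

CURRENT_EMPLOYEES = [
    {"degree": "CS", "city": "SF", "skill": "Java"},
    {"degree": "CS", "city": "SF", "skill": "Java"},
    {"degree": "CS", "city": "SF", "skill": "Python"},
]

def similarity(candidate):
    """Average per-feature match rate against the current workforce."""
    per_employee = [
        sum(candidate[k] == emp[k] for k in candidate) / len(candidate)
        for emp in CURRENT_EMPLOYEES
    ]
    return sum(per_employee) / len(per_employee)

def screen(candidate, threshold=0.5):
    return similarity(candidate) >= threshold

# A qualified applicant with a different background is screened out,
# while a near-clone of the existing staff passes.
print(screen({"degree": "EE", "city": "Davis", "skill": "Rust"}))  # False
print(screen({"degree": "CS", "city": "SF", "skill": "Java"}))     # True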

Some of the algorithms used in the criminal justice system or by employers are made for profit by private companies and are minimally regulated. Potential safeguards against algorithmic bias include rigorously testing algorithms before they are used to make real-life decisions and, where possible, employing algorithms that are transparent. Engineers and programmers also need to stay constantly aware of the possibility of bias in the algorithms they build and of where that bias could be located.

“It should be possible to audit outcomes to search for biased outcomes,” Kenney said. “This can be done statistically because all digital decisions can be tracked and analyzed. This is the good thing about digitization. Everything can be examined, as it all leaves data tracks. Awareness is important, as is enforcement of the laws, in cyberspace.”
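An outcome audit of the kind Kenney describes can be as simple as comparing selection rates across groups in a decision log. The check below uses the four-fifths rule, a common disparate-impact threshold in employment analysis; the article itself does not name a specific test, so treat this as one plausible sketch.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs from a decision log."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """Flag disparate impact if any group's rate falls below 80% of the highest."""
    top = max(rates.values())
    return all(rate >= 0.8 * top for rate in rates.values())

log = ([("A", True)] * 60 + [("A", False)] * 40
       + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(log)
print(rates)                      # {'A': 0.6, 'B': 0.3}
print(passes_four_fifths(rates))  # False: 0.3 is below 0.8 * 0.6

Because digital decisions leave exactly this kind of log, the audit can be rerun continuously rather than as a one-time check.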

Link to article: https://theaggie.org/2019/06/04/how-digital-tools-reinforce-racial-and-ethnic-bias/