Bias in Artificial Intelligence

Artificial intelligence (AI) is front and center in technological development, both consumer and industrial. Personal assistants, computer vision systems, and marketing algorithms are just a few examples of how artificial intelligence is applied in modern technology. But what exactly is AI? What situations call for it, and what are its drawbacks?

First, let’s talk definitions. Artificial intelligence is, according to the Oxford dictionary, “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI is similar to the term physics: it’s a field of study. The same dictionary defines machine learning as “the use and development of computer systems that can learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.” Although the terms are related, machine learning is a specific, statistical method for achieving some level of artificial intelligence. One of the most useful things about machine learning is that it dramatically reduces the amount of manual coding an engineer has to do: it’s much easier to plug your data into a machine learning model than to hand-code the patterns yourself.
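
To make that concrete, here is a minimal sketch in Python using scikit-learn and its built-in Iris dataset (chosen only for illustration, not tied to any system discussed in this article) of what “plugging your data into a model” looks like in practice:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small, well-known dataset: 150 flower measurements across 3 species.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Instead of hand-writing rules about petal and sepal sizes,
# let the model infer the patterns from the data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

A handful of lines replaces what would otherwise be a brittle pile of hand-tuned if/else rules, which is exactly why this approach has become so popular.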

But because AI is fundamentally a method for analyzing data, its glaring weakness is the data that humans feed it. A key example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an AI system used in many US state courts to estimate the likelihood that a defendant will re-offend. Unfortunately, but predictably, the system was trained on biased data, and the resulting model disproportionately flagged Black defendants as high risk while assigning white defendants lower risk scores. Examples like this are common, but they all stem from one issue: human error. Everyone is biased, some more than others, and when humans are tasked with creating datasets, their decisions shape the resulting model and their biases are baked into its very construction.
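
The “baked in” point can be shown with a toy simulation. The sketch below uses entirely synthetic, hypothetical data (it has nothing to do with COMPAS’s actual inputs or methodology): two groups have identical underlying risk, but the historical labels flag one group more often, and a model trained on those labels faithfully reproduces the disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: a binary group attribute and a "true" risk score
# that is identically distributed in both groups.
group = rng.integers(0, 2, size=n)
true_risk = rng.normal(size=n)

# Biased historical labeling: at the same underlying risk,
# group 1 was flagged more often than group 0.
labels = (true_risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Even a simple model trained on these labels inherits the bias.
X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, labels)

preds = model.predict(X)
for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"Group {g}: flagged 'high risk' {rate:.1%} of the time")
```

Nothing in the model is “wrong” in a technical sense; it simply learns the pattern it was given. The bias entered through the labels, long before any algorithm ran.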

To combat this issue, the field needs to address a few systemic problems. Greater diversity on the teams that produce data and AI models can reduce mistakes like this: the more perspectives a team has at its disposal, the less its members’ biases overlap and the fewer ethical blind spots slip through. Additionally, individual engineers need to stay cognizant of where their data comes from and whether it is representative of their application, as in the sketch below.
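
As a simplified sketch of that last point, here is one way to sanity-check representativeness before training anything; the group names and population shares below are made up for illustration:

```python
import pandas as pd

# Hypothetical dataset: 1,000 records with a demographic group column.
dataset = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})

# Assumed reference values: the mix of the population the model will serve.
population_share = {"A": 0.55, "B": 0.30, "C": 0.15}

# Compare the dataset's group proportions against the population's.
sample_share = dataset["group"].value_counts(normalize=True)
for g, expected in population_share.items():
    observed = sample_share.get(g, 0.0)
    print(f"Group {g}: {observed:.1%} of dataset vs {expected:.1%} of population")
```

A check like this won’t catch every form of bias, but it surfaces the most basic failure mode: a dataset that simply doesn’t look like the people the model will make decisions about.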

AI is only as good as its data, and data is only as good as the people who collect it. As AI and machine learning continue to progress and lead innovation, it’s important that we not let the ethics of the situation fall out of sight.
