The problem of inclusion in Artificial Intelligence

"Although the mathematical and programming part of the models is fundamental, so is the bias behind their construction. This concept of bias is not a minor issue, and it has already affected large companies such as Amazon and Microsoft, damaging its image."
By Genaro Almaraz
Jun 13, 2022

Artificial intelligence has a link with psychology and another with society that is too important to ignore. Although the mathematical and programming side of the models is fundamental, so is the bias behind their construction. This concept of bias is not a minor issue, and it has already affected large companies like Amazon and Microsoft, damaging their image.

To ground this concept, let’s take the following example:

Suppose we want to build a facial recognition system. The artificial intelligence we develop will learn from the patterns in the input data we use for training, and if we do not collect that data adequately (considering only people of a certain age, race, or gender), we commit a serious fault that will exclude several segments of the population. This imbalance in the data, where each group of people can be a class or category, affects the generalization of the artificial intelligence model we develop.
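
To make the idea concrete, a first step is simply to measure how each group is represented in the training data. The following is a minimal sketch in Python; the group labels, dataset, and 10% threshold are hypothetical, chosen only for illustration:

    from collections import Counter

    def audit_group_balance(group_labels, warn_below=0.10):
        """Report the share of each demographic group in a training set.

        group_labels: one label per training example (an age bracket here,
        or any other category relevant to the application).
        warn_below: flag groups whose share falls under this fraction.
        """
        counts = Counter(group_labels)
        total = len(group_labels)
        for group, count in sorted(counts.items()):
            share = count / total
            flag = "  <-- under-represented" if share < warn_below else ""
            print(f"{group}: {count} examples ({share:.1%}){flag}")

    # Hypothetical facial-recognition training set, heavily skewed by age.
    labels = ["25-40"] * 800 + ["40-60"] * 150 + ["60+"] * 50
    audit_group_balance(labels)

An audit like this will not remove the bias by itself, but it makes the imbalance visible before the model is ever trained.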

The engineer behind the construction of the model described above has a background and history that, without the engineer being aware of it, let bias creep into the data collection.

According to a study conducted by Columbia University, a more heterogeneous working group is key to reducing the bias in the algorithms it builds.

As I mentioned, Amazon had a very significant case of artificial intelligence bias (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G): it developed a recruitment system that did not have the results the company hoped for, as it did not encourage diversity and inclusion. When analyzing the system in 2015, they realized that the model had been trained mainly on applications from men over a period of 10 years, which made women's resumes less valuable. A sexist artificial intelligence system developed by Amazon?
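
Amazon's finding also illustrates why a single aggregate metric can hide this kind of failure. Below is a minimal sketch of evaluating a model separately per group; the data and group names are hypothetical and have nothing to do with Amazon's actual system:

    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, groups):
        """Compute accuracy separately for each demographic group.

        A large gap between groups is a warning sign that the model
        learned a bias from its training data.
        """
        hits = defaultdict(int)
        totals = defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            totals[group] += 1
            hits[group] += int(truth == pred)
        return {g: hits[g] / totals[g] for g in totals}

    # Hypothetical evaluation data, for illustration only.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["men", "men", "men", "men", "women", "women", "women", "women"]
    print(accuracy_by_group(y_true, y_pred, groups))
    # {'men': 0.75, 'women': 0.25} -- a gap the overall accuracy would hide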

We have now seen a practical case of bias in artificial intelligence, so the question arises: what can we do to avoid it? Every application has a target audience, so focusing on that group of people and carefully analyzing the physical, psychological, and social aspects that are relevant to our model will be of great help (although training that is this specific can also develop bias over time).
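
One simple and admittedly partial remedy is to rebalance the training data so that under-represented groups carry comparable weight. Here is a minimal sketch using random oversampling; the dataset is hypothetical, and oversampling is only one option among several (it can cause overfitting on very small groups):

    import random
    from collections import defaultdict

    def oversample_to_balance(examples, groups, seed=0):
        """Randomly oversample each group up to the size of the largest one."""
        rng = random.Random(seed)
        by_group = defaultdict(list)
        for example, group in zip(examples, groups):
            by_group[group].append(example)
        target = max(len(items) for items in by_group.values())
        balanced = []
        for group, items in by_group.items():
            balanced.extend((x, group) for x in items)
            # Pad the smaller groups with random duplicates.
            extra = rng.choices(items, k=target - len(items))
            balanced.extend((x, group) for x in extra)
        rng.shuffle(balanced)
        return balanced

    # Hypothetical skewed dataset: 6 examples from group A, 2 from group B.
    examples = ["a1", "a2", "a3", "a4", "a5", "a6", "b1", "b2"]
    groups = ["A"] * 6 + ["B"] * 2
    balanced = oversample_to_balance(examples, groups)
    print(len(balanced))  # 12: both groups now contribute 6 examples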

No method against bias is perfect, and some bias will always remain. The important thing is to be aware of the problem and do everything possible to design fairer developments that reflect the complexity of global society. One such effort is "The Algorithmic Justice League" (https://www.ajl.org/), an initiative created by an MIT student, which I recommend reading about.

Genaro Almaraz

Master in Computer Science.