Can artificial intelligences be completely impartial?


Published by IT-technews

Artificial intelligence systems, often implemented with machine learning (a subset of AI), play an increasingly important part in our societies and, whether we realize it or not, in our daily lives. Given this growing influence, there is concern about how to make artificial intelligence more inclusive and accessible.

To build a safer future for all, minimizing bias and prejudice in artificial intelligence is crucial. Machine learning algorithms are shaped by their data and design, which are in turn defined by the teams that build these systems and decide how they should be trained.
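How skewed training data produces a skewed model can be shown with a minimal Python sketch. The group names, outcomes, and counts below are entirely made up for illustration; the "model" is just a per-group majority vote, not any real biometric system:

```python
from collections import Counter

# Hypothetical training set for a face-unlock feature: one group is
# heavily over-represented, the other has only a handful of noisy samples.
training_data = (
    [("light", "unlock")] * 80 + [("light", "fail")] * 5
    + [("dark", "unlock")] * 1 + [("dark", "fail")] * 4
)

def train_majority_model(data):
    """For each group, predict that group's most frequent outcome."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority_model(training_data)
# The well-represented group gets a sensible prediction ("unlock"),
# while the under-represented group's tiny, skewed sample makes the
# model predict "fail" for everyone in it.
print(model)
```

The bias here comes not from any malicious rule but purely from which data the team collected — exactly the kind of design decision the paragraph above describes.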

How important is bias in machine learning?

Felicity Hanley, Vice President of the Race & Ethnicity Steering Committee, gives us some personal examples of how she has experienced bias in machine learning systems: “Artificial intelligence is supposed to make life easier for all of us. Although it can do so, it can also amplify real-world sexist and racist biases. Some of my personal experiences with artificial intelligence include social media filters that make my skin appear whiter. Another example was with an old mobile that did not activate biometric facial recognition if the room was dark, although it did activate it for a friend with lighter skin in the same conditions”.

As artificial intelligence becomes ubiquitous in our lives, the potential for bias becomes greater. Matt Lewis, director of business research at NCC Group, comments: “There are many cases where artificial intelligence is used and we probably don’t realize it. The use of facial biometrics is well known and happening in numerous scenarios – not only for the authentication of our mobile phones, but also for surveillance systems.”

The UK government’s review of bias in algorithmic decision-making highlights the importance of this issue, as the report states that “it has become clear that we cannot separate the issue of bias in algorithms from the issue of bias in decision-making in general”. Kat Sommer, Public Affairs Lead at NCC Group, adds: “The report looked at financial services, and the example it mentions is credit scoring. Injustice occurs when people who don’t adhere to standard financial trajectories are treated unfairly because there is no data available to train those models.”
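The credit-scoring failure Sommer describes can be sketched in a few lines of Python. The records, profile labels, and scoring rule below are hypothetical, not drawn from any real lender; the point is only to show how "no data" silently becomes "worst score":

```python
# Hypothetical repayment history: every record belongs to one
# "standard" profile, so that profile dominates the data.
history = [("salaried", True)] * 50 + [("salaried", False)] * 10

def score(employment_type, history):
    """Score an applicant as the observed repayment rate of their profile.

    Profiles absent from the training data fall back to 0.0, so a
    'non-standard' applicant is penalized simply because no data
    about people like them was ever collected.
    """
    relevant = [repaid for etype, repaid in history if etype == employment_type]
    if not relevant:
        return 0.0  # no data -> worst possible score
    return sum(relevant) / len(relevant)

print(score("salaried", history))   # ~0.83, backed by 60 records
print(score("freelance", history))  # 0.0, unfairly low: zero records
```

A fairer design would at least distinguish "unknown" from "high risk" rather than collapsing both into the same low score.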
