Can artificial intelligences be completely impartial?
Artificial intelligence systems, often built with machine learning, a subset of AI, play an increasingly important role in our societies and, whether we realize it or not, in our daily lives. Given this growing influence, there are concerns about how to make artificial intelligence more inclusive and accessible.
To build a safer future for all, minimizing bias and prejudice in artificial intelligence is crucial. Machine learning algorithms are shaped by their data and their design, which in turn are defined by the teams that build these systems and decide how they should be trained, as the sketch below illustrates.
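The following is a minimal, illustrative sketch, not a real-world system: it uses entirely synthetic data and an assumed setup in which one group makes up only a small share of the training set and follows a different pattern than the majority. It shows how a model trained on such data can look accurate overall while performing poorly for the under-represented group.

```python
# Illustrative sketch with synthetic data: when one group dominates the
# training data, a model can fit the majority pattern and perform poorly
# on the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # One informative feature; the label rule is reversed for the minority
    # group, standing in for a group the training data does not represent well.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    if flip:
        y = 1 - y
    return x, y

# 950 majority samples vs. 50 minority samples (assumed imbalance).
x_maj, y_maj = make_group(950, flip=False)
x_min, y_min = make_group(50, flip=True)
X = np.vstack([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

model = LogisticRegression().fit(X, y)

print("accuracy on majority group:", model.score(x_maj, y_maj))
print("accuracy on minority group:", model.score(x_min, y_min))
```

Run as-is, the model scores highly on the majority group and close to zero on the minority group, even though its overall accuracy looks respectable, which is exactly the kind of hidden disparity that design and data choices can introduce.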
How important is bias in machine learning?
Felicity Hanley, Vice President of the Race & Ethnicity Steering Committee, gives us some personal examples of how she has experienced bias in machine learning systems: “Artificial intelligence is supposed to make life easier for all of us. Although it can do so, it can also amplify real-world sexist and racist biases. Some of my personal experiences with artificial intelligence include social media filters that make my skin appear whiter. Another example was with an old mobile that did not activate biometric facial recognition if the room was dark, although it did activate it for a friend with lighter skin in the same conditions”.
As artificial intelligence becomes ubiquitous in our lives, the potential for bias becomes greater. Matt Lewis, director of business research at NCC Group, comments: “There are many cases where artificial intelligence is used and we probably don’t realize it. The use of facial biometrics is well known and happening in numerous scenarios – not only for the authentication of our mobile phones, but also for surveillance systems.”
The UK government’s analysis of bias in algorithmic decision-making highlights the importance of this issue, as the report states that “it has become clear that we cannot separate the issue of bias in algorithms from the issue of biases in decision-making in general”. Kat Sommer, NCC Public Affairs Lead, adds: “The report looked at financial services, and the example it mentions is credit scoring. Injustice occurs when people who don’t adhere to standard financial trajectories are treated unfairly because there is no data available to train those models.”
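One simple way to surface the kind of unfairness the report describes is to compare a scoring model’s approval rates across groups. The sketch below is purely illustrative: the scores, groups, threshold and the `approval_rate_gap` helper are all hypothetical, not part of any report or real scoring system.

```python
# Illustrative fairness check: compare approval rates across groups for the
# decisions produced by a (hypothetical) credit-scoring model.
import numpy as np

def approval_rate_gap(scores, groups, threshold=0.5):
    """Return per-group approval rates and the largest gap between them."""
    approved = scores >= threshold
    rates = {g: float(approved[groups == g].mean()) for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical scores for applicants from two groups; group "B" stands in for
# applicants with thin or non-standard credit histories, who tend to receive
# lower scores simply because similar cases were missing from the training data.
scores = np.array([0.81, 0.72, 0.66, 0.58, 0.44, 0.39, 0.35, 0.28])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap = approval_rate_gap(scores, groups)
print("approval rate by group:", rates)
print("approval rate gap:", gap)
```

A large gap does not by itself prove discrimination, but it flags exactly the situation Sommer describes: people outside the standard financial trajectory being scored down because the model never saw enough cases like theirs.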