Comprehending Algorithmic Bias and Strategies for Fostering Trust in Artificial Intelligence

DOI: 10.4018/979-8-3693-1762-4.ch014

Abstract

Fairness is threatened by algorithmic bias: systematic and unfair disparities in machine learning outcomes. Amazon's AI-driven hiring tool, for example, favoured men. AI promised data-driven, impartial decision-making, yet it has revealed prejudice across sectors, perpetuating systemic imbalances. Algorithmic bias arises from both data and design: biased historical data, feature selection, and pre-processing can all skew an algorithm, and human biases can harm development itself. Algorithmic prejudice affects finance, education, employment, and criminal justice. Addressing this bias requires diverse and representative data collection, insight into complicated “black box” algorithms, and attention to legal and ethical considerations. Despite these challenges, techniques for eliminating algorithmic bias are emerging. This chapter uses secondary data to study algorithmic bias: it defines the term, traces its origins and its prevalence in data, presents examples, and discusses open issues. The chapter also addresses bias reduction and elimination to make AI a more reliable and impartial decision-maker.
Chapter Preview

Introduction

The Dartmouth Summer Research Project on Artificial Intelligence, held in 1956, is largely seen as the seminal event that established artificial intelligence as a distinct subject.

The project lasted roughly six to eight weeks and consisted mostly of an extended brainstorming session. Eleven scientists and mathematicians originally intended to attend; although not all of them did, more than ten additional individuals came for brief periods of time.

During the early 1950s, the topic of “thinking machines” was referred to by other titles, including cybernetics, automata theory, and sophisticated information processing. The multitude of names indicates the diverse range of intellectual perspectives.

In 1955, John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, decided to assemble a group to clarify and advance ideas about thinking machines. He chose the name 'Artificial Intelligence' for the emerging discipline, partly for its neutrality: it steered clear of a narrow focus on automata theory and of cybernetics, with its strong emphasis on analogue feedback. It also spared him the prospect of having to recognise Norbert Wiener as an authoritative figure or engage in arguments with him (Moor, 2006).

Today, machine learning (ML) and artificial intelligence (AI) are extensively utilised across various sectors of our economy to make critical decisions that have wide-ranging consequences. Examples include employers using ML algorithms to evaluate job applications, financial institutions using ML tools to assess individual creditworthiness for loan approvals, retailers employing recommendation algorithms for product suggestions, doctors relying on algorithms for medical decision support, and some courts in the United States utilizing algorithms like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) for predicting recidivism (Van Dijck, 2022). Initially, ML algorithms were seen as a way to reduce historical bias and discrimination, with the promise of objective, data-driven decision-making (Fu et al., 2020).

For inclusive growth, can we envision a fair and equitable world where access to quality healthcare, nutritious food, and basic human necessities is available to everyone, regardless of age, gender, or social class? The question arises whether data-driven technologies like artificial intelligence and data science can achieve this goal or if existing biases that affect real-world outcomes will seep into the digital realm as well. This highlights the significant potential risk of biased AI. The journey to managing and mitigating this risk starts with comprehending how bias can creep into algorithms and why detecting it can be challenging. While AI holds the promise of creating a better and more equitable world, if left unregulated, it could also perpetuate historical inequalities. Fortunately, businesses can take steps to minimize this risk and use AI systems and decision-making software with confidence (Best & Rao, 2021).

Before delving deeper into this topic, let's establish the definitions of Algorithm, Algorithm Bias, Machine Learning, and Artificial Intelligence. Fundamentally, an algorithm can be defined as a collection of instructions or principles employed in computational or problem-solving tasks, typically carried out by a computing device (Köchling & Wehner, 2020). Within the field of computation, algorithms are transformed into software applications that have the capability to analyse incoming data using predetermined rules and produce corresponding output. Algorithms are integral to decision-making and advisory procedures across various domains of society (Silva & Kenney, 1960). Algorithmic bias is the term used to describe specific characteristics of an algorithm that result in unfair or subjective outcomes, displaying a preference for one group or entity over another (LibertiesEU, 2021).

Key Terms in this Chapter

Artificial Intelligence: Artificial intelligence (AI) refers to the replication of human intelligence in computer systems and machines. This is accomplished by developing algorithms, software, and hardware that enable these systems to perform tasks often associated with human intelligence. Such tasks include problem-solving, acquisition of knowledge, comprehension of natural language, recognition of patterns, and decision-making informed by data and acquired knowledge (Unni et al., 2023).

Algorithm: An algorithm is a systematic procedure for performing a computation or solving a problem. Whether implemented in hardware or software, algorithms operate as an ordered series of instructions that execute predefined operations in sequence (Gillis, 2023).

Machine Learning: Machine learning, a specialised domain within artificial intelligence (AI) and computer science, is dedicated to using data and algorithms to replicate the learning process observed in humans. Through this iterative approach, the system's precision is gradually enhanced (IBM, n.d.).

Algorithm Bias: Algorithmic bias occurs when a computer system makes repeated, systematic errors that lead to unjust results, such as favouring one arbitrary group of users over another. It is a common concern today as artificial intelligence (AI) and machine learning (ML) applications permeate more and more of our daily lives (Awan, 2023).
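One simple way to detect the kind of systematic disparity described above is to compare favourable-outcome rates across groups. The sketch below computes a demographic parity difference for a toy set of binary model decisions; the data, group labels, and function names are invented for illustration and do not come from the chapter.

```python
# Hypothetical sketch: measuring the gap in favourable-outcome rates
# between two groups of users. All data below are invented.

def favourable_rate(decisions, groups, group):
    """Share of members of `group` who received a favourable (1) decision."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(decisions, groups):
    """Absolute gap in favourable-outcome rates between groups A and B."""
    a = favourable_rate(decisions, groups, "A")
    b = favourable_rate(decisions, groups, "B")
    return abs(a - b)

# Toy decisions: 1 = favourable (e.g. shortlisted), 0 = unfavourable.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 - 0.20 = 0.60
```

A gap of zero would mean both groups receive favourable decisions at the same rate; the large gap here is the kind of pattern auditors look for before investigating a system's data and design.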

Decision Making: Decision making entails the act of choosing a specific option or path of action from a range of available alternatives (Unni, 2020).

Bias: Bias is an unfair inclination or prejudice in favour of or against something, leading to unbalanced outcomes in various contexts.

Fairness: The concept of fairness varies across disciplines, and there is no universally accepted definition. Developers struggle to express fairness in mathematical terms, and there are trade-offs between different definitions, making it challenging for businesses to adopt a specific definition (Silberg & Manyika, 2019).
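To make the trade-off between definitions concrete, the hypothetical sketch below evaluates two common fairness criteria, demographic parity (equal selection rates) and equal opportunity (equal true-positive rates), on the same invented predictions. The point is that a classifier can satisfy one criterion while violating the other; all numbers and labels are made up for the example.

```python
# Hypothetical sketch: two fairness criteria can disagree on the same
# predictions. All data below are invented for illustration.

def rate(values):
    return sum(values) / len(values)

def selection_rate(preds, groups, g):
    """Share of group g predicted positive (used for demographic parity)."""
    return rate([p for p, grp in zip(preds, groups) if grp == g])

def true_positive_rate(preds, labels, groups, g):
    """Share of truly-positive members of group g predicted positive
    (used for equal opportunity)."""
    return rate([p for p, y, grp in zip(preds, labels, groups)
                 if grp == g and y == 1])

preds  = [1, 1, 0, 0, 1, 1, 0, 0]
labels = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Demographic parity holds: both groups have a 0.50 selection rate...
dp_gap = abs(selection_rate(preds, groups, "A")
             - selection_rate(preds, groups, "B"))

# ...yet equal opportunity is violated: TPR is 1.00 for A but 0.50 for B.
eo_gap = abs(true_positive_rate(preds, labels, groups, "A")
             - true_positive_rate(preds, labels, groups, "B"))

print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")
```

Choosing which gap to minimise is a policy decision, not a purely technical one, which is why a single mathematical definition of fairness has proved elusive.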
