Ethical Rationality in AI: On the Prospect of Becoming a Full Ethical Agent

Jonas Holst
DOI: 10.4018/978-1-7998-4894-3.ch005

Abstract

Taking its starting point in a discussion of the concept of intelligence, the chapter develops a philosophical understanding of ethical rationality and discusses its role and implications for two ethical problems within AI: first, the so-called “black box problem,” which is widely discussed in the AI community, and second, a more complex one which will be addressed as the “Tin Man problem.” The first problem concerns opacity, bias, and explainability in the design and development of advanced machine learning systems, such as artificial neural networks, whereas the second is more directly associated with the prospect, for humans and AI alike, of becoming full ethical agents. Based on Aristotelian virtue ethics, it will be argued that intelligence in human and artificial forms should approximate ethical rationality, which entails a well-balanced synthesis of reason and emotion.

Introduction

The purpose of this chapter is to define and discuss the concept of ethical rationality in relation to two central problems in the growing research areas of machine learning and artificial intelligence (AI). Drawing on ancient Greek virtue ethics, it will be argued that ethics, in its concern for how to live, act, and think well, is founded on a rationality that needs to be defined further in order to clarify what role it may play within AI ethics and how it is distinct from intelligence.

It is precisely out of the conceptual conundrum of intelligence, which does not in itself have ethical considerations or goals built into it, that the first of the two problems in AI arises. Whether intelligence is defined as the capacity to accomplish complex goals (Tegmark, 2017, p. 50) or as doing the right thing at the right time (Bryson, in press), neither definition contains any explicit link to ethics, and their possible ethical implications would need to be clarified further; for instance, what is meant by the term “right.” Even if there were some third, exhaustive definition of intelligence, it would arguably entail ethical principles, intentions, or considerations only if some form of rationality were introduced.

As Nick Bostrom (2014) has observed, “more or less any level of intelligence could in principle be combined with more or less any final goal” (p. 107). This seems to imply that AI could be developed to its utmost realization without any serious thought being given to what is right and wrong, good and bad. One way of tying artificial intelligence to an ethical goal is to secure its explainability, which will be presented as a first step in the development of ethical rationality. When explainability cannot be achieved, we are faced with the so-called “black box problem,” which has become one of the principal concerns in the AI research community, as it appears to stand in the way of a rational, transparent, and unbiased use of technology: the black box is meant to convey the image of a machine working behind opaque “walls” without shedding light on what it is, or has been, doing. It is not merely a hotly debated topic among academics; it is also being discussed by AI practitioners and designers. In a recent debate between NYU professor Gary Marcus and pioneering practitioner of neural networks Yoshua Bengio, the question of the black box problem, and how it might be solved by putting reasoning into a machine, was raised several times.1
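To make the image of the black box concrete, the following minimal sketch (in Python, using scikit-learn; the synthetic dataset and model settings are illustrative assumptions, not drawn from the chapter) contrasts an opaque neural network verdict with one common explainability technique, permutation importance, which at least gives an account of which inputs mattered to the model overall:

```python
# A minimal sketch of the "black box" contrast described above: an opaque
# model whose individual predictions come with no reasons attached, next to
# a simple post-hoc explanation. Data and settings are illustrative only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for any real-world decision problem.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# The "black box": a small neural network. Its weights can be inspected,
# but they do not amount to an account of *why* a given case was decided.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                      random_state=0).fit(X, y)
print(model.predict(X[:1]))  # a verdict, with no reasons attached

# One explainability technique: permutation importance estimates how much
# each input feature contributes to the model's overall performance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Permutation importance is, of course, only a crude surrogate for the kind of reasoned account the chapter has in view, but it illustrates what “shedding light” on a black box minimally amounts to in practice.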

The second problem discussed is related to what was from the beginning considered the main challenge of AI, namely developing artificial general intelligence comparable to human-level intelligence (McCarthy, Minsky, Rochester, & Shannon, 2006/1955). In the chapter, the connection between the two problems will be interpreted as one between foundation and further development: artificial general intelligence could hardly be developed without being founded on some form of rationality which can explain what, how, and why it acts as it does. However, even if the first problem could at some point be solved, the second will likely keep the research community occupied for decades, probably without yielding any clear-cut solution. Given that human-level intelligence involves rational ways of seeking what is good and avoiding what is bad, the second problem can be associated with the process of becoming a “full ethical agent,” a goal which may not be achievable without reference to emotions and the significant role they play in human cognition, decision-making, and care.

In so far as this same problem can be addressed from a machine’s point of view, it might tentatively be called the “Tin Man problem.” As is well known from the tale of The Wizard of Oz, the Tin Man was made, or remade, without a heart; that is, he apparently lacked emotions, although he expressed his desire to get a heart in order to feel and express love. Whether emotions, as we humans know them, can be recreated artificially remains an open question. Yoshua Bengio’s teacher, Geoffrey Hinton, has claimed to be 99.9 percent sure that it can be done.2 From an ethical standpoint, though, it would not be enough to produce and embed emotions in AI agents in order to make them ethical; a further difficulty consists in finding ways of connecting and attuning those emotions to an artificially created rationality.

Key Terms in this Chapter

Machine Learning: The process by which machines “learn” from the vast amounts of data they are fed. The technology is mostly associated with algorithms and artificial neural networks that are optimized so that they can combine data, detect patterns, and produce novel output at speeds far beyond what human-level intelligence can match.
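As a concrete illustration of this learning-as-optimization, here is a minimal sketch (plain Python with NumPy; the data and learning rate are illustrative assumptions) in which a single weight is adjusted step by step until the model has “detected” the pattern in its data:

```python
# A minimal, illustrative sketch of "learning" as optimization: one
# parameter is nudged repeatedly so the model's predictions fit the data
# it is fed. Real machine-learning systems scale this idea to millions
# of parameters; the data and learning rate here are assumptions.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])   # inputs
y = 2.0 * X                          # the pattern to be detected: y = 2x

w = 0.0                              # the model's single adjustable weight
for step in range(100):
    error = w * X - y                  # how far predictions miss the data
    gradient = 2 * np.mean(error * X)  # direction that reduces the error
    w -= 0.1 * gradient                # nudge the weight against the error

print(round(w, 3))                   # ~2.0: the pattern has been "learned"
```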

Ethics: A philosophical discipline which studies how, and what it means, to live well. Elaborating critically on the work of Plato, who declared the good to be the supreme idea in the world, Aristotle was the first to write a treatise on ethics, in which he argued that the acquisition of virtues is paramount for humans to act and think well. Today, ethics is present in practically all fields of knowledge, including the life sciences (cf. bioethics).

Accountability: The status of being accountable and responsible for something, such as actions or products. Ethically speaking, it entails minding and explicitly clarifying the relevant context, causes, and implications of that for which one is accountable.

Explainability: A research paradigm which prioritizes making AI intelligible and interpretable in order to promote safety, transparency, and trust.

Artificial Intelligence (AI): The term can refer both to the creation of intelligences, which operate as software applications (bots) or machines (robots), and to these intelligences themselves. In both senses of the word, “artificial” should be understood as a creative aspect of intelligence (as in “art”), not as something fake or phony.

Bias: An attitude comparable to a form of prejudice, which may be built into certain ways of thinking or may pertain to the way in which data is treated.

Rationality: Known in ancient Greek philosophy by the concept of logos, which covers reason, speech, and argument. It expresses itself first and foremost in the human capacity for giving an account of the causes and goals inherent in action. The concept is highly contested in modern philosophy, where it is also associated with deduction, deliberation, and critical assessment and thinking.
