Formalization of Ethical Decision Making: Implementation in the Data Privacy of Wearable Robots

Sofia Almpani, Petros Stefaneas, Panayiotis Frangos
DOI: 10.4018/IJEACH.320488
Abstract

As automation in robotics and artificial intelligence increases, a growing amount of ethical decision-making will need to be automated. However, ethical decision-making raises novel challenges for designers, engineers, ethicists, and policymakers, who will have to explore new ways to realize this task. For example, engineers building wearable robots should take privacy aspects and their different context-based scenarios into consideration when programming decision-making procedures. This in turn requires ethical input in order to respect norms concerning privacy and informed consent. The presented work focuses on the development and formalization of models that aim to ensure, in a provable way, correct ethical behavior of artificial intelligent agents, extending and implementing a logic-based proving calculus. This leads to a formal theoretical framework of moral competence that could be implemented in artificial intelligent systems in order to formalize certain parameters of ethical decision-making and ensure safety and justified trust.

Introduction

As autonomous artificial intelligent (AI) systems take up a progressively prominent role in our daily lives, they will undoubtedly sooner or later be called on to make significant, ethically charged decisions and actions (Bringsjord et al., 2006). In recent years, the issue of ethics in artificial intelligence and robots has gained great attention, and many important theoretical and applied results have been derived toward developing ethical systems (Tzafestas, 2018). But how could a robot or any AI agent be considered ethical? Among the requirements are a broad capability to envisage the consequences of its own decisions, as well as an ethical policy with rules to test each possible decision/consequence, so as to choose the most ethical scenario (Danaher, 2019; Tzafestas, 2018). The challenge is how to guarantee that robots will always exhibit ethically correct behavior as defined by the ethical code declared by their human supervisors.
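The decision procedure sketched above, generating candidate actions, envisaging their consequences, and testing each consequence against a rule-based ethical policy, can be illustrated with a minimal sketch. All names here (`EthicalRule`, `choose_action`, the toy wearable-robot rules) are illustrative assumptions, not part of the article's formal calculus:

```python
# Minimal sketch of rule-based ethical action selection: each candidate
# action's predicted consequence is scored against a weighted ethical
# policy, and the highest-scoring action is chosen.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EthicalRule:
    name: str
    weight: float                          # relative importance of the rule
    satisfied: Callable[[Dict], bool]      # test on a predicted consequence


def choose_action(actions: List[str],
                  predict: Callable[[str], Dict],
                  policy: List[EthicalRule]) -> str:
    """Return the action whose predicted consequence best satisfies the policy."""
    def score(action: str) -> float:
        consequence = predict(action)
        return sum(r.weight for r in policy if r.satisfied(consequence))
    return max(actions, key=score)


# Toy wearable-robot scenario: upload raw sensor data vs. keep it on-device.
policy = [
    EthicalRule("respect_privacy", 2.0, lambda c: not c["shares_raw_data"]),
    EthicalRule("informed_consent", 3.0, lambda c: c["has_consent"]),
]

def predict(action: str) -> Dict:
    # Stand-in for the agent's ability to envisage consequences.
    return {
        "upload_raw": {"shares_raw_data": True, "has_consent": False},
        "store_local": {"shares_raw_data": False, "has_consent": True},
    }[action]

print(choose_action(["upload_raw", "store_local"], predict, policy))  # store_local
```

The weights are a simple stand-in for the priority ordering among norms; a provable framework such as the one the article develops would replace this numeric scoring with logical inference over the rules.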

Academic research and real-life incidents of AI system failures and misuse have indicated the need for employing ethics in software development (Bringsjord et al., 2006). Nevertheless, studies on methods and tools to address this need in practice are still lacking, resulting in a growing demand for AI ethics as a part of software engineering (Vakkuri et al., 2019). But how can AI ethics be integrated into engineering projects when they are not formally considered? There has been some work on the formalization of ethical principles in AI systems (L. A. Dennis et al., 2015). Previous studies that attempt to integrate norms into AI agents and design formal reasoning systems have focused on: ethical engineering design (Flanagan et al., 2008; Robertson et al., 2019; Winfield et al., 2019; Wynsberghe, 2012), norms of implementation (Hofmann, 2012; Sisk et al., 2020), moral agency (Cunneen et al., 2019; Floridi & Sanders, 2004), mathematical proofs for ethical reasoning (Bringsjord et al., 2006), logical frameworks for rule-based ethical reasoning (Ågotnes & Wooldridge, 2010; Arkin, 2009; Iba & Langley, 2011), reasoning in conflict resolution (Pereira & Saptawijaya, 2007), and inference to apply ethical judgments to scenarios (Blass & Forbus, 2015).

One of the categories of AI ethics is Ethics by Design, the incorporation of ethical reasoning abilities as a part of system behavior, such as in ethical robots (Vakkuri et al., 2019). In this work, assuming that an AI agent can be capable of ethical agency, the purpose is to enable AI agents to reason ethically (L. A. Dennis et al., 2015). This includes taking societal and moral norms into consideration; ranking the respective priorities of norms in various contexts; explaining the agent's reasoning; and securing transparency and safety (Dignum, 2018). These systems are often built to assist ethical decision-making by people, identifying the ethical principles that a system should not violate (L. Dennis et al., 2016).
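The idea of ranking norm priorities per context, and explaining the ordering that was applied, can be sketched as follows. The contexts, norm names, and functions are hypothetical examples chosen for the wearable-robot setting, not the article's formalization:

```python
# Sketch of context-dependent norm prioritization with a simple
# explanation facility: the same norms are ordered differently
# depending on the operating context, and the agent can report
# which ordering it used.
from typing import List

# Illustrative priority orderings (highest priority first).
CONTEXT_PRIORITIES = {
    "normal_use": ["privacy", "consent", "data_minimization", "assistance"],
    "medical_emergency": ["assistance", "consent", "privacy", "data_minimization"],
}


def rank_norms(context: str) -> List[str]:
    """Return the norms ordered from highest to lowest priority for a context."""
    return CONTEXT_PRIORITIES[context]


def explain(context: str) -> str:
    """Produce a human-readable account of the ordering applied."""
    order = " > ".join(rank_norms(context))
    return f"In context '{context}', norms are prioritized: {order}"

print(explain("medical_emergency"))
```

In a medical emergency, for instance, the sketch demotes privacy below assistance, mirroring the article's point that the same norms carry different weights in different context-based scenarios.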
