Conceptualizing Policy in Value Sensitive Design: A Machine Ethics Approach


Steven Umbrello
DOI: 10.4018/978-1-7998-4894-3.ch007

Abstract

The value sensitive design (VSD) approach to designing emerging technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology, or instrumentally as a tool. The parts of VSD's principled approach would then aim to discern the policy requirements that any given technological artifact under consideration implicates. Yet little to no consideration has been given to how laws, policies, and social norms engage within VSD practices or, conversely, to how the interactional nature of the VSD approach can, in turn, influence those directives. This gap is exacerbated when considering machine ethics policies that have global consequences outside their spheres of development. This chapter begins with the VSD approach and aims to determine how policies come to influence how values can be managed within VSD practices. It shows that the interactional nature of VSD permits and encourages existing policies to be integrated early on and throughout the design process.

Introduction

The varied influences that artificial intelligence systems and robotics have on society have moved out of the realm of speculation and into reality. Algorithmic trading agents, medical diagnostic systems, driverless cars, and smart home assistants – to name a few – already have substantial and unignorable effects on the lives of both direct stakeholders (users, designers, companies, etc.) and indirect stakeholders (environments, bystanders, etc.). Their sociotechnicity – i.e., their inextricable link to the social environments in which they are designed and used – makes their study critical if their design and deployment are to be responsible. For this reason, considerable attention has been directed toward the ethical understanding of these systems and the search for actionable guidelines and best practices (Dignum, 2019). As a result, numerous principles, guidelines, recommendations, and values have been proposed to govern such systems, with a resulting risk of confusion as to which set to choose, thus delaying much-needed progress in making such principles actionable (Floridi et al., 2018). The next turn in AI ethics, then, is how to translate abstract philosophical and legal principles and values into design requirements that engineers can understand and implement.

Multiple approaches have emerged that consider the social embeddedness of technologies and their impacts. At the core of many of these methodologies is the engagement and elicitation of stakeholders, whether they are directly or indirectly implicated by a technology's design. Approaches such as universal design (Ruzic & Sanford, 2017; van den Hoven, 2017), inclusive design (Gregor, Sloan, & Newell, 2005; Hyppönen, Kemppainen, Gill, Slater, & Poulson, 2000), sustainable design (Fallan, 2015; Lockton, Harrison, & Stanton, 2016; Winkler & Spiekermann, 2019), participatory design (Bødker, Kensing, & Simonsen, 2009; Ehn, 2016), and value sensitive design (Friedman & Hendry, 2019; Umbrello, 2020a; van den Hoven & Manders-Huits, 2009), among others, have been constructed and proposed. Although these methodologies are disparate in many respects, they all aim toward the goal of responsible research and innovation (RRI).

Originally developed within the field of human-computer interaction, value sensitive design (VSD) begins from the premise that technology is not value-neutral; rather, it embodies the values of stakeholders – whether direct stakeholders such as users and designers, or indirect ones such as the environment – and that social contexts and technologies co-vary (Friedman, Hendry, & Borning, 2017; van den Hoven & Manders-Huits, 2009). As a starting point, then, the VSD approach aims to explicitly design technologies for stakeholder values – with an emphasis on moral values – in a manner that successfully maps the values deemed critical and ensures the robustness of sociotechnical systems (Friedman & Hendry, 2019; Umbrello, 2019b). What differentiates VSD from other design approaches, then, is its explicit emphasis on moral values and their inherent embeddedness in technologies (see Friedman & Hendry, 2019).

VSD has traditionally prioritized values that emphasize human well-being, human dignity, justice, welfare, and human rights as its central concern (Friedman, Kahn, Borning, & Huldtgren, 2013). The approach is considered ‘principled’ because it assumes objective moral grounds from which these values spring, grounds that are independent of whether any particular individual or group subscribes to such values (e.g., the belief in and practice of racial eugenics by a group does not a priori make racial eugenics a morally acceptable practice). Still, VSD maintains that the expression of such values in any particular culture, or by any particular individual, can vary greatly (Friedman et al., 2017; Umbrello, 2020a). The ethical objectivism that VSD affirms permits it to be readily integrated into existing design practices across sociocultural contexts, although it is not without objections (Davis & Nathan, 2014; Umbrello, 2018a, 2020).
