Designed to Seduce: Epistemically Retrograde Ideation and YouTube's Recommender System

Fabio Tollon
Copyright: © 2021 | Pages: 12
DOI: 10.4018/IJT.2021070105

Abstract

Up to 70% of all watch time on YouTube is due to the suggested content of its recommender system. This system has been found, by virtue of its design, to promote conspiratorial content. In this paper, the author firstly critiques the value neutrality thesis regarding technology, showing it to be philosophically untenable. This means that technological artefacts can influence what people come to value (or perhaps even embody values themselves) and change the moral evaluation of an action. Secondly, he introduces the concept of an affordance, borrowed from the literature on ecological psychology. This concept makes salient how technologies come to solicit certain kinds of actions from users, making such actions more or less likely and in this way influencing the kinds of things one comes to value. Thirdly, he critically assesses the results of a study by Alfano et al., making use of the literature on affordances introduced earlier to shed light on how these technological systems come to mediate our perception of the world and influence action.
Article Preview

1. Introduction

When thinking about technology, it is often natural to assume that it is instrumentally valuable. For a system to be instrumentally valuable is for it to be valuable in virtue of serving as a means to some valuable end. While perhaps prima facie satisfactory, this understanding of technology obscures the way artefacts can mediate our experiences (Verbeek, 2005, 2006). With Artificial Intelligence (AI) becoming part of many decision-making processes, it is especially important that we attempt to align technological and moral progress (Mittelstadt & Floridi, 2016).

The instrumentalist (or value-neutral) view of technology does not sufficiently account for the active role that technological artefacts can play in our interactions with the world and in the beliefs that we come to hold. Technology has the ability to influence what we come to value. This does not necessarily mean that technological artefacts have value in themselves, but rather that technology can influence the values we end up endorsing (Klenk, 2020; van de Poel & Kroes, 2014). It is therefore important that we create and use technological systems that promote socially beneficial values. To bring about these positive outcomes, there is a demand on the developers of AI to align their systems with the goals of artificial intelligence for social good (hereafter AI4SG) (Floridi et al., 2018; Floridi, Cowls, King, & Taddeo, 2020; Hagendorff, 2020; Peters, Vold, Robinson, & Calvo, 2020; Taddeo & Floridi, 2018). AI4SG is:

The design, development, and deployment of AI systems in ways that (i) prevent, mitigate or resolve problems adversely affecting human life and/or the wellbeing of the natural world, and/or (ii) enable socially preferable and/or environmentally sustainable developments. (Floridi et al., 2020: 1773-1774)

My focus in this paper will relate to (ii) in the above characterisation of AI4SG. My argument will proceed as follows. Firstly, I will introduce the value neutrality thesis regarding technology, showing its “strong” version to be untenable. I will then outline the weak neutrality thesis, which allows for technology to affect the moral evaluation of an action. Secondly, I will introduce the concept of an affordance, which helps illuminate the ways technology can influence our actions. Thirdly, in light of the aforementioned, I will analyse the results of a recent paper by Alfano et al. (2020), in which the authors show that YouTube’s recommender system may be involved in promoting suboptimal ideation in users. This occurs through bottom-up technological seduction, whereby technological systems make use of aggregated user data in order to guide and predict behaviour. This can be especially problematic in the case of recommender systems more generally, as such systems have the potential to promote socially harmful content.
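To make the idea of bottom-up seduction via aggregated behavioural data more concrete, the following is a minimal, purely illustrative sketch of an item-to-item recommender in Python. It is not YouTube’s actual algorithm, and the data, names, and engagement-weighting scheme are all hypothetical assumptions: candidate videos are ranked by how often they are co-watched with the current video, weighted by average watch time, so that high-engagement content rises to the top regardless of its epistemic quality.

# Illustrative sketch only: a toy item-to-item recommender that ranks
# candidate videos by co-watch counts aggregated across users, weighted
# by average watch time. This is NOT YouTube's actual algorithm; all
# data and names here are hypothetical.
from collections import defaultdict

# Hypothetical aggregated behavioural data: (user, video) watch sessions.
watch_log = [
    ("u1", "news_clip"), ("u1", "conspiracy_doc"),
    ("u2", "news_clip"), ("u2", "conspiracy_doc"),
    ("u3", "news_clip"), ("u3", "cat_video"),
]

# Hypothetical engagement signal the platform optimises for (minutes watched).
avg_watch_minutes = {"conspiracy_doc": 42.0, "cat_video": 3.0, "news_clip": 8.0}

def recommend(seed_video, log, engagement, top_n=2):
    """Rank videos co-watched with `seed_video`, weighted by engagement."""
    # Build each user's watch history from the aggregated log.
    history = defaultdict(set)
    for user, video in log:
        history[user].add(video)

    # Count how often other videos co-occur with the seed video.
    co_watch = defaultdict(int)
    for videos in history.values():
        if seed_video in videos:
            for v in videos - {seed_video}:
                co_watch[v] += 1

    # Score = popularity among similar users * expected engagement.
    scores = {v: n * engagement.get(v, 1.0) for v, n in co_watch.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

if __name__ == "__main__":
    # High-engagement content dominates the ranking, regardless of its
    # epistemic quality -- the structural point at issue in the paper.
    print(recommend("news_clip", watch_log, avg_watch_minutes))
    # -> ['conspiracy_doc', 'cat_video']

The design choice doing the work in this toy example is the final scoring line: nothing in the pipeline represents the truth or epistemic quality of a video, only aggregated behaviour, which is precisely the sense in which such a system can guide users without any top-down editorial intent.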

Specifically, I will claim that the actions afforded by YouTube’s recommender system are problematic: by virtue of its intentionally designed properties, the algorithm promotes epistemically corrupt content in the form of conspiratorial videos. My focus will be both on the design of the system and on the consequences that follow from it. Furthermore, I will use the concept of an affordance to better understand how recommender systems solicit the attention of users, and to show that such solicitation is more often than not morally valenced.
