War and Peace: Ethical Challenges and Risks in Military Robotics
Racquel D. Brown-Gaston, Anshu Saxena Arora
Copyright: © 2021 |Pages: 12
DOI: 10.4018/IJIIT.2021070101

Abstract

The United States Department of Defense (DoD) designs, constructs, and deploys social and autonomous robots and robotic weapons systems. Military robots are designed to follow the rules and conduct of the professions or roles they emulate, and it is expected that ethical principles are applied in alignment with such roles. The application of these principles appears paramount during the COVID-19 global pandemic, in which substitute technologies are crucial for carrying out duties while humans are constrained by safety restrictions. This article examines the ethical implications of the use of military robots. The research assesses ethical challenges faced by the United States DoD regarding the use of social and autonomous robots in the military. The authors provide a summary of the current status of lethal autonomous and social military robots, the ethical and moral issues related to their design and deployment, a discussion of policies, and a call for international discourse on the appropriate governance of such systems.

Introduction

The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. (Asimov, 2004)

As the United States’ development of autonomous military robots has progressed toward self-directed decision-making, an inevitable question arises: can these systems ever meet the threshold of moral agency, such that they would be deemed ethically capable of determining, through informed processes based on prior knowledge and situations, when they may legitimately deprive a human of life? Social military robots have become a norm around the world for non-combat purposes. Their roles during the COVID-19 pandemic, for instance, highlight the urgency for the military to further its goal of deploying robots in place of human assets (Bendett, 2020). In the United States Army, non-combat robots have joined humans on the battlefield and have served in many capacities: scouting enemy fire from around a corner; scanning buildings to spot threats; carrying teams’ ammunition, water, gear, and batteries; using thermal cameras and chemical sensors to report back on city sewer systems; and even scouring for explosives or enemy fighters in the dark (South, 2020). Conversely, lethal autonomous weapons, such as those in the Navy’s Sea Mob program, have been successfully tested not only to go on the offensive and strategically choose targets, but to do so without being instructed by a human (Fryer-Biggs, 2019). Similarly, the Army’s Joint Air-to-Ground Missile system will be able to select vehicles to attack without human input, and another of its systems will be able to point guns at selected targets (Fryer-Biggs, 2019). In the Navy, the Phalanx, positioned on the decks of midsize and large ships, fires 75 rounds a second and continuously corrects itself as it zeroes in on targets such as incoming missiles and aircraft; it does all of this, while keeping count of its rounds, without direct human input (Fryer-Biggs, 2019). This is undoubtedly a leap from earlier autonomous weaponry, which was only permitted to take defensive strikes against incoming targets (Fryer-Biggs, 2019).

Artificial intelligence (AI) deals with intelligent and emotional interactions between artificial systems and their users. Artificial emotional intelligence (AEI) focuses on enabling robots to recognize and express emotions and thereby become social-moral agents in human–robot interaction (HRI). AI and HRI can aid humans in a variety of non-combat tasks. However, the overarching principle of Asimov’s Laws of Robotics becomes more pronounced when AI and HRI are applied in a combat zone. Military robots are seen as social agents essential to the accomplishment of future missions. Under Asimov’s laws, a robot must not harm a human, even at the cost of its own preservation. Yet if robots are programmed to spare the lives of “good” humans (based on the ethical values of the end user and under the accepted ‘Defense of Others’ principle), ‘ethics’ becomes an important AI/HRI research area for military robotics.
