Introduction
The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. (Asimov, 2004)
As the United States’ development of autonomous military robots has progressed toward independent decision-making, an inevitable question arises: can these machines ever meet the threshold of moral agency, at which they would be deemed ethically capable of determining, through informed processes grounded in prior knowledge and situational context, when they may legitimately deprive a human of life? Social military robots have already become the norm worldwide for non-combat purposes. Their roles during the COVID-19 pandemic, for instance, highlight the urgency for militaries to apply robots in place of human assets (Bendett, 2020). In the United States Army, non-combat robots have joined humans on the battlefield and served in many capacities: scouting enemy fire from around a corner; scanning buildings to spot threats; carrying teams’ ammunition, water, gear, and batteries; using thermal cameras and chemical sensors to report on city sewer systems; and even scouring for explosives or enemy fighters in the dark (South, 2020). Autonomous lethal weapons, by contrast, such as those in the Marine Corps’ Sea Mob program, have been successfully tested not only to go on the offensive and strategically choose targets, but to do so without instruction from a human (Fryer-Biggs, 2019). Similarly, the Army’s Joint Air-to-Ground Missile system will be able to select vehicles to attack without human input, and another of its systems will be able to point guns at selected targets (Fryer-Biggs, 2019). As for the Navy, the Phalanx, positioned on the decks of midsize and large ships, fires 75 bullets a second and continuously corrects itself as it zeros in on targets such as incoming missiles and airplanes; it does all of this, and keeps count of its bullets, without direct human input (Fryer-Biggs, 2019).
This is undoubtedly a leap from earlier autonomous weaponry, which was permitted only to strike defensively against incoming targets (Fryer-Biggs, 2019).
Artificial intelligence (AI) deals with intelligent and emotional interactions between artificial systems and their users. Artificial emotional intelligence (AEI) seeks to model human emotion so as to give robots the capability to express emotions and act as social-moral agents in human–robot interaction (HRI). AI and HRI can aid humans in varied non-combat tasks. However, the overarching principle of Asimov’s Laws of Robotics becomes more pronounced when AI and HRI enter a combat zone. Military robots are seen as social agents essential to the accomplishment of future missions. Under Asimov’s laws, a robot will not harm a human, even in self-preservation. If, however, a robot is programmed to spare the lives of “good” humans (as judged by the ethical values of the end user and under the accepted ‘Defense of Others’ principle), ethics becomes an important AI/HRI research area for military robotics.