Tyler Roberts

Autonomous Machines

CSC 540 Individual Research Paper

Introduction: The use of robots in society was once purely a topic of science fiction, but it has recently become a reality. This individual research paper covers the social, professional, and ethical issues that arise alongside the growing market of military and, to a lesser extent, commercial autonomous machines.

There are four major rules that define whether a robot is fully autonomous: first, it must be able to gain information about its environment; second, it must be able to work for an extended period without human intervention; third, it must be able to move either all or part of itself throughout its operating environment without human assistance; and fourth, it must avoid situations that are harmful to people, property, or itself unless those are part of its design specifications. Many of the robots described in this paper would not be considered fully autonomous for various reasons; however, they come extremely close and will be grouped with the ones that are.
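The four rules above can be read as a simple conjunction of capabilities. The following is an illustrative sketch only; the class and attribute names are hypothetical and not drawn from any real robotics framework.

```python
# Hypothetical checklist of the four autonomy criteria described above,
# modeled as boolean capabilities on a robot.
from dataclasses import dataclass

@dataclass
class Robot:
    senses_environment: bool  # 1. gains information about its environment
    works_unattended: bool    # 2. operates for extended periods without humans
    moves_itself: bool        # 3. moves all or part of itself without assistance
    avoids_harm: bool         # 4. avoids harm to people, property, or itself

    def is_fully_autonomous(self) -> bool:
        """A robot counts as fully autonomous only if all four criteria hold."""
        return all([self.senses_environment, self.works_unattended,
                    self.moves_itself, self.avoids_harm])

# A teleoperated bomb-disposal robot fails criterion 2 (human in the loop):
packbot_like = Robot(True, False, True, True)
print(packbot_like.is_fully_autonomous())  # False
```

This framing also shows why the paper groups "nearly autonomous" machines with fully autonomous ones: most deployed systems satisfy three of the four criteria and fail only the human-intervention requirement.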

Currently, autonomous machines make up just a fraction of global warfare. Even though they may be a small part, they have recently had a heavy impact on the way we fight wars in other countries, and they have been in use as far back as WWII. The U.S. has conducted extensive research and development on military autonomous machines, with the RQ-1/MQ-1 Predator being the most noticed and most covered in the media. The Predator drone is not the only machine raising ethical issues in today's society: DARPA has been developing and buying machines capable of impressive terrain navigation and military tasks that could change the way wars are fought. Imagine a world where all wars are fought by autonomous machines; there would be no more families waiting for their loved ones to come home from war, and human lives would be only a tiny fraction of the casualties relative to the past. This is a double-edged sword, and there will always be two debating sides on whether it hurts or helps society more. A report prepared for the US Department of the Navy, Office of Naval Research, states:

"The worries include: where responsibility would fall in cases of unintended or unlawful harm, which could range from the manufacturer to the field commander to even the machine itself; the possibility of serious malfunction and robots gone wild; capturing and hacking of military robots that are then unleashed against us; and lowering the threshold for entering conflicts and wars, since fewer US military lives would then be at stake."[1]

These are the types of questions asked by people who are skeptical about the rise of autonomous military machines, and they are very reasonable questions to which we need concrete answers and laws.

History of Autonomous Machines: The military use of autonomous machines could be considered to have started in WWII and the Cold War with tracked mines and Soviet teletanks. Today, several military robots have been or are being deployed and developed by the armies of many countries, and it is now possible to envision a future where warfare is fought by automated military machines. The U.S. military funds extensive research and development of automated systems. The most prominent systems currently in use are unmanned aerial vehicles (the IAI Pioneer and RQ-1 Predator), which can be armed with air-to-ground missiles and remotely operated from a command center in reconnaissance roles. Much of this new technology can be credited to DARPA. The Defense Advanced Research Projects Agency is an agency of the United States Department of Defense responsible for developing new technologies for military use, and it has funded the development of many technologies that have had a major effect on the world. In 2004 and 2005, DARPA hosted competitions that invited private companies and universities to develop unmanned ground vehicles able to navigate rough terrain in the Mojave Desert, with a final prize of $2 million [2].

Types of Military Autonomous Machines: Robots presently move on land, in the air, in bodies of water, and even in space. Land/ground mobility uses legs, treads, wheels, snake-like locomotion, and hopping. Flying robots use propellers, jet engines, and wings. Underwater robots usually resemble submarines, or boats when operating on the surface. Some vehicles capable of moving through more than one medium or terrain have been built.

-Ground Robots: The U.S. Army has implemented a new, commonly used type of autonomous and semi-autonomous ground vehicle. These small vehicles can be carried by a soldier in a backpack, such as the PackBot. The PackBot uses cameras and communication equipment for input and output, and may include arms. It is designed to find and detonate IEDs, as well as to perform reconnaissance. Because the PackBot is small, it can enter buildings, report on conditions, and trigger or disarm bombs.

Typical armed robot vehicles include the Talon SWORDS, which can be equipped with machine guns, grenade launchers, or anti-tank rocket launchers as well as cameras and other sensors, and the newer MAARS. While vehicles such as SWORDS and MAARS are able to autonomously navigate toward specific targets using their global positioning systems, at present the firing of any on-board weapons is done by a soldier located a safe distance away.

-Aerial Robots: One of the most commonly used types of aerial robot is the unmanned aerial vehicle (UAV). These robots were developed by various branches of the military for their broad usefulness, and they have multiple uses: unmanned reconnaissance and carrying cargo or weapons. Very small aircraft robots called Micro Air Vehicles (MAVs) can carry a camera for reconnaissance, serving the same purpose as UAVs but on a micro scale.

Although reconnaissance UAVs are not directly threatening, the other commonly used aerial robots, the RQ-1/MQ-1 Predator and the MQ-9 Reaper, are armed. The very concerning statistics of Predator and Reaper strikes will be covered in more detail in the “Moral/Legal/and Ethical Issues” section, but the basics are covered here. As of March 2009, the U.S. Air Force had 195 MQ-1 Predators and 28 MQ-9 Reapers in operation. Since its first flight in July 1994, the MQ-1 series has accumulated over 1,000,000 flight hours [3].

There have also been developments toward autonomous fighter jets and bombers. The use of autonomous fighters and bombers to destroy enemy targets is especially promising because of the lack of training required for robotic pilots, and autonomous planes are capable of performing maneuvers that could not otherwise be done by human pilots (due to high G-forces). The plane designs do not require a life support system, and the loss of a plane does not mean the loss of a pilot. However, the largest drawback of these robots is their inability to accommodate non-standard conditions [4].

-Marine Robots: The US Navy also has a robotics program, which includes surface ships as well as Unmanned Underwater Vehicles (UUVs). The purposes of these robots include surveillance, reconnaissance, and anti-submarine combat. These robots, like all other types, come in varied sizes. Boeing's Long-term Mine Reconnaissance System (LMRS) is dropped into the ocean from a telescoping torpedo launcher aboard the SV Ranger to begin its underwater surveillance test mission. LMRS uses two sonar systems, an advanced computer, and its own inertial navigation system to survey the ocean floor for up to 60 hours. A larger UUV, the Seahorse, is advertised as being capable of ‘independent operations’, which may include the use of lethal weapons.

-Immobile/Fixed Robots: On the other hand, there are immobile or stationary weapons, both on land and on ships, which merit the designation of robot despite their lack of mobility. An example of such a system is the Navy's Phalanx Close-In Weapon System (CIWS). CIWS is a rapid-fire 20mm gun system designed to protect ships at close range from missiles that have penetrated other defenses. The system is mounted on the deck of a ship; it is equipped with both search and tracking radars and the ability to rotate a turret in order to aim the guns. It automatically performs search, detection, tracking, threat evaluation, firing, and kill-assessment of targets.
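The CIWS engagement sequence just described can be viewed as a fixed pipeline of stages with a human-relevant decision point at threat evaluation. The following is a minimal sketch under that assumption; the stage names follow the text, but the threat score and threshold are hypothetical simplifications, not details of the real system.

```python
# Hypothetical sketch of a CIWS-style engagement pipeline: the stages run
# in order, and the pipeline stops at threat evaluation if the target is
# judged non-threatening, so the firing stages are never reached.
STAGES = ["search", "detect", "track", "evaluate", "fire", "assess"]

def engage(threat_score: float, threshold: float = 0.8) -> list:
    """Run the pipeline; abort after 'evaluate' if threat_score < threshold."""
    completed = []
    for stage in STAGES:
        completed.append(stage)
        if stage == "evaluate" and threat_score < threshold:
            break  # target judged non-threatening; do not fire
    return completed

print(engage(0.3))   # stops after evaluation: no fire, no kill-assessment
print(engage(0.95))  # full engagement through kill-assessment
```

The point of the sketch is that "fully automatic" here still means a rigid, pre-programmed decision rule, which is exactly what makes such systems tolerable as fixed defensive weapons and controversial everywhere else.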

Moral/Legal/and Ethical Issues: If you had not already concluded that there is a giant bucket filled with moral, legal, and ethical issues involving military use of these robots, then you might want to reassess your sanity. Arguments over the legal and ethical legitimacy of particular weapons (poison as a weapon of war, for example, or the crossbow) go back very far in the history of warfare. This section tries to touch on many of the issues that come with these autonomous machines.

-Autonomous Machines Violating Laws: A very popular question that comes up when considering the ethics of autonomous machines is, “Who is responsible if the use of an autonomous weapon results in a violation of international humanitarian law?” As a machine, an autonomous weapon could not itself be held responsible for such a violation. This raises the question of who would be legally responsible if the use of an autonomous weapon results in a war crime: the programmer, the manufacturer, or the commander who deploys the weapon? And if responsibility cannot be determined as required by international humanitarian law, is it legal or ethical to deploy such systems at all? So many of these questions remain unanswered [5].

-Drone Strikes: Of all the types of military autonomous robots, the aerial ones attract the most attention, specifically the Predator and Reaper unmanned drones. The success rates of these drones are very questionable. Below is a summary of US drone strikes as of January 2014:

  • Total strikes: 381
  • Total reported killed: 2,537 - 3,646
  • Civilians reported killed: 416 - 951
  • Children reported killed: 168 - 200
  • Total reported injured: 1,128 - 1,557 [6]

The attacks greatly increased starting in 2008 and have spiked up and down from year to year ever since. A Bureau of Investigative Journalism (BIJ) report states that of all the drone attack victims since 2004, more than 76% of the dead fall into a legal grey zone, 22% are confirmed civilians (including 5% minors), and only the remaining 1.5% are high-profile targets [7].

-Just War: Issues of just war arise with the development of autonomous military machines. Questions like “Does the option of military robots make it easier for one nation to wage war, since they help reduce risk and friendly casualties, which both bear a heavy political cost?” are very important and need to be answered. A nation with a heavily developed robot army would lose far less in a war against a country with none, which would make starting wars less risky for larger, wealthier countries [8].

Conclusion: Autonomous robots, both on and off the battlefield, will need to make choices in the course of fulfilling their missions. Some of those choices will have potentially harmful consequences for humans and other agents worthy of moral consideration. Even though the capacity to make moral judgments can be quite complex, and even though roboticists are far from assembling the collection of affective and cognitive skills necessary to build artificial moral agents (AMAs), systems with limited moral decision-making abilities are more desirable than ‘ethically blind’ systems. The military's comfort with the robots it deploys, and ultimately the comfort of the public, will depend upon a belief that these systems will honor basic human values and norms in their choice of actions. Given the prospect that robotic systems can reduce the loss of personnel during combat, one can presume that the development of autonomous robotic fighting machines will proceed. However, if semi-autonomous and autonomous robotic systems are deployed as lethal weapons, commanders will need to be confident that the systems will wield their destructive might only on designated targets.

The challenge for the military will reside in preventing the development of lethal robotic systems from outstripping the ability of engineers to assure the safety of these systems. Implementing moral decision-making faculties within robots will proceed slowly. While there are aspects of moral judgment that can be isolated and codified for tightly defined contexts, moral intelligence for autonomous entities is a complex activity dependent on the integration of a broad array of discrete skills. Robots initially will be built to perform specified tasks. However, as computer scientists learn to build more sophisticated systems that can analyze and accommodate the moral challenges posed by new contexts, autonomous robots can and will be deployed for a broad array of military applications. So for the foreseeable future and as a more reasonable goal, it seems best to attempt to program a virtuous partial character into a robot and ensure it only enters situations in which its character can function appropriately. [1]

References:

[1]

[2]

[3]

[4]

[5]

[6]

[7]

[8]