
Don't Ban Armed Robots in the U.S.


What if armed drones were not just piloted remotely by humans in faraway bunkers, but were programmed, under certain circumstances, to select and fire at targets entirely on their own? This may sound like science fiction, and deployment of such systems is, indeed, far off. But research programs, policy decisions, and legal debates are taking place now that could radically affect the future development and use of autonomous weapon systems.

To many human rights NGOs, joined this week by a new international coalition of computer scientists, the solution is to preemptively ban the development and use of autonomous weapon systems (which a recent U.S. Defense Department directive on the topic defines as one “that, once activated, can select and engage targets without further intervention by a human operator”). While a preemptive ban may seem like the safest path, it is unnecessary and dangerous.

No country has yet publicly evinced plans to use fully autonomous weapons specifically designed to target humans. Some countries—including the United States—have long used near-autonomous weapon systems targeting other machines, such as defensive systems aboard some naval vessels, without which, for example, a ship would be helpless against a swarm of small missiles coming at speeds far faster than human reaction times.  

Broader application of autonomous weapon decision-making, including possibly against human targets, however, will emerge incrementally. Rather than a sudden shift from human control to machine control, in some contexts more and more aspects of seeking, analyzing, and firing upon targets will be automated, as the human role shifts gradually from full command, to partial command, to oversight or possible override, and so on.  This evolution is inevitable, as machine sensors, analytics, and learning improve; as states demand greater protection for their military personnel and for civilians; and as similar automation technologies show that they are capable of driving cars or performing robotic surgery—activities with potentially lethal consequences—with greater safety than human operators.

Proponents of banning autonomous weapons through a global treaty argue, among other things, that these systems risk dehumanizing warfare and thereby eroding ethical constraints on it, and that artificial intelligence will never be capable of meeting the requirements of the laws of war (a.k.a. international humanitarian law) to distinguish between combatants and noncombatants and to avoid excessive collateral damage. As a moral matter, many of them do not believe that decisions to intentionally kill should be delegated to machines; as a practical matter, they believe that these systems may operate in unpredictable ways or be used in irresponsible, or even in the most ruthless, ways.

A treaty ban is unlikely to work, however, especially in constraining the states or actors most inclined to abuse these weapons. It would also hand them the advantage of possessing such weapons if other states were barred even from R&D into the technologies that enable such systems, as well as into the autonomous defenses needed to counter them. And because the automation of weapons will increase gradually, step by step toward full autonomy, such a ban would be far harder to design or enforce than its proponents assume.

In any event, if the goal is to reduce suffering and protect human lives, a ban may be counterproductive. Besides the self-protective advantages to military forces that might use them, it is quite possible that autonomous machine decision-making may, at least in some contexts, reduce risks to civilians by making targeting decisions more precise and firing decisions more controlled. True, believers in artificial intelligence have overpromised before, but we also know for certain that humans are limited in their capacity to make sound and ethical decisions on the battlefield, as a result of sensory error, fear, anger, fatigue, and so on. As a moral matter, states should strive to use the most sparing methods and means of war, and at some point that may involve autonomous weapon systems. No one can say with certainty how much automation technologies might gradually reduce the harms of warfare, but it would be morally wrong not to seek such gains as can be had, and especially pernicious to ban research and development into such technologies at the front end, before knowing what benefits they might offer.

This is not to say that autonomous weapons warrant no special regulation, or that the United States should heedlessly rush to develop them. After all, the United States’ interest has never been in trigger-pulling robot soldiers chasing down their human enemies (the cartoonish “killer robots” of the ban campaign) or in taking humans out of targeting for its own sake.

We’ve argued at length in a policy paper, “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can,” that a much better solution than a global ban is to adapt the existing laws of war to deal with autonomous systems. The United States should set very high internal standards for legally and ethically assessing any research and development programs in this area, especially with regard to baseline requirements for target discrimination and the avoidance of excessive collateral damage, and for continually assessing, legally and ethically, any use of such systems. Importantly, it should discuss publicly, as much as possible, how it conducts such reviews and according to what metrics.

To make this work, the United States cannot go it alone; it should work with a coalition of like-minded partners to set standards and develop best practices. Some allies have already shown an interest in pursuing such a policy. Recently, for example, the British government responded to calls for a ban by declaring instead its view that existing international law already provides a strong framework for regulating the future development of such systems, and that responsible states should engage in cooperative development of common standards and best practices within a law-of-war framework.

Autonomous weapon systems are not inherently unethical or unlawful, and they can be made to serve the ends of law on the battlefield. Provided we start now, addressing the challenges they raise—as any new military technologies do—through existing normative frameworks is a much better solution than pushing for a global treaty ban.