Monday, November 11, 2024

The moral obligation of using AI to reduce atrocities

By Ronald Arkin

Let me unequivocally state: The status quo with respect to innocent civilian casualties is utterly and wholly unacceptable. I am not in favor of Lethal Autonomous Weapon Systems (LAWS), nor of lethal weapons of any sort. I would hope that LAWS would never need to be used, as I am against killing in all its manifold forms. But if humanity persists in entering into warfare, which is an unfortunate underlying assumption, we must protect the innocent noncombatants in the battlespace far better than we currently do. Technology can and should be used toward that end. Is it not our responsibility as scientists to look for effective ways to reduce man's inhumanity to his fellow man through technology? Research in ethical military robotics can and should be applied toward achieving this goal.

I have studied ethology, the behavior of animals in their natural environment, as a basis for robotics for my entire career, drawing on studies ranging from frogs and insects to dogs, birds, wolves and human companions. Nothing has been more depressing than studying human behavior on the battlefield. The commonplace slaughter of civilians in conflict over millennia gives rise to my pessimism about reforming human behavior, yet provides optimism about the prospect of robots exceeding human moral performance in similar circumstances.

I have the utmost respect for our young men and women in the battlespace, but they are placed into situations in which no human was ever designed to function. This is exacerbated by the tempo at which modern warfare is conducted. Given this pace and the resultant stress, expecting widespread compliance with international humanitarian law seems unreasonable, and perhaps unattainable, by flesh-and-blood warfighters.

I believe the judicious design and use of LAWS can save noncombatant lives. If properly developed and deployed, this technology can and should be used toward achieving that end, not simply toward winning wars. We must position this humanitarian technology at the point where war crimes, carelessness and fatal human error occur and lead to noncombatant deaths. Unmanned systems will never be perfectly ethical on the battlefield, but I am convinced that they can ultimately perform more ethically than human soldiers.

I am not averse to a ban should we be unable to reach the goal of reducing noncombatant casualties; but for now we are better served by a moratorium, at least until we can agree upon definitions of what we are regulating and determine whether this technology can indeed deliver humanitarian benefits. A preemptive ban ignores the moral imperative to use technology to reduce the persistent atrocities and mistakes that human warfighters make. At the very least, it is premature.

Alternative considerations include the following:

  • Regulate the use of autonomous weapons instead of prohibiting them entirely.
  • Consider restrictions in well-defined circumstances rather than an outright ban and stigmatization of these weapon systems.
  • Do not make decisions based on unfounded fears: set aside pathos and hype and focus on the real technical, legal, ethical and moral implications.

Numerous factors point to autonomous robots soon being able to outperform humans on the battlefield from an ethical perspective:

  • They are able to act conservatively, as they do not need to protect themselves when certainty of target identification is low (a minimal decision rule along these lines is sketched after this list).
  • The eventual development and use of a broad range of sensors will render robots better equipped than humans for battlefield observations.
  • They can be designed without emotions that would otherwise cloud their judgment or result in anger and frustration with ongoing battlefield events.
  • They avoid the human psychological problem of “scenario fulfillment,” which contributed to the downing of an Iranian airliner by the USS Vincennes in 1988.
  • They can integrate more information from more sources far faster than a human possibly could in real time before responding with lethal force.
  • When working in a team of combined human soldiers and autonomous systems, they have the potential to independently and objectively monitor ethical behavior on the battlefield by all parties and to report any infractions that might be observed.
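To make the first factor concrete, here is a minimal sketch in Python of a conservative engagement rule of the kind that bullet describes. Everything in it is a hypothetical illustration (the Assessment fields, the threshold values, and the engagement_permitted function are assumptions of mine, not a description of any real or proposed system); the point is only that a machine can be given a default of holding fire that no self-preserving human could be asked to adopt.

    # Hypothetical sketch only: a conservative fire/no-fire rule.
    # All names, thresholds, and inputs are illustrative assumptions,
    # not any fielded or proposed system.
    from dataclasses import dataclass

    @dataclass
    class Assessment:
        target_confidence: float   # 0.0-1.0, certainty of hostile identification
        collateral_risk: float     # 0.0-1.0, estimated risk to noncombatants

    # Because the platform need not protect itself, it can default to holding
    # fire whenever identification is uncertain or noncombatant risk is
    # non-negligible; a human under fire cannot afford to wait for this.
    CONFIDENCE_FLOOR = 0.99    # illustrative value
    COLLATERAL_CEILING = 0.01  # illustrative value

    def engagement_permitted(a: Assessment) -> bool:
        """Return True only when both certainty and safety criteria are met;
        every other case resolves to the conservative default: do not fire."""
        return (a.target_confidence >= CONFIDENCE_FLOOR
                and a.collateral_risk <= COLLATERAL_CEILING)

    # Example: high but imperfect identification still yields "hold fire".
    print(engagement_permitted(
        Assessment(target_confidence=0.95, collateral_risk=0.0)))  # False

The design choice the sketch highlights is asymmetry: every uncertain case falls through to restraint, which is exactly the posture a self-preserving combatant cannot be expected to maintain.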

LAWS should not be considered an all-purpose military solution. On the contrary, their use should be limited to specific circumstances. Current thinking recommends:

  • Specialized missions where bounded morality applies, e.g., room clearing, counter-sniper operations, or perimeter protection in the DMZ.
  • High-intensity inter-state warfare, not counter-insurgencies, to minimize the likelihood of civilian casualties.
  • Deployment in concert with soldiers, not as their replacement. Human presence on the battlefield should be maintained.

Smart autonomous weapon systems may enhance the survival of noncombatants. Human Rights Watch considers the use of precision-guided munitions in urban settings to be a moral imperative. LAWS may, in effect, be mobile precision-guided munitions, carrying a similar moral imperative for their use. Such weapons have the possibility of deciding when to fire and, more importantly, when not to fire. They should be designed with overrides to ensure meaningful human control. Moreover, they can employ fundamentally different tactics while assuming far more risk than human warfighters in terms of protecting noncombatants and assessing hostility and hostile intent. In essence, these systems can more effectively operate on a philosophy of "First do no harm" rather than "Shoot first and ask questions later."
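The override requirement can also be read as a gate in which holding fire is the default and lethal force requires both a machine recommendation and an explicit human grant of authority. The sketch below is again purely hypothetical: the Decision enum, the veto flag, and the authorization flag are assumed inputs of my own devising, not any real interface or doctrine.

    # Hypothetical sketch only: a "first do no harm" gate with a human
    # override, illustrating meaningful human control. All names and
    # inputs are illustrative assumptions.
    from enum import Enum, auto

    class Decision(Enum):
        HOLD_FIRE = auto()
        REQUEST_AUTHORIZATION = auto()
        FIRE = auto()

    def gated_decision(machine_recommends_fire: bool,
                       human_veto: bool,
                       human_authorization: bool) -> Decision:
        # A standing human veto dominates everything else.
        if human_veto:
            return Decision.HOLD_FIRE
        # The default posture is "do not fire".
        if not machine_recommends_fire:
            return Decision.HOLD_FIRE
        # Lethal force requires an explicit human grant of authority.
        if not human_authorization:
            return Decision.REQUEST_AUTHORIZATION
        return Decision.FIRE

    # Example: the machine alone cannot release a weapon.
    print(gated_decision(machine_recommends_fire=True,
                         human_veto=False,
                         human_authorization=False))
    # Decision.REQUEST_AUTHORIZATION

Under this reading, autonomy narrows the machine's discretion to the restrained side of the decision: it may decline to fire on its own, but never fire on its own.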

Building such systems is not a short-term goal, but rather part of a medium- to long-term agenda addressing many challenging research questions. However, exploiting bounded morality within a narrow mission context can improve performance with respect to preserving noncombatant life, and thus warrants robust research on humanitarian grounds. Other researchers have begun related work on at least four continents. Nonetheless, many daunting questions regarding lethality and autonomy remain unresolved. Discussions regarding regulation must be based on reason, not fear. Until these questions are resolved, a moratorium is more appropriate than a ban; only then can a careful, graded introduction of the technology into the battlespace be ensured.

The status quo is unacceptable with respect to noncombatant deaths. It may be possible to save noncombatant lives through the use of this technology, and these efforts should not be prematurely terminated by a preemptive ban. AI can be used to save innocent lives where humans may and do fail. Nowhere is this more evident than on the battlefield.

RONALD ARKIN
is Regents’ Professor and director of the Mobile Robot Laboratory at the Georgia Institute of Technology’s School of Interactive Computing in Atlanta.