The Struggle to Ban Killer Robots

MILITARISM, 26 May 2014

Kristin Bergtora Sandvik, Nicholas Marsh and Maral Mirshahi – Bulletin of the Atomic Scientists


The Campaign to Stop Killer Robots was launched in April 2013 with the objective of achieving a ban on the development, production, and deployment of lethal autonomous weapons. The same month, Christof Heyns, the United Nations’ special rapporteur on extrajudicial, summary or arbitrary executions, called for a moratorium on the development and deployment of such weapons while an international commission considered the issue. Within a remarkably short period of time, the campaign has achieved significant traction. Every month brings a flurry of media reports, international conferences, and policy events dedicated to the issue. The campaign is succeeding at something very important: bringing politics to bear on what are, at the most basic level, sets of computer algorithms designed to accomplish particular military tasks.

From May 13 to 16, a meeting of UN experts in Geneva under the auspices of the Convention on Certain Conventional Weapons will discuss questions relating to emerging technologies in lethal autonomous weapon systems. At this stage, there will inevitably be much debate over the scope and meaning of any future prohibition. The campaign is still being shaped; if it is to succeed, a group of states must coalesce around a shared understanding of the problem and its solutions over the next couple of years. What is the way forward?

Most important, the Campaign to Stop Killer Robots needs to strike a balance between establishing a wide-ranging prohibition and pragmatically accommodating the interests of potential state supporters. Would-be signatories need to be reassured that they won’t have to give up something they perceive to be militarily essential. If it is not possible to persuade states that a prohibition is needed, the campaign will most likely not find the support required to form a coalition and negotiate a successful treaty. To move into the next phase, the nongovernmental organizations that make up the campaign need to agree among themselves on a set of key issues.

A primary task will be to clarify what, exactly, should be subject to new laws and regulations, what type of rules (if any) should apply, how they should be implemented, and under whose oversight. So far, much attention has been given to whether lethal autonomous weapons are unlawful under international humanitarian law. The jury is still out on the complex issue of illegality, and the campaign must think strategically about the emphasis put on existing legal norms. If lethal autonomous weapons are not unlawful, but one wants them to be, a ban is needed. If they are unlawful, but one wants to end the discussion once and for all, a ban would still be useful. At the same time, agreement on international law alone does not resolve the matter.

One of the most contentious issues is likely to concern the threshold at which a weapon system is deemed to be “fully autonomous.” The minimum level that is set would determine which systems are banned and which are allowed to continue in operation. Setting the threshold of autonomy is going to involve significant debate, because machine decision-making exists on a continuum. A key task for the campaign will be to create consensus on this issue among both nongovernmental organizations and the states that would have to negotiate and then implement a ban.

Furthermore, the world must be convinced that a ban is realistic. Those who dispute the need for a ban often argue that it’s too late because the technology is already in the pipeline; that a ban is unfeasible given the difficulty of defining automated and autonomous processes; or, in a version of the “regulate the use, not the technology” argument, that a ban is unnecessary. The campaign must address each objection.

Finally, there is the challenge of reaching a public that is already debating the use of artificial intelligence technology for civilian purposes. While the public imagination is most easily captured by fantasies of menacing-looking hardware, the problem with lethal autonomous weapons is one of decision-making and software development. The campaign needs to provide a convincing analysis of what distinguishes killer software from non-killer software, and find effective ways to communicate this distinction to governments and citizens worldwide.

In sum, the campaign must balance engagement in technical expert conversations with active participation in public debate. Identifying and arguing for broad ethical principles while keeping the objective narrow appears to be the most feasible strategy, along with insisting that the development of lethal autonomous weapons is not inevitable. Political choices and priorities will determine what kind of algorithms result.

_________________________________

Nicholas Marsh is a research fellow at the Peace Research Institute Oslo, where he works on the small arms trade and armed violence. In addition to research and writing, he has co-developed data visualizations on these subjects.

Maral Mirshahi is a project assistant at the Peace Research Institute Oslo and the Norwegian Center for Humanitarian Studies. She also researches Iranian nuclear policy.

Kristin Bergtora Sandvik, a senior researcher at the Peace Research Institute Oslo, works on humanitarian technology. She is the center director of the Norwegian Centre for Humanitarian Studies and a member of the advisory board for UAViators, a humanitarian unmanned aerial vehicle network.

Go to Original – thebulletin.org

 
