Autonomous Weapons Will Be Tireless, Efficient, Killing Machines—And There Is No Way to Stop Them

MILITARISM, 26 Nov 2018

Andreas Kirsch | Quartz – TRANSCEND Media Service

Unstoppable weapons—just add AI.

The world’s next major military conflict could be over quickly.

Our human soldiers will simply not stand a chance. Drones and robots will overrun our defenses and take the territory we are standing on. Even if we take out some of these machines, more will soon arrive to replace them, newly trained on our reactions to their last offensive. Our own remote-controlled drones will be outmaneuvered and destroyed, as no human operator can react quickly enough to silicon-plotted attacks.

This isn’t a far-off dystopian fantasy, but a soon-to-be-realized reality. In May, Google employees resigned in protest over the company helping the US military develop AI capabilities for drones. (The company ultimately decided to shelve the project.) More recently, 2,400 researchers vowed not to develop autonomous weapons. Many AI researchers and engineers are reluctant to work on autonomous weapons because they fear that their development could kick off an AI arms race: Such weapons could eventually fall into the wrong hands, or they could be used to suppress civilian populations.

How could we stop this from happening?

The first option is a non-proliferation treaty banning autonomous weapons, similar to the non-proliferation treaty for nuclear weapons. Without such a treaty, parties that voluntarily abstain from developing autonomous weapons on moral grounds will be at a decisive disadvantage.


That’s because autonomous weapons have many advantages over human soldiers. For one, they do not tire. They can be more precise, they can react faster, and they can operate outside the parameters in which a human would survive, such as long stints in desert terrain. They do not require years of rearing and training, and they can be produced at scale. At worst they get destroyed or damaged, not killed or injured, and nobody mourns them or asks for their bodies to be returned from war.

It is also easier to justify military engagements to the public when autonomous weapons are used. Because human losses on the attacker’s side are minimal, armies can keep a low profile. Recent engagements by the US and EU in Libya, Syria, and Yemen have relied heavily on drones, bombing campaigns, and cruise missiles. Parties without such weapons will be at a distinct disadvantage when their soldiers have to fight robots.

But even if all countries signed an international treaty to ban the development of autonomous weapons, as they once did for nuclear non-proliferation, it would be unlikely to prevent their creation. This is because there are stark differences between the two modes of war.

Two properties make the 1968 nuclear non-proliferation treaty work quite well: the first is the lengthy ramp-up time needed to deploy nuclear weapons, which gives other signatories time to react to violations and enact sanctions; the second is effective inspections.

To build nuclear weapons, you need enrichment facilities or weapons-grade plutonium. You cannot feasibly hide either, and even when hidden, traces of plutonium are easily detected during inspections. It takes years, considerable know-how, and specialized tools to create all the special-purpose parts. Moreover, all of that know-how has to be developed from scratch because it is secret and subject to import-export controls. And even then, you still need to develop missiles and the means of deploying them.

But it’s the opposite with autonomous weapons.

To start, they have a very short ramp-up time: Different technologies that could be used to create autonomous weapons already exist and are being developed independently in the open. For example, tanks and fighter planes carry plenty of sensors and cameras to record everything that is happening, and pilots already interface with their plane through a computer that reinterprets their steering commands. Combine these systems with AI, and suddenly they have become autonomous weapons.

AI research is progressing faster and faster as more money is poured in by both governments and private entities. Progress is not only driven by research labs like Alphabet’s DeepMind, but also by game companies. Recently, EA’s SEED division began to train more general-purpose AIs to play its Battlefield 1 game. After all, AI soldiers don’t need to be trained on the ground: Elon Musk’s OpenAI has published research on “transfer learning,” which allows AIs to be trained in a simulation and then adapted to the real world.
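
To make the idea concrete, here is a minimal sketch of what transfer learning looks like in code, assuming a generic PyTorch setup rather than any system described in this article: a small network is pretrained on plentiful simulated data and then fine-tuned on a small amount of “real-world” data. The network shape, the random data, and the hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch of sim-to-real transfer learning (assumed, generic setup):
# pretrain on cheap simulated data, then fine-tune on scarce "real" data.
# Requires PyTorch.
import torch
import torch.nn as nn

def make_policy():
    # Tiny network mapping 8 sensor readings to 4 action scores.
    return nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))

def train(model, inputs, targets, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return loss.item()

policy = make_policy()

# Phase 1: large amounts of simulated experience (randomly generated here).
sim_x = torch.randn(10_000, 8)
sim_y = torch.randint(0, 4, (10_000,))
train(policy, sim_x, sim_y, epochs=50, lr=1e-3)

# Phase 2: adapt to the "real world" with far fewer samples and a lower
# learning rate, reusing everything the network learned in simulation.
real_x = torch.randn(100, 8)
real_y = torch.randint(0, 4, (100,))
train(policy, real_x, real_y, epochs=10, lr=1e-4)
```

The point of the sketch is only that the expensive learning happens in simulation; adapting to reality reuses that work with far less data.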


This makes effective inspections impossible. Most of the technologies and research needed for autonomous weapons are not specific to them. In addition, it’s much harder to spot an AI for autonomous weapons than it is to spot the creation of a nuclear weapon. AIs can be trained in any data center: They are only code and data, after all, and code and data can now easily be moved and hidden without leaving a trace. Most of their training can happen in simulations on any server in the cloud, and to outside inspectors, running such a simulation would look no different from predicting tomorrow’s weather or training an AI to play the latest Call of Duty.

Without these two properties, a treaty has no teeth and no eyes. Signatories will continue to research general-purpose technologies in the open and integrate them into autonomous weapons in secret, with little chance of detection. They will know that others are likely doing the same, and that abstaining is not an option.

So, what can we do?

We cannot shirk our responsibilities. Autonomous weapons are inevitable. Both offensive and defensive uses of autonomous weapons need to be researched, and we have to build up deterrent capabilities. However, even if we cannot avoid autonomous weapons, we can prevent them from being used against civilians and constrain their use in policing.

There can be no happy ending, only one we can live with.

___________________________________________________

Andreas Kirsch – Fellow at Newspeak House, and former Google and DeepMind software and research engineer.

 

Go to Original – qz.com



