Recent Advances in Artificial Intelligence Contribute to Nuclear Risk – SIPRI

ARTIFICIAL INTELLIGENCE-AI, 6 Jul 2020

Human Wrongs Watch | SIPRI - TRANSCEND Media Service

2 Jul 2020 –

Photo: Michael Dziedzic/Unsplash

Recent advances in artificial intelligence will impact nuclear weapons and related capabilities 

A new SIPRI report indicates that recent advances in AI, specifically machine learning and autonomy, could unlock new and varied possibilities in a wide array of nuclear weapons-related capabilities, ranging from early warning to command and control and weapon delivery.

Machine learning and autonomy are not new, but recent developments in these fields have enabled the development of automated systems that can solve complex problems or tasks that had previously only yielded to human cognition or required human intervention.

‘The key question is not if, but when, how and by whom recent advances in AI will be adopted for nuclear-related purposes,’ says Dr Vincent Boulanin, Senior Researcher, SIPRI and lead author of the report. ‘However, at this stage the answers to these questions can only be speculative. Nuclear-armed states have not been transparent about the current and future role of AI in their nuclear forces.’

Nonetheless, research shows that all nuclear-armed states have made the military pursuit of AI a priority, with many determined to be world leaders in the field. The report warns that this could negatively impact strategic relations, even before nuclear weapon–related applications are developed or deployed.

Premature adoption of military artificial intelligence could increase nuclear risk  

The authors argue that it would be imprudent for nuclear-armed states to rush their adoption of AI technology for military purposes in general and nuclear-related purposes in particular. Premature adoption of AI could increase the risk that nuclear weapons and related capabilities fail or are misused in ways that trigger accidental or inadvertent escalation of a crisis or conflict to the nuclear level.

‘However, it is unlikely that AI technologies—which are enablers—will be the trigger for nuclear weapon use,’ says Dr Lora Saalman, Associate Senior Fellow on Armament and Disarmament, SIPRI. ‘Regional trends, geopolitical tensions and misinterpreted signalling must also be factored into understanding how AI technologies may contribute to escalation of a crisis to the nuclear level.’

The report recommends transparency and confidence-building measures on national AI developments to help mitigate such risks.

Challenges of artificial intelligence must be addressed in future nuclear risk reduction efforts

According to the report’s authors, the challenges of AI in the nuclear arena must be made a priority in future nuclear risk reduction discussions.

‘It is important that we do not overestimate the danger that AI poses to strategic stability and nuclear risk. However, we also must not underestimate the risk of doing nothing,’ says Dr Petr Topychkanov, Senior Researcher, Nuclear Disarmament, Arms Control and Non-proliferation Programme, SIPRI.

‘While the conversation on AI-related risks is still new and speculative, it is not too early for nuclear-armed states and the international security community to explore solutions to mitigate the risks that applying AI to nuclear weapon systems would pose to peace and stability,’ says Topychkanov.

The report proposes a number of measures for nuclear-armed states, such as collaborating on resolving fundamental AI safety and security problems, jointly exploring the use of AI for arms control and agreeing on concrete limits to the use of AI in nuclear forces.

______________________________________

SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public. Based in Stockholm, SIPRI is regularly ranked among the most respected think tanks worldwide.

Source: Go to Original – human-wrongs-watch.net
