{"id":164073,"date":"2020-07-06T12:00:55","date_gmt":"2020-07-06T11:00:55","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=164073"},"modified":"2023-06-20T05:56:01","modified_gmt":"2023-06-20T04:56:01","slug":"recent-advances-in-artificial-intelligence-contribute-to-nuclear-risk-sipri","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2020\/07\/recent-advances-in-artificial-intelligence-contribute-to-nuclear-risk-sipri\/","title":{"rendered":"Recent Advances in Artificial Intelligence Contribute to Nuclear Risk \u2013 SIPRI"},"content":{"rendered":"<p><em>2 Jul 2020 &#8211; <\/em><time datetime=\"2020-06-22T12:00:00Z\">Recent advances in artificial intelligence (AI) contribute to nuclear risk, according to a new report from the Stockholm International Peace Research Institute (SIPRI). The authors warn that nuclear-armed states\u2019 competition in military AI and premature adoption of AI in nuclear weapons and related capabilities could have a negative impact on strategic stability and increase the likelihood of nuclear weapon use. 
The report proposes AI-specific nuclear risk reduction measures and is available now.<br \/>\n<\/time><\/p>\n<div class=\"field-image field--label-hidden\">\n<div class=\"field-items\">\n<div class=\"field-item\">\n<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\" aligncenter\" title=\"Michael Dziedzic\/Unsplash\" src=\"https:\/\/www.sipri.org\/sites\/default\/files\/styles\/node\/public\/2020-06\/michael-dziedzic-aqyguywncsm-unsplash_1.jpg?itok=0gLycKtX\" alt=\"michael-dziedzic-aqyguywncsm-unsplash_1\" width=\"349\" height=\"197\" \/><em><strong>Michael Dziedzic\/Unsplash<\/strong><\/em><\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"body field--label-hidden\">\n<div class=\"field-items\">\n<div class=\"field-item\">\n<h3><em><b>Recent advances in artificial intelligence will impact nuclear weapons and related capabilities\u00a0<\/b><\/em><\/h3>\n<p>The <a target=\"_blank\" href=\"https:\/\/bit.ly\/2zIMxZj\" >report<\/a> indicates that recent advances in AI, specifically machine learning and autonomy, could unlock new and varied possibilities in a wide array of nuclear weapons-related capabilities, ranging from early warning to command and control and weapon delivery.<\/p>\n<p>Machine learning and autonomy are not new, but recent developments in these fields have enabled the development of automated systems that can solve complex problems or tasks that had previously only yielded to human cognition or required human intervention.<\/p>\n<p>\u2018The key question is not if, but when, how and by whom recent advances in AI will be adopted for nuclear-related purposes,\u2019 says Dr Vincent Boulanin, Senior Researcher, SIPRI and lead author of the report. \u2018However, at this stage the answers to these questions can only be speculative. 
Nuclear-armed states have not been transparent about the current and future role of AI in their nuclear forces.\u2019<\/p>\n<p>Nonetheless, research shows that all nuclear-armed states have made the military pursuit of AI a priority, with many determined to be world leaders in the field. The report warns that this could negatively impact strategic relations, even before nuclear weapon\u2013related applications are developed or deployed.<\/p>\n<h3><em><b>Premature adoption of military artificial intelligence could increase nuclear risk<\/b><\/em><\/h3>\n<p>The authors argue that it would be imprudent for nuclear-armed states to rush their adoption of AI technology for military purposes in general and nuclear-related purposes in particular. Premature adoption of AI could increase the risk that nuclear weapons and related capabilities fail or are misused in ways that trigger the accidental or inadvertent escalation of a crisis or conflict into a nuclear conflict.<\/p>\n<p>\u2018However, it is unlikely that AI technologies\u2014which are enablers\u2014will be the trigger for nuclear weapon use,\u2019 says Dr Lora Saalman, Associate Senior Fellow on Armament and Disarmament, SIPRI. \u2018Regional trends, geopolitical tensions and misinterpreted signalling must also be factored into understanding how AI technologies may contribute to escalation of a crisis to the nuclear level.\u2019<\/p>\n<p>The report recommends transparency and confidence-building measures on national AI developments to help mitigate such risks.<\/p>\n<h3><em><b>Challenges of artificial intelligence must be addressed in future nuclear risk reduction efforts<\/b><\/em><\/h3>\n<p>According to the report\u2019s authors, the challenges of AI in the nuclear arena must be made a priority in future nuclear risk reduction discussions.<\/p>\n<p>\u2018It is important that we do not overestimate the danger that AI poses to strategic stability and nuclear risk. 
However, we also must not underestimate the risk of doing nothing,\u2019 says Dr Petr Topychkanov, Senior Researcher, Nuclear Disarmament, Arms Control and Non-proliferation Programme, SIPRI.<\/p>\n<p>\u2018While the conversation on AI-related risks is still new and speculative, it is not too early for nuclear-armed states and the international security community to explore solutions to mitigate the risks that applying AI to nuclear weapon systems would pose to peace and stability,\u2019 says Topychkanov.<\/p>\n<p>The report proposes a number of measures for nuclear-armed states, such as collaborating on resolving fundamental AI safety and security problems, jointly exploring the use of AI for arms control and agreeing on concrete limits to the use of AI in nuclear forces.<\/p>\n<p><em>______________________________________<br \/>\n<\/em><\/p>\n<p style=\"padding-left: 40px;\"><em>SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public. 
Based in Stockholm, SIPRI\u00a0is regularly ranked among the most respected think tanks worldwide.<\/em><\/p>\n<p><em>Source: <\/em><strong><time datetime=\"2020-06-22T12:00:00Z\"><em><a target=\"_blank\" href=\"https:\/\/www.sipri.org\/media\/2020\/recent-advances-artificial-intelligence-contribute-nuclear-risk-new-sipri-report\" >Stockholm International Peace Research Institute<\/a> (<a target=\"_blank\" href=\"https:\/\/www.sipri.org\/media\/2020\/recent-advances-artificial-intelligence-contribute-nuclear-risk-new-sipri-report\" >SIPRI<\/a>)<br \/>\n<\/em><\/time><\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<div><\/div>\n<div><a target=\"_blank\" href=\"https:\/\/human-wrongs-watch.net\/2020\/07\/02\/recent-advances-in-artificial-intelligence-contribute-to-nuclear-risk-sipri\/\" >Go to Original &#8211; human-wrongs-watch.net<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>2 Jul 2020 &#8211; The authors warn that nuclear-armed states\u2019 competition in military AI and premature adoption of AI in nuclear weapons and related capabilities could have a negative impact on strategic stability and increase the likelihood of nuclear weapon 
use.<\/p>\n","protected":false},"author":4,"featured_media":105237,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3078],"tags":[1733,291,1476,1566,429,1323,1361,1301,450,70,875],"class_list":["post-164073","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-artificial-intelligence-ai","tag-military","tag-nuclear-abolition","tag-nuclear-arms-in-space","tag-nuclear-ban-treaty","tag-nuclear-club","tag-nuclear-disaster","tag-nuclear-war","tag-nuclear-weapons","tag-usa","tag-wmd"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/164073","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=164073"}],"version-history":[{"count":1,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/164073\/revisions"}],"predecessor-version":[{"id":237706,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/164073\/revisions\/237706"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/105237"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=164073"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=164073"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=164073"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}