{"id":202968,"date":"2022-01-17T12:00:26","date_gmt":"2022-01-17T12:00:26","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=202968"},"modified":"2022-01-10T09:09:09","modified_gmt":"2022-01-10T09:09:09","slug":"lethal-autonomous-weapons-systems-and-the-fight-to-contain-them","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2022\/01\/lethal-autonomous-weapons-systems-and-the-fight-to-contain-them\/","title":{"rendered":"Lethal Autonomous Weapons Systems and the Fight to Contain Them"},"content":{"rendered":"<p class=\"main-article__title article-title\" style=\"text-align: center;\"><strong><em>Keep Your Laws Off My Planet<\/em><\/strong><\/p>\n<div id=\"attachment_202969\" style=\"width: 550px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2022\/01\/ban_killer_robots-06112021-1.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-202969\" class=\"size-full wp-image-202969\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2022\/01\/ban_killer_robots-06112021-1.jpg\" alt=\"\" width=\"540\" height=\"360\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2022\/01\/ban_killer_robots-06112021-1.jpg 540w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2022\/01\/ban_killer_robots-06112021-1-300x200.jpg 300w\" sizes=\"auto, (max-width: 540px) 100vw, 540px\" \/><\/a><p id=\"caption-attachment-202969\" class=\"wp-caption-text\">World Politics Review<\/p><\/div>\n<p><em>9 Jan 2022 &#8211; <\/em>Here\u2019s a <a href=\"https:\/\/www.bbc.co.uk\/programmes\/m00127t9\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">scenario to consider<\/a>: a military force has purchased a million cheap, disposable flying drones each the size of a deck of cards, each capable of carrying three grams of explosives \u2014 enough to kill a single person or, in a \u201c<a 
href=\"https:\/\/en.wikipedia.org\/wiki\/Shaped_charge\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">shaped charge<\/a>,\u201d pierce a steel wall. They\u2019ve been programmed to seek out and \u201cengage\u201d (kill) certain human beings, based on specific \u201csignature\u201d characteristics like carrying a weapon, say, or having a particular skin color. They fit in a single shipping container and can be deployed remotely. Once launched, they will fly and kill autonomously without any further human action.<\/p>\n<p>Science fiction? Not really. It could happen tomorrow. The technology already exists.<\/p>\n<p id=\"more\">In fact, lethal autonomous weapons systems (LAWS) have a long history. During the spring of 1972, I spent a few days <a href=\"https:\/\/www.nytimes.com\/1972\/04\/29\/archives\/lab-occupation-ends-lab-at-columbia-opened-after-debate.html?searchResultPosition=2\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">occupying the physics building<\/a> at Columbia University in New York City. With a hundred other students, I slept on the floor, ate donated takeout food, and listened to Allen Ginsberg when he showed up to honor us with some of his extemporaneous poetry. I wrote leaflets then, commandeering a Xerox machine to print them out.<\/p>\n<p>And why, of all campus buildings, did we choose the one housing the Physics department? The answer: to convince five Columbia faculty physicists to sever their connections with the Pentagon\u2019s Jason Defense Advisory Group, a program offering money and lab space to support basic scientific research that might prove useful for U.S. war-making efforts. Our specific objection: the involvement of Jason\u2019s scientists in designing parts of what was then known as the \u201cautomated battlefield\u201d for deployment in Vietnam. 
That system would indeed prove a forerunner of the lethal autonomous weapons systems that are poised to become a potentially significant part of this country\u2019s \u2014 and the world\u2019s \u2014 armory.<\/p>\n<p><strong>Early (Semi-)Autonomous Weapons<\/strong><\/p>\n<p>Washington faced quite a few strategic problems in prosecuting its war in Indochina, including the general corruption and unpopularity of the South Vietnamese regime it was propping up. Its biggest military challenge, however, was probably North Vietnam\u2019s continual infiltration of personnel and supplies on what was called the Ho Chi Minh Trail, which ran from north to south <a href=\"https:\/\/www.nationsonline.org\/maps\/Southeast-Asia-Map.jpg\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">along the Cambodian and Laotian borders<\/a>. The Trail was, in fact, a network of easily repaired dirt roads and footpaths, streams and rivers, lying under a thick jungle canopy that made it almost impossible to detect movement from the air.<\/p>\n<p>The U.S. response, developed by Jason in 1966 and deployed the following year, was an attempt to interdict that infiltration by <a href=\"https:\/\/thefutureofthings.com\/3902-igloo-white-the-automated-battlefield\/\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">creating<\/a> an automated battlefield composed of four parts, analogous to a human body\u2019s eyes, nerves, brain, and limbs. The eyes were a broad variety of sensors \u2014 acoustic, seismic, even chemical (for sensing human urine) \u2014 most dropped by air into the jungle. The nerve equivalents transmitted signals to the \u201cbrain.\u201d However, since the sensors had a maximum transmission range of only about 20 miles, the U.S. military had to constantly fly aircraft above the foliage to catch any signal that might be tripped by passing North Vietnamese troops or transports. 
The planes would then relay the news to the brain. (Originally intended to be remote controlled, those aircraft performed so poorly that human pilots were usually necessary.)<\/p>\n<p>And that brain, a magnificent military installation secretly built in Thailand\u2019s Nakhon Phanom, housed two state-of-the-art IBM mainframe computers. A small army of programmers wrote and rewrote the code to keep them ticking, as they attempted to make sense of the stream of data transmitted by those planes. The target coordinates they came up with were then transmitted to attack aircraft, which were the limb equivalents. The group running that automated battlefield was designated Task Force Alpha and the whole project went under the code name Igloo White.<\/p>\n<p>As it turned out, Igloo White was largely <a href=\"https:\/\/thefutureofthings.com\/3902-igloo-white-the-automated-battlefield\/\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">an expensive failure<\/a>, costing about a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Operation_Igloo_White\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">billion dollars<\/a> a year for five years (<a href=\"https:\/\/www.in2013dollars.com\/us\/inflation\/1968?amount=1000000\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">almost $40 billion<\/a> total in today\u2019s dollars). The time lag between a sensor tripping and munitions dropping made the system ineffective. As a result, at times Task Force Alpha simply carpet-bombed areas where a single sensor might have gone off. 
The North Vietnamese quickly realized how those sensors worked and developed methods of fooling them, from playing truck-ignition recordings to planting buckets of urine.<\/p>\n<p>Given the history of semi-automated weapons systems like drones and \u201csmart bombs\u201d in the intervening years, you probably won\u2019t be surprised to learn that this first automated battlefield couldn\u2019t discriminate between soldiers and civilians. In this, it merely continued a trend that\u2019s existed since at least the eighteenth century in which wars routinely kill more civilians than combatants.<\/p>\n<p>None of these shortcomings kept Defense Department officials from regarding the automated battlefield with awe. Andrew Cockburn <a href=\"https:\/\/www.businessinsider.com\/long-before-drones-the-us-tried-to-automize-warfare-during-the-vietnam-war-2015-3\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">described<\/a> this worshipful posture in his book <a href=\"https:\/\/www.amazon.com\/dp\/1784782696\/ref=nosim\/?tag=tomdispatch-20\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\"><em>Kill Chain: The Rise of the High-Tech Assassins<\/em><\/a>, quoting Leonard Sullivan, a high-ranking Pentagon official who visited Vietnam in 1968: \u201cJust as it is almost impossible to be an agnostic in the Cathedral of Notre Dame, so it is difficult to keep from being swept up in the beauty and majesty of the Task Force Alpha temple.\u201d<\/p>\n<p>Who or what, you well might wonder, was to be worshipped in such a temple?<\/p>\n<p>Most aspects of that Vietnam-era \u201cautomated\u201d battlefield actually required human intervention. Human beings were planting the sensors, programming the computers, piloting the airplanes, and releasing the bombs. In what sense, then, was that battlefield \u201cautomated\u201d? 
As a harbinger of what was to come, the system had eliminated human intervention at a single crucial point in the process: the decision to kill. On that automated battlefield, the computers decided where and when to drop the bombs.<\/p>\n<p>In 1969, Army Chief of Staff William Westmoreland <a href=\"https:\/\/www.businessinsider.com\/long-before-drones-the-us-tried-to-automize-warfare-during-the-vietnam-war-2015-3\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">expressed his enthusiasm<\/a> for this removal of the messy human element from war-making. Addressing a luncheon for the Association of the U.S. Army, a lobbying group, he declared:<\/p>\n<blockquote class=\"wp-block-quote\"><p>\u201cOn the battlefield of the future enemy forces will be located, tracked, and targeted almost instantaneously through the use of data links, computer-assisted intelligence evaluation, and automated fire control. With first round kill probabilities approaching certainty, and with surveillance devices that can continually track the enemy, the need for large forces to fix the opposition will be less important.\u201d<\/p><\/blockquote>\n<p>What Westmoreland meant by \u201cfix the opposition\u201d was kill the enemy. Another military euphemism in the twenty-first century is \u201cengage.\u201d In either case, the meaning is the same: the role of lethal autonomous weapons systems is to automatically find and kill human beings, without human intervention.<\/p>\n<p><strong>New LAWS for a New Age \u2014 Lethal Autonomous Weapons Systems<\/strong><\/p>\n<p>Every autumn, the British Broadcasting Corporation sponsors a series of four lectures given by an expert in some important field of study. 
In 2021, the BBC invited Stuart Russell, professor of computer science and founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, to deliver those \u201cReith Lectures.\u201d His general subject was the future of artificial intelligence (AI), and the second lecture was entitled \u201cThe Future Role of AI in Warfare.\u201d In it, he addressed the issue of lethal autonomous weapons systems, or LAWS, which the United Nations defines as \u201cweapons that locate, select, and engage human targets without human supervision.\u201d<\/p>\n<p>Russell\u2019s main point, <a href=\"https:\/\/www.bbc.co.uk\/programmes\/m00127t9\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">eloquently made<\/a>, was that, although many people believe lethal autonomous weapons are a potential future nightmare, residing in the realm of science fiction, \u201cThey are not. You can buy them today. They are advertised on the web.\u201d<\/p>\n<p>I\u2019ve never seen any of the movies in the <em>Terminator<\/em> franchise, but apparently military planners and their PR flacks assume most people derive their understanding of such LAWS from this fictional dystopian world. Pentagon officials are frequently at pains to explain why the weapons they are developing are not, in fact, real-life equivalents of SkyNet \u2014 the worldwide communications network that, in those films, becomes self-conscious and decides to eliminate humankind. Not to worry, as a deputy secretary of defense told Russell, \u201cWe have listened carefully to these arguments and my experts have assured me that there is no risk of accidentally creating SkyNet.\u201d<\/p>\n<p>Russell\u2019s point, however, was that a weapons system doesn\u2019t need self-awareness to act autonomously or to present a threat to innocent human beings. 
What it does need is:<\/p>\n<ul>\n<li>A mobile platform (anything that can move, from a tiny quadcopter to a fixed-wing aircraft)<\/li>\n<li>Sensory capacity (the ability to detect visual or sound information)<\/li>\n<li>The ability to make tactical decisions (the same kind of capacity already found in computer programs that play chess)<\/li>\n<li>The ability to \u201cengage,\u201d i.e. kill (which can be as complicated as firing a missile or dropping a bomb, or as rudimentary as committing robot suicide by slamming into a target and exploding)<\/li>\n<\/ul>\n<p>The reality is that such systems already exist. Indeed, a government-owned weapons company in Turkey recently advertised its Kargu drone \u2014 a quadcopter \u201cthe size of a dinner plate,\u201d as Russell described it, which can carry a kilogram of explosives and is capable of making \u201canti-personnel autonomous hits\u201d with \u201ctargets selected on images and face recognition.\u201d The company\u2019s site has since been <a href=\"https:\/\/www.stm.com.tr\/en\/kargu-autonomous-tactical-multi-rotor-attack-uav\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">altered<\/a> to emphasize its adherence to a supposed \u201cman-in-the-loop\u201d principle. However, the U.N. 
<a href=\"https:\/\/www.npr.org\/2021\/06\/01\/1002196245\/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">has reported<\/a> that a fully autonomous Kargu-2 was, in fact, deployed in Libya in 2020.<\/p>\n<p>You can <a href=\"https:\/\/www.amazon.com\/Best-Sellers-Toys-Games-Hobby-RC-Quadcopters-Multirotors\/zgbs\/toys-and-games\/11608080011\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">buy your own quadcopter<\/a> right now on Amazon, although you\u2019ll still have to apply some DIY computer skills if you want to get it to operate autonomously.<\/p>\n<p>The truth is that lethal autonomous weapons systems are less likely to look like something from the <em>Terminator<\/em> movies than like swarms of tiny killer bots. Computer miniaturization means that the technology already exists to create effective LAWS. If your smartphone could fly, it could be an autonomous weapon. Newer phones use facial recognition software to \u201cdecide\u201d whether to allow access. It\u2019s not a leap to create flying weapons the size of phones, programmed to \u201cdecide\u201d to attack specific individuals, or individuals with specific features. Indeed, it\u2019s likely such weapons already exist.<\/p>\n<p><strong>Can We Outlaw LAWS?<\/strong><\/p>\n<p>So, what\u2019s wrong with LAWS, and is there any point in trying to outlaw them? Some opponents argue that the problem is they eliminate human responsibility for making lethal decisions. Such critics suggest that, unlike a human being aiming and pulling the trigger of a rifle, a LAWS can choose and fire at its own targets. 
Therein, they argue, lies the special danger of these systems, which will inevitably make mistakes, as anyone whose iPhone has refused to recognize his or her face will acknowledge.<\/p>\n<p>In my view, the issue isn\u2019t that autonomous systems remove human beings from lethal decisions. To the extent that weapons of this sort make mistakes, human beings will still bear moral responsibility for deploying such imperfect lethal systems. LAWS are designed and deployed by human beings, who therefore remain responsible for their effects. Like the semi-autonomous drones of the present moment (often piloted from half a world away), lethal autonomous weapons systems don\u2019t remove human moral responsibility. They just increase the distance between killer and target.<\/p>\n<p>Furthermore, like already outlawed arms, including chemical and biological weapons, these systems have the capacity to kill indiscriminately. While they may not obviate human responsibility, once activated, they will certainly elude human control, just like poison gas or a weaponized virus.<\/p>\n<p>And as with chemical, biological, and nuclear weapons, their use could effectively be prevented by international law and treaties. True, rogue actors, like the <a href=\"https:\/\/www.npr.org\/2019\/02\/17\/695545252\/more-than-300-chemical-attacks-launched-during-syrian-civil-war-study-says\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">Assad regime in Syria<\/a> or the U.S. 
military <a href=\"https:\/\/www.theguardian.com\/politics\/2005\/nov\/15\/usa.iraq\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">in the Iraqi city of Fallujah<\/a>, may occasionally violate such strictures, but for the most part, prohibitions on the use of certain kinds of potentially devastating weaponry have held, in some cases for over a century.<\/p>\n<p>Some American defense experts argue that, since adversaries will inevitably develop LAWS, common sense requires this country to do the same, implying that the best defense against a given weapons system is an identical one. That makes as much sense as fighting fire with fire when, in most cases, using water is much the better option.<\/p>\n<p><strong>The Convention on Certain Conventional Weapons<\/strong><\/p>\n<p>The area of international law that governs the treatment of human beings in war is, <a href=\"https:\/\/www.icrc.org\/en\/war-and-law\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">for historical reasons<\/a>, called international humanitarian law (IHL). In 1995, the United States ratified an addition to IHL: the 1980 <a href=\"https:\/\/www.un.org\/disarmament\/the-convention-on-certain-conventional-weapons\/\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">U.N. Convention on Certain Conventional Weapons<\/a>. (Its full title is much longer, but its name is generally abbreviated as CCW.) It governs the use, for example, of incendiary weapons like napalm, as well as biological and chemical agents.<\/p>\n<p>The signatories to CCW meet periodically to discuss what other weaponry might fall under its jurisdiction and prohibitions, including LAWS. The most recent conference took place in December 2021. 
Although <a href=\"https:\/\/indico.un.org\/event\/29588\/page\/0\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">transcripts<\/a> of the proceedings exist, only a draft final document \u2014 produced before the conference opened \u2014 has been issued. This may be because no consensus was even reached on how to define such systems, let alone on whether they should be prohibited. The European Union, the U.N., at least 50 signatory nations, and (according to polls) most of the world\u2019s population believe that autonomous weapons systems should be outlawed. The U.S., Israel, the United Kingdom, and Russia disagree, along with a few other outliers.<\/p>\n<p>Prior to such CCW meetings, a Group of Governmental Experts (GGE) convenes, ostensibly to provide technical guidance for the decisions to be made by the Convention\u2019s \u201chigh contracting parties.\u201d In 2021, the GGE was unable to reach a consensus about whether such weaponry should be outlawed. The United States held that even defining a lethal autonomous weapon was unnecessary (perhaps because if such weapons could be defined, they could be outlawed). The U.S. delegation <a href=\"https:\/\/geneva.usmission.gov\/2021\/08\/04\/u-s-statement-at-the-gge-on-laws-agenda-item-5b\/\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">put it this way<\/a>:<\/p>\n<blockquote class=\"wp-block-quote\"><p>\u201cThe United States has explained our perspective that a working definition should not be drafted with a view toward describing weapons that should be banned. 
This would be \u2014 as some colleagues have already noted \u2014 very difficult to reach consensus on, and counterproductive.\u00a0Because there is nothing intrinsic in autonomous capabilities that would make a weapon prohibited under IHL, we are not convinced that prohibiting weapons based on degrees of autonomy, as our French colleagues have suggested, is a useful approach.\u201d<\/p><\/blockquote>\n<p>The U.S. delegation was similarly keen to eliminate any language that might require \u201chuman control\u201d of such weapons systems:<\/p>\n<blockquote class=\"wp-block-quote\"><p>\u201c[In] our view IHL does not establish a requirement for \u2018human control\u2019 as such\u2026 Introducing new and vague requirements like that of human control could, we believe, confuse, rather than clarify, especially if these proposals are inconsistent with long-standing, accepted practice in using many common weapons systems with autonomous functions.\u201d<\/p><\/blockquote>\n<p>In the same meeting, that delegation repeatedly insisted that lethal autonomous weapons would actually be good for us, because they would surely prove better than human beings at distinguishing between civilians and combatants.<\/p>\n<p>Oh, and if you believe that protecting civilians is the reason the arms industry is investing billions of dollars in developing autonomous weapons, I\u2019ve got a patch of land to sell you on Mars that\u2019s going cheap.<\/p>\n<p><strong>The Campaign to Stop Killer Robots<\/strong><\/p>\n<p>The Group of Governmental Experts also has about 35 non-state members, including non-governmental organizations and universities. The <a href=\"https:\/\/www.stopkillerrobots.org\/\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">Campaign to Stop Killer Robots<\/a>, a coalition of 180 organizations, among them Amnesty International, Human Rights Watch, and the World Council of Churches, is one of these. 
Launched in 2013, this vibrant group provides important commentary on the technical, legal, and ethical issues presented by LAWS and offers other organizations and individuals a way to become involved in the fight to outlaw such potentially devastating weapons systems.<\/p>\n<p>The continued construction and deployment of killer robots is not inevitable. Indeed, a majority of the world would like to see them prohibited, including U.N. Secretary General Antonio Guterres. Let\u2019s give him the <a href=\"https:\/\/foreignpolicy.com\/2020\/10\/14\/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer\/\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">last word<\/a>:<\/p>\n<blockquote><p><em>\u201cMachines with the power and discretion to take human lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.\u201d<\/em><\/p><\/blockquote>\n<p>I couldn\u2019t agree more.<\/p>\n<p>______________________________________________<\/p>\n<p class=\"is-style-copyright\"><em>Copyright 2022 Rebecca Gordon<\/em><\/p>\n<p style=\"padding-left: 40px;\"><em>Rebecca Gordon teaches at the University of San Francisco. She is the author of <\/em><a href=\"https:\/\/www.amazon.com\/dp\/0199336431\/ref=nosim\/?tag=tomdispatch-20\"  target=\"_blank\" rel=\"nofollow external noopener noreferrer\" data-wpel-link=\"external\">Mainstreaming Torture<\/a>, <a href=\"https:\/\/www.amazon.com\/dp\/1510703330\/ref=nosim\/?tag=tomdispatch-20\"  target=\"_blank\" rel=\"noopener nofollow external noreferrer\" data-wpel-link=\"external\">American Nuremberg: The U.S. 
Officials Who Should Stand Trial for Post-9\/11 War Crimes<\/a> <em>and is now at work on a new book on the history of torture in the United States.<\/em><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/tomdispatch.com\/keep-your-laws-off-my-planet\/?utm_source=TomDispatch&amp;utm_campaign=bf9ce3e70c-EMAIL_CAMPAIGN_2021_07_13_02_04_COPY_01&amp;utm_medium=email&amp;utm_term=0_1e41682ade-bf9ce3e70c-308810425#more\" >Go to Original &#8211; tomdispatch.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>9 Jan 2022 &#8211; The continued construction and deployment of killer robots is not inevitable. Let\u2019s give U.N. Secretary General Antonio Guterres the last word: \u201cMachines with the power and discretion to take human lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.\u201d I couldn\u2019t agree more.<\/p>\n","protected":false},"author":4,"featured_media":202969,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[57],"tags":[867,1161,1188,120,1126,260,487,1050,504,1105,780,769,91,112,109,287,95,70,126,118,492],"class_list":["post-202968","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-militarism","tag-anglo-america","tag-arms-industry","tag-arms-race","tag-conflict","tag-hegemony","tag-history","tag-human-rights","tag-imperialism","tag-international-relations","tag-military-industrial-complex","tag-military-intervention","tag-military-supremacy","tag-nato","tag-pentagon","tag-politics","tag-power","tag-us-military","tag-usa","tag-violence","tag-war","tag-war-on-terror"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/202968","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/p
ost"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=202968"}],"version-history":[{"count":0,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/202968\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/202969"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=202968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=202968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=202968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}