{"id":150351,"date":"2019-12-23T12:00:54","date_gmt":"2019-12-23T12:00:54","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=150351"},"modified":"2023-06-20T05:56:53","modified_gmt":"2023-06-20T04:56:53","slug":"the-invention-of-ethical-artificial-intelligence-how-big-tech-manipulates-academia-to-avoid-regulation","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2019\/12\/the-invention-of-ethical-artificial-intelligence-how-big-tech-manipulates-academia-to-avoid-regulation\/","title":{"rendered":"The Invention of \u201cEthical Artificial Intelligence\u201d: How Big Tech Manipulates Academia to Avoid Regulation"},"content":{"rendered":"<div id=\"attachment_150352\" style=\"width: 610px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-150352\" class=\"wp-image-150352\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science-1024x512.jpg\" alt=\"\" width=\"600\" height=\"300\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science-1024x512.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science-300x150.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science-768x384.jpg 768w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science.jpg 1440w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><p id=\"caption-attachment-150352\" class=\"wp-caption-text\">Illustration: Yoshi Sodeoka for The Intercept<\/p><\/div>\n<p><em>20 Dec 2019 &#8211; <\/em>The irony of the ethical scandal enveloping Joichi Ito, the former director of the MIT Media Lab, is that he 
used to lead academic initiatives on ethics. After the revelation of his financial ties to Jeffrey Epstein, the financier charged with sex trafficking underage girls as young as 14, Ito resigned from multiple roles at MIT, a visiting professorship at Harvard Law School, and the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company.<\/p>\n<p>Many observers are puzzled by Ito\u2019s influential role as an ethicist of artificial intelligence. Indeed, his initiatives were crucial in establishing the discourse of \u201cethical AI\u201d that is now ubiquitous in academia and in the mainstream press. In 2016, then-President Barack Obama <a target=\"_blank\" href=\"https:\/\/www.wired.com\/2016\/10\/president-obama-mit-joi-ito-interview\/\" >described him<\/a> as an \u201c<a target=\"_blank\" href=\"http:\/\/news.mit.edu\/2016\/president-obama-discusses-artificial-intelligence-media-lab-joi-ito-1014\" >expert<\/a>\u201d on AI and ethics. Beginning in 2017, Ito financed many projects through the $27 million <a target=\"_blank\" href=\"https:\/\/knightfoundation.org\/aifund-faq\/\" >Ethics and Governance of AI Fund<\/a>, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University. What was all the talk of \u201cethics\u201d really about?<\/p>\n<p>For 14 months, I worked as a graduate student researcher in Ito\u2019s group on AI ethics at the Media Lab. I stopped on August 15, immediately after Ito <a target=\"_blank\" href=\"https:\/\/www.media.mit.edu\/posts\/my-apology-regarding-jeffrey-epstein\/\" >published<\/a> his initial \u201capology\u201d regarding his ties to Epstein, in which he acknowledged accepting money from the financier both for the Media Lab and for Ito\u2019s outside venture funds.
Ito did not disclose that Epstein had, at the time this money changed hands, already pleaded guilty to a child prostitution charge in Florida, or that Ito took numerous steps to hide Epstein\u2019s name from official records, as The New Yorker <a target=\"_blank\" href=\"https:\/\/www.newyorker.com\/news\/news-desk\/how-an-elite-university-research-center-concealed-its-relationship-with-jeffrey-epstein\" >later revealed<\/a>.<\/p>\n<blockquote><p><strong><em>The discourse of \u201cethical AI\u201d was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.<\/em><\/strong><\/p><\/blockquote>\n<p>Inspired by whistleblower Signe Swenson and others who have spoken out, I have decided to report what I came to learn regarding Ito\u2019s role in shaping the field of AI ethics, since this is a matter of public concern. The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics. A former Media Lab colleague recalls that Marvin Minsky, the deceased AI pioneer at MIT, used to say that \u201can ethicist is someone who has a problem with whatever you have in your mind.\u201d (In recently unsealed court filings, victim Virginia Roberts Giuffre testified that Epstein directed her to have sex with Minsky.) Why, then, did AI researchers suddenly start talking about ethics?<\/p>\n<p>At the Media Lab, I learned that the discourse of \u201cethical AI,\u201d championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies. A key group behind this effort, with the lab as a member, made policy recommendations in California that contradicted the conclusions of research I conducted with several lab colleagues, research that led us to oppose the use of computer algorithms in deciding whether to jail people pending trial. 
Ito himself would eventually complain, in private meetings with financial and tech executives, that the group\u2019s recommendations amounted to \u201cwhitewashing\u201d a thorny ethical issue. \u201cThey water down stuff we try to say to prevent the use of algorithms that don\u2019t seem to work well\u201d in detention decisions, he confided to one billionaire.<\/p>\n<p>I also watched MIT help the U.S. military brush aside the moral complexities of drone warfare, hosting a superficial talk on AI and ethics by Henry Kissinger, the former secretary of state and notorious war criminal, and giving input on the U.S. Department of Defense\u2019s \u201cAI Ethics Principles\u201d for warfare, which embraced \u201cpermissibly biased\u201d algorithms and which avoided using the word \u201cfairness\u201d because the Pentagon believes \u201cthat fights should not be fair.\u201d<\/p>\n<p>Ito did not respond to requests for comment.<\/p>\n<div id=\"attachment_150353\" style=\"width: 460px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/joichi-ito-imt-ai-tech.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-150353\" class=\"wp-image-150353\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/joichi-ito-imt-ai-tech.jpg\" alt=\"\" width=\"450\" height=\"301\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/joichi-ito-imt-ai-tech.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/joichi-ito-imt-ai-tech-300x200.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/joichi-ito-imt-ai-tech-768x513.jpg 768w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/a><p id=\"caption-attachment-150353\" class=\"wp-caption-text\">Joichi Ito, then-director of MIT Media Lab, speaks during a press conference in Tokyo on July 8, 2016.<br \/>Photo: Akio Kon\/Bloomberg\/Getty Images<\/p><\/div>\n<p>MIT lent 
credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation. In 2018 alone, there were several controversies: Facebook\u2019s exposure of private data on more than 50 million users to a political marketing firm hired by Donald Trump\u2019s presidential campaign, revealed in March; Google\u2019s contract with the Pentagon for computer vision software to be used in combat zones, revealed that same month; Amazon\u2019s sale of facial recognition technology to police departments, revealed in May; Microsoft\u2019s contract with the U.S. Immigration and Customs Enforcement, revealed in June; and IBM\u2019s secret collaboration with the New York Police Department for facial recognition and racial classification in video surveillance footage, revealed in September. Under the slogan #TechWontBuildIt, thousands of workers at these firms have organized protests and circulated petitions against such contracts. From #NoTechForICE to #Data4BlackLives, several grassroots campaigns have demanded legal restrictions of some uses of computational technologies (e.g., forbidding the use of facial recognition by police).<\/p>\n<p>Meanwhile, corporations have tried to shift the discussion to focus on voluntary \u201cethical principles,\u201d \u201cresponsible practices,\u201d and technical adjustments or \u201csafeguards\u201d framed in terms of \u201cbias\u201d and \u201cfairness\u201d (e.g., requiring or encouraging police to adopt \u201cunbiased\u201d or \u201cfair\u201d facial recognition). 
In January 2018, Microsoft published its \u201cethical principles\u201d for AI, starting with \u201cfairness.\u201d In May, Facebook announced its \u201ccommitment to the ethical development and deployment of AI\u201d and a tool to \u201csearch for bias\u201d called \u201cFairness Flow.\u201d In June, Google published its \u201cresponsible practices\u201d for AI research and development. In September, IBM announced a tool called \u201cAI Fairness 360,\u201d designed to \u201ccheck for unwanted bias in datasets and machine learning models.\u201d In January 2019, Facebook granted $7.5 million for the creation of an AI ethics center in Munich, Germany. In March, Amazon co-sponsored a $20 million program on \u201cfairness in AI\u201d with the U.S. National Science Foundation. In April, Google canceled its AI ethics council after <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2019\/4\/1\/18290341\/google-heritage-foundation-ai-kay-coles-james\" >backlash<\/a> over the selection of Kay Coles James, the vocally anti-trans president of the right-wing Heritage Foundation. These corporate initiatives frequently cited academic research that Ito had supported, at least partially, through the MIT-Harvard fund.<\/p>\n<p>To characterize the corporate agenda, it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving \u201cethical principles\u201d and \u201cresponsible practices\u201d as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of \u201cethical AI\u201d enables precisely this position. Consider the case of facial recognition. 
This year, the municipal legislatures of San Francisco, Oakland, and Berkeley \u2014 all in California \u2014 plus Somerville, Massachusetts, have passed strict bans on facial recognition technology. Meanwhile, Microsoft has lobbied in favor of less restrictive legislation, requiring technical adjustments such as tests for \u201cbias,\u201d most notably in Washington state. Some big firms may even prefer this kind of mild legal regulation over a complete lack thereof, since larger firms can more easily invest in specialized teams to develop systems that comply with regulatory requirements.<\/p>\n<p>Thus, Silicon Valley\u2019s vigorous promotion of \u201cethical AI\u201d has constituted a strategic lobbying effort, one that has enrolled academia to legitimize itself. Ito played a key role in this corporate-academic fraternizing, meeting regularly with tech executives. The MIT-Harvard fund\u2019s initial director was the former \u201cglobal public policy lead\u201d for AI at Google. Through the fund, Ito and his associates sponsored many projects, including the creation of a prominent conference on \u201cFairness, Accountability, and Transparency\u201d in computer science; other sponsors of the conference included Google, Facebook, and Microsoft.<\/p>\n<p>Although the Silicon Valley lobbying effort has consolidated academic interest in \u201cethical AI\u201d and \u201cfair algorithms\u201d since 2016, a handful of papers on these topics had appeared in earlier years, even if framed differently. For example, Microsoft computer scientists published the <a target=\"_blank\" href=\"https:\/\/dl.acm.org\/citation.cfm?id=2090255\" >paper<\/a> that arguably inaugurated the field of \u201calgorithmic fairness\u201d in 2012. In 2016, the paper\u2019s lead author, Cynthia Dwork, became a professor of computer science at Harvard, with simultaneous positions at its law school and at Microsoft. 
When I took her Harvard course on the mathematical foundations of cryptography and statistics in 2017, I interviewed her and asked how she became interested in researching algorithmic definitions of fairness. In her account, she had long been personally concerned with the issue of discriminatory advertising, but Microsoft managers encouraged her to pursue this line of work because the firm was developing a new system of online advertising, and it would be economically advantageous to provide a service \u201cfree of regulatory problems.\u201d (To be fair, I believe that Dwork\u2019s personal intentions were honest despite the corporate capture of her ideas. Microsoft declined to comment for this article.)<\/p>\n<p>After the initial steps by MIT and Harvard, many other universities and new institutes received money from the tech industry to work on AI ethics. Most such organizations are also headed by current or former executives of tech firms. For example, the Data &amp; Society Research Institute is directed by a Microsoft researcher and initially funded by a Microsoft grant; New York University\u2019s AI Now Institute\u00a0was co-founded by another Microsoft researcher and partially funded by Microsoft, Google, and DeepMind; the Stanford Institute for Human-Centered AI is co-directed by a former vice president of Google; University of California, Berkeley\u2019s Division of Data Sciences is headed by a Microsoft veteran; and the MIT Schwarzman College of Computing is headed by a board member of Amazon. 
During my time at the Media Lab, Ito maintained frequent contact with the executives and planners of all these organizations.<\/p>\n<div id=\"attachment_150354\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science2.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-150354\" class=\"wp-image-150354\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science2.jpg\" alt=\"\" width=\"400\" height=\"334\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science2.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science2-300x250.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/artificial-intelligence-tech-media-science2-768x641.jpg 768w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-150354\" class=\"wp-caption-text\">Illustration: Yoshi Sodeoka for The Intercept<\/p><\/div>\n<p>Big tech money and direction proved incompatible with an honest exploration of ethics, at least judging from my experience with the \u201cPartnership on AI to Benefit People and Society,\u201d a group founded by Microsoft, Google\/DeepMind, Facebook, IBM, and Amazon in 2016. PAI, of which the Media Lab is a member, defines itself as a \u201cmultistakeholder body\u201d and claims it is \u201cnot a lobbying organization.\u201d In an April 2018 hearing at the U.S. 
House Committee on Oversight and Government Reform, the Partnership\u2019s executive director claimed that the organization is merely \u201ca <em>resource<\/em> to policymakers \u2014 for instance, in conducting research that informs AI best practices and exploring the societal consequences of certain AI systems, as well as policies around the development and use of AI systems.\u201d<\/p>\n<p>But even if the Partnership\u2019s activities may not meet the legal threshold requiring registration as lobbyists \u2014 for example, by seeking to directly affect the votes of individual elected officials \u2014 the partnership has certainly sought to influence legislation. For example, in November 2018, the Partnership staff asked academic members to contribute to a collective statement to the Judicial Council of California regarding a Senate bill on penal reform (S.B. 10). The bill, in the course of eliminating cash bail, expanded the use of algorithmic risk assessment in pretrial decision making, and required the Judicial Council to \u201caddress the identification and mitigation of any implicit bias in assessment instruments.\u201d The Partnership staff wrote, \u201cwe believe there is room to impact this legislation (and CJS [criminal justice system] applications more broadly).\u201d<\/p>\n<p>In December 2018, three Media Lab colleagues and I raised serious objections to the Partnership\u2019s efforts to influence legislation. We observed that the Partnership\u2019s policy recommendations aligned consistently with the corporate agenda. 
In the penal case, our research led us to strongly oppose the adoption of risk assessment tools, and to reject the proposed technical adjustments that would supposedly render them \u201cunbiased\u201d or \u201cfair.\u201d But the Partnership\u2019s draft statement seemed, as a colleague put it in an internal email to Ito and others, to \u201cvalidate the use of RA [risk assessment] by emphasizing the issue as a technical one that can therefore be solved with better data sets, etc.\u201d A second colleague agreed that the \u201cPAI statement is weak and risks doing exactly what we\u2019ve been warning against re: the risk of legitimation via these industry led regulatory efforts.\u201d A third colleague wrote, \u201cSo far as the criminal justice work is concerned, what PAI is doing in this realm is quite alarming and also in my opinion seriously misguided. I agree with Rodrigo that PAI\u2019s association with ACLU, MIT and other academic \/ non-profit institutions practically ends up serving a legitimating function. Neither ACLU nor MIT nor any non-profit has any power in PAI.\u201d<\/p>\n<p>Worse, there seemed to be a mismatch between the Partnership\u2019s recommendations and the efforts of a grassroots coalition of organizations fighting jail expansion, including the movement Black Lives Matter, the prison abolitionist group Critical Resistance (where I have volunteered), and the undocumented and queer\/trans youth-led Immigrant Youth Coalition. The grassroots coalition argued, \u201cThe notion that any risk assessment instrument can account for bias ignores the racial disparities in current and past policing practices.\u201d There are abundant theoretical and empirical reasons to support this claim, since risk assessments are typically based on data of arrests, convictions, or incarcerations, all of which are poor proxies for individual behaviors or predispositions. 
The coalition continued, \u201cUltimately, risk-assessment tools create a feedback-loop of racial profiling, pre-trial detention and conviction. A person\u2019s freedom should not be reduced to an algorithm.\u201d By contrast, the Partnership\u2019s statement focused on \u201cminimum requirements for responsible deployment,\u201d spanning such topics as \u201cvalidity and data sampling bias, bias in statistical predictions; choice of the appropriate targets for prediction; human-computer interaction questions; user training; policy and governance; transparency and review; reproducibility, process, and recordkeeping; and post-deployment evaluation.\u201d<\/p>\n<p>To be sure, the Partnership staff did respond to criticism of the draft by noting in the final version of the statement that \u201cwithin PAI\u2019s membership and the wider AI community, many experts further suggest that individuals can never justly be detained on the basis of their risk assessment score alone, without an individualized hearing.\u201d This meek concession \u2014 admitting that it might not be time to start imprisoning people based strictly on software, without input from a judge or any other \u201cindividualized\u201d judicial process \u2014 was easier to make because none of the major firms in the Partnership sell risk assessment tools for pretrial decision-making; not only is the technology too controversial but also the market is too small. (Facial recognition technology, on the other hand, has a much larger market in which Microsoft, Google, Facebook, IBM, and Amazon all operate.)<\/p>\n<p>In December 2018, my colleagues and I urged Ito to quit the Partnership. 
I argued, \u201cIf academic and nonprofit organizations want to make a difference, the only viable strategy is to quit PAI, make a public statement, and form a counter alliance.\u201d Then a colleague proposed, \u201cthere are many other organizations which are doing much more substantial and transformative work in this area of predictive analytics in criminal justice \u2014 what would it look like to take the money we currently allocate in supporting PAI in order to support their work?\u201d We believed Ito had enough autonomy to do so because the MIT-Harvard fund was anchored by the Knight Foundation, even though most of the money came from tech investors Pierre Omidyar, founder of eBay, via the Omidyar Network, and Reid Hoffman, co-founder of LinkedIn and Microsoft board member. I wrote, \u201cIf tens of millions of dollars from nonprofit foundations and individual donors are not enough to allow us to take a bold position and join the right side, I don\u2019t know what would be.\u201d (Omidyar funds The Intercept.)<\/p>\n<blockquote><p><strong><em>It is strange that Ito, with no formal training, became positioned as an \u201cexpert\u201d on AI ethics, a field that barely existed before 2017.<\/em><\/strong><\/p><\/blockquote>\n<p>Ito did acknowledge the problem. He had just received a message from David M. Siegel, co-chair of the hedge fund Two Sigma and member of the MIT Corporation. Siegel proposed a self-regulatory structure for \u201csearch and social media\u201d firms in Silicon Valley, modeled after the Financial Industry Regulatory Authority, or FINRA, a private corporation that serves as a self-regulatory organization for securities firms on Wall Street. Ito responded to Siegel\u2019s proposal, \u201cI don\u2019t feel civil society is well represented in the industry groups. 
We\u2019ve been participating in Partnership in AI and they water down stuff we try to say to prevent the use of algorithms that don\u2019t seem to work well like risk scores for pre-trial bail. I think that with personal data and social media, I have concerns with self-regulation. For example, a full blown genocide [of the Rohingya, a mostly Muslim minority group in Myanmar] happened using What\u2019s App and Facebook knew it was happening.\u201d (Facebook has <a target=\"_blank\" href=\"https:\/\/newsroom.fb.com\/news\/2018\/11\/myanmar-hria\/\" >admitted<\/a> that its platform was used to incite violence in Myanmar; news reports have documented how content on the Facebook platform <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2018\/10\/15\/technology\/myanmar-facebook-genocide.html\" >facilitated<\/a> a genocide in the country despite <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/investigates\/special-report\/myanmar-facebook-hate\/\" >repeated warnings<\/a> to Facebook executives from human rights activists and researchers. Facebook texting service WhatsApp made it harder for its users to forward messages after WhatsApp was <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2018\/05\/14\/technology\/whatsapp-india-elections.html\" >reportedly<\/a> used to spread misinformation during elections in India.)<\/p>\n<p>But the corporate-academic alliances were too robust and convenient. The Media Lab remained in the Partnership, and Ito continued to fraternize with Silicon Valley and Wall Street executives and investors. Ito described Siegel, a billionaire, as a \u201cpotential funder.\u201d With such people, I saw Ito routinely express moral concerns about their businesses \u2014 but in a friendly manner, as he was simultaneously asking them for money, whether for MIT or his own venture capital funds. For corporate-academic \u201cethicists,\u201d amicable criticism can serve as leverage for entering into business relationships. 
Siegel replied to Ito, \u201cI would be pleased to speak more on this topic with you. Finra is not an industry group. It\u2019s just paid for by industry. I will explain more when we meet. I agree with your concerns.\u201d<\/p>\n<p>In private meetings, Ito and tech executives discussed the corporate lobby quite frankly. In January, my colleagues and I joined a meeting with Mustafa Suleyman, founding co-chair of the Partnership and co-founder of DeepMind, an AI startup acquired by Google for about $500 million in 2014. In the meeting, Ito and Suleyman discussed how the promotion of \u201cAI ethics\u201d had become a \u201cwhitewashing\u201d effort, although they claimed their initial intentions had been nobler. In a message to plan the meeting, Ito wrote to my colleagues and me, \u201cI do know, however, from speaking to Mustafa when he was setting up PAI that he was meaning for the group to be much more substantive and not just \u2018white washing.\u2019 I think it\u2019s just taking the trajectory that these things take.\u201d (Suleyman did not respond to requests for comment.)<\/p>\n<p>Regardless of individual actors\u2019 intentions, the corporate lobby\u2019s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of \u201cAI ethics.\u201d To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on \u201cethical AI\u201d is aligned with the tech lobby\u2019s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies. How did five corporations, using only a small fraction of their budgets, manage to influence and frame so much academic activity, in so many disciplines, so quickly? It is strange that Ito, with no formal training, became positioned as an \u201cexpert\u201d on AI ethics, a field that barely existed before 2017. 
But it is even stranger that two years later, respected scholars in established disciplines have to demonstrate their relevance to a field conjured by a corporate lobby.<\/p>\n<div id=\"attachment_150355\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/google-dod-tech-ai-mit.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-150355\" class=\"wp-image-150355\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/google-dod-tech-ai-mit.jpg\" alt=\"\" width=\"400\" height=\"262\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/google-dod-tech-ai-mit.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/google-dod-tech-ai-mit-300x197.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2019\/12\/google-dod-tech-ai-mit-768x503.jpg 768w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-150355\" class=\"wp-caption-text\">Former Google CEO Eric Schmidt,\u00a0now chair of the Department of Defense\u2019s Defense Innovation Board, takes his seat for the House Armed Services Committee hearing on \u201cPromoting DOD\u2019s Culture of Innovation\u201d on April 17, 2018.<br \/>Photo: Bill Clark\/CQ Roll Call\/Getty Images<\/p><\/div>\n<p>The field has also become relevant to the U.S. military, not only in official responses to moral concerns about technologies of targeted killing but also in disputes among Silicon Valley firms over lucrative military contracts. On November 1, the Department of Defense\u2019s innovation board published its recommendations for \u201cAI Ethics Principles.\u201d The board is chaired by Eric Schmidt, who was the executive chair of Alphabet, Google\u2019s parent company, when Obama\u2019s defense secretary Ashton B. Carter established the board and appointed him in 2016. 
<a target=\"_blank\" href=\"https:\/\/www.propublica.org\/article\/how-amazon-and-silicon-valley-seduced-the-pentagon\" >According to<\/a> ProPublica, \u201cSchmidt\u2019s influence, already strong under Carter, only grew when [James] Mattis arrived as [Trump\u2019s] defense secretary.\u201d The board includes multiple executives from Google, Microsoft, and Facebook, raising controversies regarding conflicts of interest. A Pentagon employee responsible for policing conflicts of interest was removed from the innovation board after she challenged \u201cthe Pentagon\u2019s cozy relationship not only with [Amazon CEO Jeff] Bezos, but with Google\u2019s Eric Schmidt.\u201d This relationship is potentially lucrative for big tech firms: The AI ethics recommendations appeared less than a week after the Pentagon awarded a $10 billion cloud-computing contract to Microsoft, which is being legally challenged by Amazon.<\/p>\n<blockquote><p><strong><em>The majority of well-funded work on \u201cethical AI\u201d is aligned with the tech lobby\u2019s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.<\/em><\/strong><\/p><\/blockquote>\n<p>The recommendations seek to compel the Pentagon to increase military investments in AI and to adopt \u201cethical AI\u201d systems such as those developed and sold by Silicon Valley firms. The innovation board calls the Pentagon a \u201cdeeply ethical organization\u201d and offers to extend its \u201cexisting ethics framework\u201d to AI. To this end, the board cites the AI ethics research groups at Google, Microsoft, and IBM, as well as academics sponsored by the MIT-Harvard fund. However, there are caveats. 
For example, the board notes that although \u201cthe term \u2018fairness\u2019 is often cited in the AI community,\u201d the recommendations avoid this term because of \u201cthe DoD mantra that fights should not be fair, as DoD aims to create the conditions to maintain an unfair advantage over any potential adversaries.\u201d Thus, \u201csome applications will be permissibly and justifiably biased,\u201d specifically \u201cto target certain adversarial combatants more successfully.\u201d The Pentagon\u2019s conception of AI ethics forecloses many important possibilities for moral deliberation, such as the prohibition of drones for targeted killing.<\/p>\n<p>The corporate, academic, and military proponents of \u201cethical AI\u201d have collaborated closely for mutual benefit. For example, Ito told me that he informally advised Schmidt on which academic AI ethicists Schmidt\u2019s private foundation should fund. Once, Ito even asked me for second-order advice on whether Schmidt should fund a certain professor who, like Ito, later served as an \u201cexpert consultant\u201d to the Pentagon\u2019s innovation board. In February, Ito joined Carter at a panel titled \u201cComputing for the People: Ethics and AI,\u201d which also included current and former executives of Microsoft and Google. The panel was part of the inaugural celebration of MIT\u2019s $1 billion college dedicated to AI. 
Other speakers at the celebration included Schmidt on \u201cComputing for the Marketplace,\u201d Siegel on \u201cHow I Learned to Stop Worrying and Love Algorithms,\u201d and Henry Kissinger on \u201cHow the Enlightenment Ends.\u201d As Kissinger warned of \u201ca world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms,\u201d a <a target=\"_blank\" href=\"https:\/\/thetech.com\/2019\/03\/07\/protest-college-of-computing-kissinger\" >protest outside the MIT auditorium<\/a> called attention to Kissinger\u2019s war crimes in Vietnam, Cambodia, and Laos, as well as his support of war crimes elsewhere. In the age of automated targeting, what atrocities will the U.S. military justify as governed by \u201cethical\u201d norms or as executed by machines beyond the scope of human agency and culpability?<\/p>\n<p>No defensible claim to \u201cethics\u201d can sidestep the urgency of legally enforceable restrictions on the deployment of technologies of mass surveillance and systemic violence. 
Until such restrictions exist, moral and political deliberation about computing will remain subsidiary to the profit-making imperative expressed by the Media Lab\u2019s motto, \u201cDeploy or Die.\u201d While some deploy, even if ostensibly \u201cethically,\u201d others die.<\/p>\n<p>________________________________________________<\/p>\n<p><em>Related:<\/em><\/p>\n<ul>\n<li><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/2019\/07\/23\/google-ai-gradient-ventures\/\" ><strong>Google Continues Investments in Military and Police AI Technology Through Venture Capital Arm<\/strong><\/a><\/em><\/li>\n<li><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/2018\/04\/13\/facebook-advertising-data-artificial-intelligence-ai\/\" ><strong>Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document<\/strong><\/a><\/em><\/li>\n<li><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/2019\/07\/21\/ai-race-china-artificial-intelligence\/\" ><strong>Why an \u201cAI Race\u201d Between the U.S. 
and China Is a Terrible, Terrible Idea<\/strong><\/a><\/em><\/li>\n<li><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/2018\/07\/30\/amazon-facial-recognition-police-military\/\" ><strong>Amazon Promises \u201cUnwavering\u201d Commitment to Police, Military Clients Using AI Technology<\/strong><\/a><\/em><\/li>\n<\/ul>\n<p style=\"padding-left: 40px;\"><em>\u00a0<\/em><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/staff\/rodrigo-ochigame\/\" >Rodrigo Ochigame<\/a><\/em><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/theintercept.com\/2019\/12\/20\/mit-ethical-ai-artificial-intelligence\/?utm_source=The+Intercept+Newsletter&amp;utm_campaign=0277d72712-EMAIL_CAMPAIGN_2019_12_21&amp;utm_medium=email&amp;utm_term=0_e00a5122d3-0277d72712-124136213\" >Go to Original \u2013 theintercept.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>20 Dec 2019 &#8211; A Silicon Valley lobby enrolled elite academia to avoid legal restrictions on AI.<\/p>\n","protected":false},"author":4,"featured_media":150354,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3078],"tags":[1733,910,304,461],"class_list":["post-150351","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-artificial-intelligence-ai","tag-big-brother","tag-science","tag-technology"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/150351","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=150351"}],"version-history":[{"count":1,"href":"https:\/\/www.tr
anscend.org\/tms\/wp-json\/wp\/v2\/posts\/150351\/revisions"}],"predecessor-version":[{"id":237707,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/150351\/revisions\/237707"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/150354"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=150351"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=150351"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=150351"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}