{"id":106988,"date":"2018-03-05T12:00:58","date_gmt":"2018-03-05T12:00:58","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=106988"},"modified":"2018-03-01T13:36:33","modified_gmt":"2018-03-01T13:36:33","slug":"he-predicted-the-2016-fake-news-crisis-now-hes-worried-about-an-information-apocalypse","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2018\/03\/he-predicted-the-2016-fake-news-crisis-now-hes-worried-about-an-information-apocalypse\/","title":{"rendered":"He Predicted the 2016 Fake News Crisis &#8211; Now He&#8217;s Worried about an Information Apocalypse"},"content":{"rendered":"<blockquote><p><em>\u201cWhat happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?&#8221; <\/em><\/p><\/blockquote>\n<p><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/mask-media.jpg\" ><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-106989\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/mask-media-1024x704.jpg\" alt=\"\" width=\"500\" height=\"344\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/mask-media-1024x704.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/mask-media-300x206.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/mask-media-768x528.jpg 768w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/mask-media.jpg 1600w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a><\/p>\n<p><em>12 Feb 2018 &#8211; <\/em>In mid-2016, <a target=\"_blank\" href=\"https:\/\/twitter.com\/metaviv?lang=en\" >Aviv Ovadya<\/a> realized there was something fundamentally wrong with the internet \u2014 so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco\u2019s Bay Area and warned of an impending crisis of misinformation in a presentation he titled \u201cInfocalypse.\u201d<\/p>\n<p>The web and the information ecosystem that had developed around it was wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading and polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn\u2019t shake the feeling that it was all building toward something bad \u2014 a kind of critical threshold of addictive and toxic misinformation. 
The presentation was largely ignored by employees from the Big Tech platforms \u2014 including a few from Facebook who would later go on to drive the company\u2019s NewsFeed integrity effort.<\/p>\n<div id=\"attachment_106990\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/Aviv-Ovadya.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-106990\" class=\"wp-image-106990\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/Aviv-Ovadya-300x200.jpg\" alt=\"\" width=\"400\" height=\"266\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/Aviv-Ovadya-300x200.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/Aviv-Ovadya.jpg 715w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-106990\" class=\"wp-caption-text\">Aviv Ovadya &#8211; Stephen Lam for BuzzFeed News<\/p><\/div>\n<p>\u201cAt the time, it felt like we were in a car careening out of control and it wasn\u2019t just that everyone was saying, \u2018we\u2019ll be fine\u2019 \u2014 it\u2019s that they didn&#8217;t even see the car,\u201d he said.<\/p>\n<p>Ovadya saw early what many \u2014 including lawmakers, journalists, and Big Tech CEOs \u2014 wouldn\u2019t grasp until months later: Our platformed and algorithmically optimized world is vulnerable \u2014 to propaganda, to misinformation, to dark targeted advertising from foreign governments \u2014 so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.<\/p>\n<p>But it\u2019s what he sees coming next that will really scare the shit out of you.<\/p>\n<p>\u201cAlarmism can be good \u2014 you should be alarmist about this stuff,\u201d Ovadya said one January afternoon before calmly outlining a deeply unsettling projection about the next two decades of fake news, artificial intelligence\u2013assisted misinformation campaigns, and propaganda. \u201cWe are so screwed it&#8217;s beyond what most of us can imagine,\u201d he said. \u201cWe were utterly screwed a year and a half ago and we&#8217;re even more screwed now. And depending how far you look into the future it just gets worse.\u201d<\/p>\n<p>That future, according to Ovadya, will arrive with a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality, for which terms have already been coined \u2014 \u201creality apathy,\u201d \u201cautomated laser phishing,\u201d and &#8220;human puppets.&#8221;<\/p>\n<p>Which is why <a target=\"_blank\" href=\"http:\/\/aviv.me\/\" >Ovadya, an MIT grad with engineering stints at tech companies like Quora<\/a>, dropped everything in early 2016 to try to prevent what he saw as a Big Tech\u2013enabled information crisis. \u201cOne day something just clicked,\u201d he said of his awakening. It became clear to him that, if somebody were to exploit our attention economy and use the platforms that undergird it to distort the truth, there were no real checks and balances to stop it. 
\u201cI realized if these systems were going to go out of control, there\u2019d be nothing to rein them in and it was going to get bad, and quick,\u201d he said.<\/p>\n<blockquote><p><strong><em>&#8220;We were utterly screwed a year and a half ago and we&#8217;re even more screwed now&#8221; <\/em><\/strong><\/p><\/blockquote>\n<p>Today Ovadya and a cohort of loosely affiliated researchers and academics are anxiously looking ahead \u2014 toward a future that is alarmingly dystopian. They\u2019re running war game\u2013style disaster scenarios based on technologies that have begun to pop up, and the outcomes are typically disheartening.<\/p>\n<p>For Ovadya \u2014 now the chief technologist for the University of Michigan\u2019s Center for Social Media Responsibility and a Knight News innovation fellow at the Tow Center for Digital Journalism at Columbia \u2014 the shock and ongoing anxiety over Russian Facebook ads and Twitter bots pales in comparison to the greater threat: Technologies that can be used to enhance and distort what is real are evolving faster than our ability to understand and control or mitigate them. The stakes are high and the possible consequences more disastrous than foreign meddling in an election \u2014 an undermining or upending of core civilizational institutions, an &#8220;infocalypse.\u201d And Ovadya says that this one is just as plausible as the last one \u2014 and worse.<\/p>\n<p>Worse because of our ever-expanding computational prowess; worse because of ongoing advancements in artificial intelligence and machine learning that can blur the lines between fact and fiction; worse because those things could usher in a future where, as Ovadya observes, anyone could make it \u201cappear as if anything has happened, regardless of whether or not it did.\u201d<\/p>\n<blockquote><p><strong><em>&#8220;What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?&#8221; <\/em><\/strong><\/p><\/blockquote>\n<p>And much in the way that foreign-sponsored, targeted misinformation campaigns didn&#8217;t feel like a plausible near-term threat until we realized that it was already happening, Ovadya cautions that fast-developing tools powered by artificial intelligence, machine learning, and augmented reality tech could be hijacked and used by bad actors to imitate humans and wage an information war.<\/p>\n<p>And we\u2019re closer than one might think to a potential \u201cInfocalypse.\u201d Already available tools for audio and video manipulation have begun to look like a potential fake news Manhattan Project. In the murky corners of the internet, people have begun using machine learning algorithms and open-source software to easily <a target=\"_blank\" href=\"https:\/\/motherboard.vice.com\/en_us\/article\/bjye8a\/reddit-fake-porn-app-daisy-ridley\" >create pornographic videos that realistically superimpose the faces of celebrities<\/a> \u2014 or anyone for that matter \u2014 on the adult actors\u2019 bodies. At institutions like Stanford, technologists have built programs that <a target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=ohmajJTcpNk\" >combine and mix recorded video footage<\/a> with real-time face tracking to manipulate video. 
Similarly, at the University of Washington, computer scientists successfully built a program capable of \u201c<a target=\"_blank\" href=\"http:\/\/grail.cs.washington.edu\/projects\/AudioToObama\/\" >turning audio clips into a realistic, lip-synced video<\/a> of the person speaking those words.\u201d As proof of concept, both teams manipulated broadcast video to make world leaders appear to say things they never actually said.<\/p>\n<p>https:\/\/www.youtube.com\/watch?v=MVBe6_o4cMI<\/p>\n<p>As these tools become democratized and widespread, Ovadya notes that the worst-case scenarios could be extremely destabilizing.<\/p>\n<p>There\u2019s \u201cdiplomacy manipulation,\u201d in which a malicious actor uses advanced technology to \u201ccreate the belief that an event has occurred\u201d to influence geopolitics. Imagine, for example, a machine-learning algorithm (which analyzes gobs of data in order to teach itself to perform a particular function) fed on hundreds of hours of footage of Donald Trump or North Korean dictator Kim Jong Un, which could then spit out a near-perfect \u2014 and virtually impossible to distinguish from reality \u2014 audio or video clip of the leader declaring nuclear or biological war. \u201cIt doesn\u2019t have to be perfect \u2014 just good enough to make the enemy think something happened that it provokes a knee-jerk and reckless response of retaliation.\u201d<\/p>\n<blockquote><p><strong><em>&#8220;It doesn\u2019t have to be perfect \u2014 just good enough&#8221; <\/em><\/strong><\/p><\/blockquote>\n<p>Another scenario, which Ovadya dubs \u201cpolity simulation,\u201d is a dystopian combination of political botnets and astroturfing, where political movements are manipulated by fake grassroots campaigns. In Ovadya\u2019s envisioning, increasingly believable AI-powered bots will be able to effectively compete with real humans for legislator and regulator attention because it will be too difficult to tell the difference. Building upon previous iterations, where public discourse is manipulated, it may soon be possible to directly jam congressional switchboards with heartfelt, believable algorithmically generated pleas. Similarly, Senators&#8217; inboxes could be flooded with messages from constituents that were cobbled together by machine-learning programs working off stitched-together content culled from text, audio, and social media profiles.<\/p>\n<p>Then there\u2019s automated laser phishing, a tactic Ovadya notes security researchers are already whispering about. Essentially, it&#8217;s using AI to scan things, like our social media presences, and craft false but believable messages from people we know. 
The game changer, according to Ovadya, is that something like laser phishing would allow bad actors to target anyone and to create a believable imitation of them using publicly available data.<\/p>\n<div id=\"attachment_106991\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/hands.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-106991\" class=\"wp-image-106991\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/hands-300x200.jpg\" alt=\"\" width=\"400\" height=\"267\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/hands-300x200.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/hands-768x512.jpg 768w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/hands-1024x682.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/hands.jpg 1040w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-106991\" class=\"wp-caption-text\">Stephen Lam for BuzzFeed News<\/p><\/div>\n<p>\u201cPreviously one would have needed to have a human to mimic a voice or come up with an authentic fake conversation \u2014 in this version you could just press a button using open source software,\u201d Ovadya said. \u201cThat\u2019s where it becomes novel \u2014 when anyone can do it because it\u2019s trivial. Then it\u2019s a whole different ball game.\u201d<\/p>\n<p>Imagine, he suggests, phishing messages that aren\u2019t just a confusing link you might click, but a personalized message with context. \u201cNot just an email, but an email from a friend that you\u2019ve been anxiously waiting for for a while,\u201d he said. \u201cAnd because it would be so easy to create things that are fake you&#8217;d become overwhelmed. If every bit of spam you receive looked identical to emails from real people you knew, each one with its own motivation trying to convince you of something, you\u2019d just end up saying, \u2018okay, I&#8217;m going to ignore my inbox.\u2019\u201d<\/p>\n<div id=\"attachment_106992\" style=\"width: 610px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/media-youtube-fake-news.jpeg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-106992\" class=\"wp-image-106992\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/media-youtube-fake-news.jpeg\" alt=\"\" width=\"600\" height=\"310\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/media-youtube-fake-news.jpeg 715w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/media-youtube-fake-news-300x155.jpeg 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><p id=\"caption-attachment-106992\" class=\"wp-caption-text\">Via YouTube<\/p><\/div>\n<p>That can lead to something Ovadya calls \u201creality apathy\u201d: Beset by a torrent of constant misinformation, people simply start to give up. Ovadya is quick to remind us that this is common in areas where information is poor and thus assumed to be incorrect. The big difference, Ovadya notes, is the adoption of apathy to a developed society like ours. The outcome, he fears, is not good. 
\u201cPeople stop paying attention to news and that fundamental level of informedness required for functional democracy becomes unstable.\u201d<\/p>\n<p>Ovadya (and other researchers) see laser phishing as an inevitability. \u201cIt\u2019s a threat for sure, but even worse \u2014 I don&#8217;t think there&#8217;s a solution right now,\u201d he said. \u201cThere&#8217;s internet scale infrastructure stuff that needs to be built to stop this if it starts.\u201d<\/p>\n<p>Beyond all this, there are other long-range nightmare scenarios that Ovadya describes as &#8220;far-fetched,&#8221; but they&#8217;re not so far-fetched that he&#8217;s willing to rule them out. And they are frightening. &#8220;Human puppets,&#8221; for example \u2014 a black market version of a social media marketplace with people instead of bots. \u201cIt\u2019s essentially a mature future cross border market for manipulatable humans,\u201d he said.<\/p>\n<p>Ovadya\u2019s premonitions are particularly terrifying given the ease with which our democracy has already been manipulated by the most rudimentary, blunt-force misinformation techniques. The scamming, deception, and obfuscation that\u2019s coming is nothing new; it\u2019s just more sophisticated, much harder to detect, and working in tandem with other technological forces that are not only currently unknown but likely unpredictable.<\/p>\n<p>For those paying close attention to developments in artificial intelligence and machine learning, none of this feels like much of a stretch. Software <a target=\"_blank\" href=\"http:\/\/research.nvidia.com\/sites\/default\/files\/publications\/karras2017gan-paper-v2.pdf\" >currently in development at the chip manufacturer Nvidia<\/a> can already convincingly generate hyperrealistic photos of objects, people, and even some landscapes by <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/interactive\/2018\/01\/02\/technology\/ai-generated-photos.html\" >scouring tens of thousands<\/a> of images. Adobe also recently piloted two projects \u2014 <a target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=I3l4XLZ59iw\" >Voco<\/a> and Cloak \u2014 the first a &#8220;Photoshop for audio,&#8221; the second a tool that can seamlessly remove objects (and people!) from video in a matter of clicks.<\/p>\n<p>In some cases, the technology is so good that it\u2019s startled even its creators. Ian Goodfellow, a <a target=\"_blank\" href=\"https:\/\/www.technologyreview.com\/lists\/innovators-under-35\/2017\/inventor\/ian-goodfellow\/\" >Google Brain research scientist<\/a> who helped code the first \u201cgenerative adversarial network\u201d (GAN), which is a neural network capable of learning without human supervision, cautioned that AI could set news consumption back roughly 100 years. 
At an MIT Technology Review conference in November last year, <a target=\"_blank\" href=\"https:\/\/www.technologyreview.com\/s\/609358\/ai-could-send-us-back-100-years-when-it-comes-to-how-we-consume-news\/\" >he told an audience<\/a> that GANs have both \u201cimagination and introspection\u201d and \u201ccan tell how well the generator is doing without relying on human feedback.\u201d And that, while the creative possibilities for the machines are boundless, the innovation, when applied to the way we consume information, would likely \u201cclos[e] some of the doors that our generation has been used to having open.\u201d<\/p>\n<div id=\"attachment_106993\" style=\"width: 610px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/images-celebrities-photos.jpeg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-106993\" class=\"wp-image-106993\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/images-celebrities-photos.jpeg\" alt=\"\" width=\"600\" height=\"335\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/images-celebrities-photos.jpeg 715w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/images-celebrities-photos-300x167.jpeg 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><p id=\"caption-attachment-106993\" class=\"wp-caption-text\">Images of fake celebrities created by generative adversarial networks (GANs).<br \/> Tero Karras FI \/ YouTube \/ Via youtube.com<\/p><\/div>\n<p>In that light, scenarios like Ovadya\u2019s polity simulation feel genuinely plausible. This summer, more than one million fake bot accounts flooded the FCC\u2019s open comments system to \u201c<a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/bots-broke-fcc-public-comment-system\/\" >amplify the call to repeal net neutrality protections<\/a>.\u201d Researchers concluded that automated comments \u2014 some using natural language processing to appear real \u2014 obscured legitimate comments, undermining the authenticity of the entire open comments system. Ovadya nods to the FCC example as well as the recent <a target=\"_blank\" href=\"https:\/\/www.politico.com\/magazine\/story\/2018\/02\/04\/trump-twitter-russians-release-the-memo-216935\" >bot-amplified #releasethememo<\/a> campaign as a blunt version of what&#8217;s to come. &#8220;It can just get so much worse,&#8221; he said.<\/p>\n<blockquote><p><strong><em>\u201cYou don&#8217;t need to create the fake video for this tech to have a serious impact. You just point to the fact that the tech exists and you can impugn the integrity of the stuff that\u2019s real.\u201d <\/em><\/strong><\/p><\/blockquote>\n<p>Arguably, this sort of erosion of authenticity and the integrity of official statements altogether is the most sinister and worrying of these future threats. \u201cWhether it\u2019s AI, peculiar Amazon manipulation hacks, or fake political activism \u2014 these technological underpinnings [lead] to the increasing erosion of trust,\u201d computational propaganda researcher Renee DiResta said of the future threat. 
\u201cIt makes it possible to cast aspersions on whether videos \u2014 or advocacy for that matter \u2014 are real.\u201d DiResta pointed out Donald Trump\u2019s <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2017\/11\/28\/us\/politics\/trump-access-hollywood-tape.html\" >recent denial that it was his voice<\/a> on the infamous <em>Access Hollywood<\/em> tape, citing experts who told him it\u2019s possible it was digitally faked. \u201cYou don&#8217;t need to create the fake video for this tech to have a serious impact. You just point to the fact that the tech exists and you can impugn the integrity of the stuff that\u2019s real.\u201d<\/p>\n<p>It\u2019s why researchers and technologists like DiResta \u2014 <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2017\/11\/12\/technology\/social-media-disinformation.html\" >who spent years of her spare time<\/a> advising the Obama administration, and now members of the Senate Intelligence Committee, against disinformation campaigns from trolls \u2014 and Ovadya (though they work separately) are beginning to talk more about the looming threats. Last week, the NYC Media Lab, which helps the city\u2019s companies and academics collaborate, announced a plan to bring together technologists and researchers in June to \u201cexplore worst case scenarios\u201d for the future of news and tech. The event, which they\u2019ve named Fake News Horror Show, is billed as \u201ca science fair of terrifying propaganda tools \u2014 some real and some imagined, but all based on plausible technologies.\u201d<\/p>\n<p>\u201cIn the next two, three, four years we\u2019re going to have to plan for hobbyist propagandists who can make a fortune by creating highly realistic, photo realistic simulations,\u201d Justin Hendrix, the executive director of NYC Media Lab, told BuzzFeed News. \u201cAnd should those attempts work, and people come to suspect that there&#8217;s no underlying reality to media artifacts of any kind, then we&#8217;re in a really difficult place. It&#8217;ll only take a couple of big hoaxes to really convince the public that nothing\u2019s real.\u201d<\/p>\n<p>Given the early dismissals of the efficacy of misinformation \u2014 like Facebook CEO Mark Zuckerberg\u2019s now-infamous statement that it was &#8220;crazy&#8221; that fake news on his site played a crucial role in the 2016 election \u2014 the first step for researchers like Ovadya is a daunting one: Convince the greater public, as well as lawmakers, university technologists, and tech companies, that a reality-distorting information apocalypse is not only plausible, but close at hand.<\/p>\n<blockquote><p><strong><em>&#8220;It&#8217;ll only take a couple of big hoaxes to really convince the public that nothing\u2019s real.&#8221; <\/em><\/strong><\/p><\/blockquote>\n<p>A senior federal employee explicitly tasked with investigating information warfare told BuzzFeed News that even he&#8217;s not certain how many government agencies are preparing for scenarios like the ones Ovadya and others describe. \u201cWe&#8217;re less on our back feet than we were a year ago,&#8221; he said, before noting that that&#8217;s not nearly good enough. \u201cI think about it from the sense of the enlightenment \u2014 which was all about the search for truth,\u201d the employee told BuzzFeed News. \u201cI think what you\u2019re seeing now is an attack on the enlightenment \u2014 and enlightenment documents like the Constitution \u2014 by adversaries trying to create a post-truth society. 
And that\u2019s a direct threat to the foundations of our current civilization.&#8221;<\/p>\n<p>That\u2019s a terrifying thought \u2014 more so because forecasting this kind of stuff is so tricky. Computational propaganda is far more qualitative than quantitative \u2014 a climate scientist can point to explicit data showing rising temperatures, whereas it\u2019s virtually impossible to build a trustworthy prediction model mapping the future impact of yet-to-be-perfected technology.<\/p>\n<p>For technologists like the federal employee, the only viable way forward is to urge caution, to weigh the moral and ethical implications of the tools being built and, in so doing, avoid the Frankensteinian moment when the creature turns to you and asks, &#8220;Did you ever consider the consequences of your actions?&#8221;<\/p>\n<p>&#8220;I\u2019m from the free and open source culture \u2014 the goal isn&#8217;t to stop technology but ensure we&#8217;re in an equilibria that&#8217;s positive for people. So I\u2019m not just shouting \u2018this is going to happen,&#8217; but instead saying, \u2018consider it seriously, examine the implications,&#8217;&#8221; Ovadya told BuzzFeed News. \u201cThe thing I say is, \u2018trust that this isn&#8217;t not going to happen.\u2019\u201d<\/p>\n<p>Hardly an encouraging pronouncement. That said, Ovadya does admit to a bit of optimism. There\u2019s more interest in the computational propaganda space than ever before, and those who were previously slow to take threats seriously are now more receptive to warnings. \u201cIn the beginning it was really bleak \u2014 few listened,\u201d he said. &#8220;But the last few months have been really promising. Some of the checks and balances are beginning to fall into place.&#8221; Similarly, there are solutions to be found \u2014 like cryptographic verification of images and audio, which could help distinguish what&#8217;s real and what&#8217;s manipulated.<\/p>\n<p>Still, Ovadya and others warn that the next few years could be rocky. Despite some pledges for reform, he feels the platforms are still governed by the wrong, sensationalist incentives, where clickbait and lower-quality content is rewarded with more attention. &#8220;That&#8217;s a hard nut to crack in general, and when you combine it with a system like Facebook, which is a content accelerator, it becomes very dangerous.&#8221;<\/p>\n<p>Just how far out we are from that danger remains to be seen. Asked about the warning signs he\u2019s keeping an eye out for, Ovadya paused. \u201cI\u2019m not sure, really. Unfortunately, a lot of the warning signs have already happened.\u201d<\/p>\n<p>______________________________________________<\/p>\n<p style=\"padding-left: 30px;\"><em><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/charliewarzel-v2-13394-1427384276-23_large.jpg\" ><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-106996\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2018\/03\/charliewarzel-v2-13394-1427384276-23_large.jpg\" alt=\"\" width=\"70\" height=\"70\" \/><\/a>Charlie Warzel is a senior writer for <\/em>BuzzFeed<em> News and is based in New York. Warzel reports on and writes about the intersection of tech and culture. Contact: <a href=\"mailto:charlie.warzel@buzzfeed.com\">charlie.warzel@buzzfeed.com<\/a>. 
<\/em><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.buzzfeed.com\/charliewarzel\/the-terrifying-future-of-fake-news?utm_term=.asRaXOOxG#.kvpNlJJ78\" >Go to Original \u2013 buzzfeed.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u201cWhat happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?&#8221; <\/p>\n","protected":false},"author":4,"featured_media":106989,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[62],"tags":[],"class_list":["post-106988","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-media"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/106988","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=106988"}],"version-history":[{"count":0,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/106988\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/106989"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=106988"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=106988"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=106988"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}