How Google’s Ad Business Funds Disinformation around the World
MEDIA, 31 Oct 2022
The largest-ever analysis of Google’s ad practices on non-English-language websites reveals how the tech giant makes disinformation profitable.
The company has publicly committed to fighting disinformation around the world, but a ProPublica analysis, the first ever conducted at this scale, documented how Google’s sprawling automated digital ad operation placed ads from major brands on global websites that spread false claims on such topics as vaccines, COVID-19, climate change and elections.
29 Oct 2022 – Google is funneling revenue to some of the web’s most prolific purveyors of false information in Europe, Latin America and Africa, a ProPublica investigation has found.
In one instance, Google continued to place ads on a publication in Bosnia and Herzegovina for months after the U.S. government officially imposed sanctions on the site. Google stopped doing business with the site, which the U.S. Treasury Department described as the “personal media station” of a prominent Bosnian Serb separatist politician, only after being contacted by ProPublica.
Google ads are a major source of revenue for sites that spread election disinformation in Brazil, notably false claims about the integrity of the voting system that have been advanced by the incumbent president, Jair Bolsonaro. Voters in Brazil are going to the polls on Sunday with the outcome in doubt after Bolsonaro’s unexpectedly strong showing in the first round of voting.
The investigation also revealed that Google routinely places ads on sites pushing falsehoods about COVID-19 and climate change in French-, German- and Spanish-speaking countries.
The resulting ad revenue is potentially worth millions of dollars to the people and groups running these and other unreliable sites — while also making money for Google.
Platforms such as Facebook have faced stark criticism for failures to crack down on disinformation spread by people and governments on their platforms around the world. But Google hasn’t faced the same scrutiny for how its roughly $200 billion in annual ad sales provides essential funding for non-English-language websites that misinform and harm the public.
Google’s publicly announced policies bar the placement of ads on content that makes unreliable or harmful claims on a range of issues, including health, climate, elections and democracy. Yet the investigation found Google regularly places ads, including those from major brands, on articles that appear to violate its own policy.
ProPublica’s examination showed that ads from Google are more likely to appear on misleading articles and websites that are in languages other than English, and that Google profits from advertising that appears next to false stories on subjects not explicitly addressed in its policy, including crime, politics, and such conspiracy theories as chemtrails.
A former Google leader who worked on trust and safety issues acknowledged that the company focuses heavily on English-language enforcement and is weaker across other languages and smaller markets. They told ProPublica it’s because Google invests in oversight based on three key concerns.
“The number one is bad PR — they are very sensitive to that. The second one is trying to avoid regulatory scrutiny or potentially regulatory action that could impact their business. And number three is revenue,” said the former leader, who agreed to speak on the condition that their name not be used in order not to hurt their business and career prospects. “For all these three, English-speaking markets primarily have the biggest impact. And that’s why most of the efforts are going into those.”
ProPublica used data provided by fact-checking newsrooms, researchers and website monitoring organizations to scan more than 13,000 active article pages from thousands of websites in more than half a dozen languages to determine whether they were currently earning ad revenue with Google. (To read a detailed breakdown of how ProPublica obtained and analyzed the data, see this accompanying article.)
The analysis found that Google placed ads on 41% of roughly 800 active online articles rated by members of the Poynter Institute’s International Fact-Checking Network as publishing false claims about COVID-19. The company also served ads on 20% of articles about climate change that Science Feedback, an IFCN-accredited fact-checking organization, has rated false.
A number of Google ads viewed by ProPublica appeared on articles published months or years ago, suggesting that the company’s failure to block ads on content that appears to violate its rules is a long-standing and ongoing problem.
In one example, Google recently placed ads for clothing brand St. John on a two-year-old Serbian article falsely claiming that cat owners don’t catch COVID-19. Google placed an ad for the American Red Cross on a May 2021 article from a far-right German site that claimed COVID-19 is comparable in danger to the flu. An ad for luxury retailer Coach was recently attached to an April article in Serbian that repeated the false claim that the COVID-19 vaccines change people’s DNA.
Last August, the Greek edition of the Epoch Times, a far-right U.S. publication connected to the Falun Gong spiritual movement, published an article that falsely claimed the sun, and not increased levels of carbon dioxide, could be responsible for global warming. That story had multiple Google ads when ProPublica viewed it, even though it appears to clearly violate Google’s policy against climate disinformation.
A spokesperson for the Red Cross said its ad appeared on the far-right German site due to an automated placement it did not directly control.
“Please note that based upon our Fundamental Principles of impartiality and neutrality, the Red Cross does not take sides in issues of a political, racial, religious or ideological nature, so we would purposefully not advertise on a story or site such as the one you shared with us,” said a statement from the organization.
Coach and St. John did not respond to requests for comment.
Google’s policy is to remove ads from individual articles that violate its rules, and to take sitewide action if violations reach a specific undisclosed threshold. Google removed ads from at least 14 websites identified in the investigation after being contacted by ProPublica.
Google spokesperson Michael Aciman said the company has put more money into non-English-language enforcement and oversight, which has led to an increase in the number of ads blocked on pages that violate its rules. He declined to provide figures or to say how many people Google has working on non-English-language content and ad review.
“We’ve developed extensive measures to tackle misinformation on our platform, including policies that cover elections, COVID-19 and climate change, and work to enforce our policies in over 50 languages,” Aciman said. “In 2021, we removed ads from more than 1.7 billion publisher pages and 63,000 sites globally. We know that our work is not done, and we will continue to invest in our enforcement systems to better detect unreliable claims and protect users around the world.”
The data about ad removals comes from Google’s most recent Ads Safety report, which emphasized the removal of ads from more than half a million pages that violated policies against harmful claims about COVID-19 and false claims that could undermine elections. But Google does not release a list of pages or publishers it took action against, the countries and languages they operate in or other data related to its Ads Safety report.
Google has been vocal about its $300 million commitment, announced in 2018, to fight misinformation, support fact-checkers and “help journalism thrive in the digital age.” But the investigation shows that as one arm of Google helps support fact-checkers, its core ad business provides critical revenue that ensures the publication of falsehoods remains profitable.
Laura Zommer, the general director of the Argentina-based Chequeado, founded in 2010 as the first fact-checking organization in Latin America, said Google’s failure to invest in oversight of sites in languages other than English causes serious harm in emerging democracies.
“The problem is that disinformation that takes hold in less developed democracies can cause even more damage than the disinformation circulating in countries with more developed democracies,” said Zommer, who is also the co-founder of Factchequeado, an initiative to counter Spanish-language disinformation in the U.S.
In Serbia, Croatia and Bosnia, three Balkan countries where democracy is fragile, 26 of the 30 most prolific publishers of false and misleading claims in the region earn money from Google, according to data from local fact-checkers.
“If the world’s largest online advertising platform doesn’t care that it has made false information, hate speech and toxic propaganda profitable in societies like ours, and has no intention to do anything to change because it wouldn’t financially pay off, that is devastating,” said Tijana Cvjetićanin, a member of the editorial board of Bosnian fact-checking site Raskrinkavanje, which shared data with ProPublica.
A comparison with English-language outlets suggests Google is more rigorous in choosing its publisher partners in that language. ProPublica found Google placed ads on 13% of English-language websites that NewsGuard deemed unreliable for having repeatedly published false content or deceptive headlines and failing to meet transparency standards. In contrast, ProPublica’s analysis found anywhere from 30% to 90% of the sites most often flagged for false claims by fact-checkers in the non-English languages examined were monetizing with Google.
Along with unequal enforcement across languages, ProPublica found disparity across and within regions.
Africa Check shared a list of 68 active English-language URLs that had been fact-checked as false by teams in South Africa, Nigeria and Kenya since 2019, as well as 45 French-language articles that had been debunked by its French-language checkers. ProPublica’s analysis found that 57% of debunked English-language articles in Africa had ads from Google, while the percentage was higher, 66%, for French-language articles.
Alexandre Alaphilippe, executive director of the EU Disinfo Lab, a non-profit organization that researches disinformation, said Google should be required to equally enforce its policies across languages and regions and to be transparent about its oversight decisions.
“These companies have decided to go global in their services, and that was their own decision for growth and to make revenue,” he said. “It’s not possible to make this choice and not face the accountability needed to be in all of these countries at the same time.”
Google’s Global Ad Dominance
Google is the world’s biggest digital advertising business. Last year it generated a record $257 billion in revenue. Most of that money comes from companies paying to place ads on Google products such as search and YouTube. But in 2021 Google earned $31 billion by placing its customers’ ads on more than 2 million websites around the world. They’re part of what the company calls the Google Display Network.
These publishing partners range from major news outlets such as The New York Times to small sites run by individuals. In order to join the Google Display Network, a publisher must meet requirements that include publishing original content and adhering to policies against unreliable and harmful claims and sexually explicit content, among others. Once accepted, Google says, publishers in the network receive 68% of the money spent on each ad placed on their site.
Google’s ad systems are also used to place ads on websites that are not necessarily members of its Display Network. These publishers work with ad technology companies that have partnered with Google, and which use its technology to buy and sell ads. As with ads placed on sites in the Display Network, Google and the publisher both earn money.
Google places ads on publisher sites using an automated auction system called programmatic advertising. The process starts when a person visits a webpage or opens an app. As the page loads, the site or app owner collects information about the ad space available along with data about the user, which can include location, age range, browsing history and interests.
The data is sent to an ad exchange like the one operated by Google, where ad buyers — ranging from major brands like Spotify to smaller local businesses — can place a bid to show an ad to the specific user visiting the website or app. Bids are placed, or not, based on the user and publisher data shared with potential advertisers and the price an advertiser is willing to pay to reach that person.
In the blink of an eye, the top bidder wins the auction and the ad loads on the page. Money flows from the ad buyer to the ad exchange (and any other intermediaries involved in the transaction), eventually making its way to the website or app publisher.
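The auction flow described above can be sketched in a few lines of code. This is a simplified illustration, not Google’s actual system: the advertiser names and bid amounts are hypothetical, and the 68% publisher share is the figure Google discloses for its Display Network, applied here as a flat split.

```python
# Simplified sketch of a programmatic ad auction as described above.
# Advertisers and bid prices are hypothetical; real exchanges apply
# far richer targeting, pricing and policy logic.

def run_auction(bids, publisher_share=0.68):
    """Pick the highest bid for an impression and split the revenue
    between the publisher and the exchange."""
    if not bids:
        return None  # no advertiser chose to bid on this user/page
    winner = max(bids, key=lambda b: b["bid"])
    price = winner["bid"]
    return {
        "winner": winner["advertiser"],
        "price": price,
        "publisher_revenue": round(price * publisher_share, 4),
        "exchange_revenue": round(price * (1 - publisher_share), 4),
    }

# Hypothetical bids, in dollars per impression, submitted after the
# exchange shares user and page data with potential advertisers.
bids = [
    {"advertiser": "BrandA", "bid": 0.012},
    {"advertiser": "BrandB", "bid": 0.009},
]
result = run_auction(bids)
```

The key point the sketch captures is that the exchange and the publisher both profit from every impression sold, regardless of what content the page contains.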
In 2019, the Global Disinformation Index, a nonprofit that analyzes websites for false and misleading content, estimated that disinformation websites earned $250 million per year in revenue, of which Google was responsible for 40% and the rest came from other ad tech companies. NewsGuard, which employs human reviewers to evaluate and rate websites based on a set of criteria including accuracy, estimated in 2021 the annual ad revenue earned by sites spreading false or misleading claims is $2.6 billion. The report did not say how much of that Google might be responsible for.
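As a back-of-the-envelope check, the two estimates above can be combined using only the figures the organizations reported; the implied dollar amount for Google is an inference from those published numbers, not a figure either group stated directly.

```python
# Arithmetic from the cited estimates only.
gdi_total = 250_000_000    # GDI's 2019 estimate of annual disinformation ad revenue
google_share = 0.40        # portion GDI attributed to Google

# Implied Google-attributed revenue under GDI's estimate: $100 million/year
google_revenue = gdi_total * google_share
```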
How much of Google’s revenue comes from monetizing false and misleading content is difficult to estimate. Each of the billions of digital display ads placed every day by Google has a different price point that fluctuates based on the combination of advertiser, target website and the users the ad will be shown to. It’s all part of a complex, opaque and largely automated digital ad buying and selling process dominated by Google. This means advertisers have to rely in part on the mix of automation and human review Google uses to ensure its publisher partners don’t violate its rules.
The findings of fact-checkers could be used by Google to enforce its policy against placing ads next to content that makes unreliable and harmful claims. There are more than 350 fact-checking projects around the world that employ journalists, and in some cases scientists, to identify and investigate claims spreading on the web, on social media and in traditional media. Their articles and associated ratings are used by platforms including Meta to help enforce policies around false and harmful content. Google already highlights fact-checks in search and Google News results to direct people to trustworthy information. But the company does not use fact-checks to keep ads off of pages with unreliable or harmful claims. And unlike Meta and TikTok, it does not pay fact-checkers for the results of their research.
“When it comes to ads, they obviously monetize disinformation. Whether it’s without knowing or knowing, it doesn’t matter,” said Baybars Örsek, director of the International Fact-Checking Network. “There has never been a public announcement from Google’s side that has acknowledged fact-checking as a signal for their ads monetization business.”