Rotten Science: How We Got Duped by Fake Chocolate Science
ACADEMIA-KNOWLEDGE-SCHOLARSHIP, 1 Jun 2015
28 May 2015 – It’s been a bad month for science. First, a highly cited study published in Science, which found that voters’ opinions on same-sex marriage could be swayed by conversations with openly gay canvassers, was retracted after UC Berkeley grad students pointed out irregularities and one of the study’s authors failed to produce the raw data. That author may also have lied about who funded the study. Then, yesterday, science journalist John Bohannon revealed in io9 how he tricked the world into thinking that a daily dose of chocolate could help you shed pounds, by publishing a bogus study in the journal International Archives of Medicine.
On Retraction Watch, a site dedicated to tracking scientific papers that are retracted, newly debunked studies are piling up. The reason we keep getting duped is that science isn’t just science anymore. It’s Big Business. And it’s time we start thinking about it that way because, as in any big industry, there are some disturbing things going on that most people outside of scientific circles don’t know about — but should. After all, these missteps affect our lives.
Some poor sweet-toothed soul probably started eating chocolate in the hopes that it would help the number on the scale go down. That sounds silly enough, but other science frauds have had far more serious ramifications. The anti-vaccine movement is the direct result of a now widely refuted, and since retracted, study in a high-caliber journal linking childhood vaccines to autism.
People are outraged that science is broken, but part of it is on us. There are ways to avoid being duped by the charlatans. We should always have our BS radar on when reading about science, be it in a mainstream publication or in an academic journal. Here are things to remember to be a better, more skeptical reader of science news.
Journals are money-making operations.
Not every journal is created equal. There’s a very clear hierarchy based on something called the impact factor, a measure of how often a journal’s recent articles are cited in a given year. Top-tier journals, like Science, Nature and Cell, have impact factors in the 30s. The higher the impact factor of the journal, the more validity your paper gets by association. That’s why most people didn’t initially question the canvassing study, which came with Science’s high-caliber seal of approval. But that’s like saying that every person who goes to an Ivy League school is a prodigy. It’s simply not true.
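The arithmetic behind the impact factor is simple. Here’s a minimal sketch, using invented numbers rather than real data for any journal: citations received in a given year to the journal’s articles from the previous two years, divided by the number of citable articles it published in those two years.

```python
# Illustrative sketch of the impact-factor calculation.
# The numbers below are invented, not real data for any journal.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Citations in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items from Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# A journal whose 1,600 recent articles drew 52,000 citations this year:
print(round(impact_factor(52_000, 1_600), 1))  # prints 32.5
```

By this yardstick, an impact factor in the 30s means each recent article was cited, on average, more than thirty times in a single year.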
On the other end of the impact spectrum, there’s been a huge proliferation of crappy ‘journals,’ if you can even call them that. In 2013, the journalist behind the chocolate stunt wrote another bogus paper, about a cancer-fighting chemical, and submitted it to a long list of journals. It was part of an investigation into open-access publishing, testing whether journals actually conduct rigorous peer review—the process by which other scientists judge the merit of a given study. His shitty study was accepted by more than half the journals he submitted it to. So what’s going on? According to Bohannon, these journals are nothing more than money-making machines:
Open-access scientific journals have mushroomed into a global industry, driven by author publication fees rather than traditional subscriptions. Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured… Internet Protocol (IP) address traces within the raw headers of e-mails sent by journal editors betray their locations. Invoices for publication fees reveal a network of bank accounts based mostly in the developing world.
If scientists actually read these studies, they’d be able to tell the results weren’t reliable, and the findings wouldn’t go anywhere because no one would cite them. (In science, the number of citations you rack up is a measure of how important your research has been.) But we now live in the age of the Internet, where anyone can access scientific studies, whole or as abstracts, and blog about them. That can be a recipe for pseudoscience gone viral. And that’s exactly what happened with Bohannon’s chocolate study. Several media organizations, including Shape, Cosmopolitan’s German website, and the Daily Star, covered it. Oops.
Peer review is flawed.
The public, probably thanks to how the media covers science, often regards the findings in peer-reviewed papers as irrefutable fact. But peer review is highly imperfect. As Bohannon points out in io9, his study was published just two weeks after he and his co-authors paid the journal’s publication fee. The journal claims to conduct rigorous peer review, but that’s doubtful. “Not a single word was changed,” he wrote.
Having peer-reviewed papers myself, I can tell you that’s not the norm. At least it shouldn’t be. If you’re submitting to a respected journal like Nature, Cell, Neuron, PNAS, Science, or the Journal of the American Medical Association, a paper has to pass first through a staff editor, and then through a handful of reviewers, other scientists who are specialists in the area of research the paper deals with. (Yes, a tiny group of people gets to decide what is good science and what isn’t.) Usually, good reviewers tease a paper apart, looking for holes in the data. Often, they’ll ask for extra experiments to round out a study, or suggest ways in which an experiment could be done better. This is supposed to serve as a filter, a quality-control mechanism to weed out bad studies. Of course, it doesn’t always work: the latest Science retraction is testament to that. But the problem is bigger. A recent large-scale effort, for instance, found that many of 100 prominent psychology papers could not be reproduced.
For now, peer review is the best we’ve got, which is to say we need to do better. Some journal startups, like PeerJ, are trying out a more crowdsourced approach to peer review, but the jury is still out on whether this model works.
Part of the problem is that reviewers aren’t paid; it’s a volunteer gig. You get an email from a journal asking you to review a paper. Can we really expect overworked scientists to dedicate the necessary time to dissecting each paper they’re asked to review? Not always.
Also, what passes peer review can reflect people’s biases rather than a valid assessment of the research. Take, for instance, deep learning—the AI technique that’s en vogue right now at all of the biggest tech companies. In the early aughts, top researchers in this field, like Facebook’s Yann LeCun and Google’s Geoff Hinton, had a hard time getting their papers published if they so much as mentioned deep learning, because most computer-vision experts thought the technique was subpar, despite some promising results and commercial applications, like reading digits on checks. Now, things have shifted because the science politics have shifted. Hinton and his grad students, for instance, blew competitors away in the 2012 ImageNet computer-vision competition, and the rest of the field had no choice but to take notice. Now deep-learning summits are standing-room only, and journals like Nature are publishing articles on the topic.
Science journals can fall prey to trends, and even chase Internet memes. What we see in journals isn’t absolute truth or reflective of the entirety of what’s happening in the scientific world. The gatekeepers’ biases play an important part in what gets published where. Keep that in mind.
Enter the rat race to the top.
Because we place such a high premium on high-impact, peer reviewed papers, scientists are judged almost exclusively by their publication record. As Adam Marcus and Ivan Oransky wrote in their editorial about scientific fraud in The New York Times, “scientists view high-profile journals as the pinnacle of success — and they’ll cut corners, or worse, for a shot at glory.”
A paper in Science or Nature as a grad student can set you on the road to academic success. If you can keep that up through your post-doc years, you might even get a shot at a tenure track position. And getting one of those is increasingly difficult. Graduate programs are pumping out more and more grad students, but very few actually manage to climb to the top of the ivory tower. That has bred a highly competitive culture in which “sexy” results are rewarded more readily than meticulous research.
I think part of the problem is funding and job security. Federal funding for science has been drying up in recent years. Pay for young scientists is just atrocious. The baseline salary for a recently minted PhD, as set by the National Institutes of Health, is a measly $42,000 — and that’s for working 80-100 hours a week. Post-docs and grad students are basically cheap, high-quality labor. So the way to work your way up is to come up with interesting data because that can translate into grants, a raise, and perhaps down the line a coveted tenure-track professorship.
Show me the money.
And with that comes the promise of even more money. Many researchers have funding from industry. They get paid to give talks or to do consulting, which can lead to conflicts of interest. When publishing studies, corporate-funded researchers are supposed to disclose this, especially if the way they interpret the data could be biased by company interests, but that doesn’t always happen. Again, something to keep in mind.
We need better ways to make science and its shortcomings more transparent. But this month’s news isn’t all bad: thanks to those Berkeley students and to Bohannon, we found out that things went awry, and how. The next step is doing something about it, because these things won’t fix themselves. Scientists should work together to devise better ways to screen research, and the public should be more critical. If something sounds too good to be true, wait for a second study before filling your larder with chocolate.
Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.