Does “peer-reviewed” mean “true”?
Peer review isn’t foolproof. Here’s why:
For some people, PubMed is a useful source of data.
For others, it is a weapon.
Oh, so you don’t like my favorite diet? PEW PEW PEW! I’ll throw a bunch of study links at you.
Most link slingers don’t even read the studies they link to. They glance at abstracts, pick those that support their point, and discard those that don’t.
Some will go one step further and have a quick look at the conclusion of the paper, but they’ll skip most of the methods section. All that complicated stuff was already vetted by experts during peer review, anyway, so why bother?
🕵️ Peer review isn’t foolproof
I’ve only peer-reviewed a few journal submissions myself, but that (and discussions with colleagues) taught me that the process is quite different from what you’d think.
First, note that reviewers are unpaid. Some will pay close attention anyway. Others … won’t. But even dutiful reviewers will have limited time to devote to the task. They have their own papers to write, after all, and classes they’re actually paid to teach.
So while some papers do get criticized and perfected, others only get a quick peek and a nod.
😈 From tricky papers to dirty tricks
To be fair, reading papers can be tricky — some are complex, some badly written, and a few are even designed to trick the reader. Some researchers will alter photos and fudge data. Others, more cautious, will use real data … but only the data that suit them (you can read about data dredging here).
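Data dredging is easy to see with a toy simulation (made-up data, plain Python): if you test enough outcomes on groups drawn from the *same* distribution, some will come out “statistically significant” by luck alone. The numbers and the simple z-test below are illustrative, not from any real study.

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 20 "outcomes" measured on two groups; the null hypothesis is true for ALL of them
n_outcomes, n_per_group = 20, 30
false_positives = 0
for _ in range(n_outcomes):
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treated = [random.gauss(0, 1) for _ in range(n_per_group)]  # same distribution!
    if two_sample_p(control, treated) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_outcomes} outcomes look 'significant' by chance alone")
```

Run it a few times with different seeds and, on average, about 1 in 20 null outcomes sneaks under p < 0.05. A dredger simply reports the lucky ones and stays quiet about the rest.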
Why would anyone do that?
Conflict of interest is one obvious reason. For example, compared to studies funded by nonprofits, studies funded by soda manufacturers are much more likely to find that sugary or diet drinks cause no harm. This doesn’t mean you should automatically dismiss industry-funded studies, but they do deserve extra scrutiny.
Another reason researchers can be tempted to tweak the data — or to at least cast the data in a certain light — is “publish or perish”. Studies that report a big effect, whether beneficial or harmful, are more likely to get published and cited (and thus to prop up one’s career) than null studies.
📚 One study is just one study
For media outlets, a new study showing a big effect isn’t just an exciting study; it’s often the only study worth mentioning. So they’ll tell you about this cool new paper that shows that raspberry ketone is a miracle fat burner — and they’ll ignore any past evidence that it isn’t.
Which means that reading the full text of just one study (something that link slingers don’t even bother to do) isn’t enough. To get the full picture, you need to know about related studies, and you need to be able to compare the methods of all these studies.
Notably, the methods section of a study details the characteristics of its participants. If you’re a healthy young woman, let’s say, the results of a study of older men with diabetes may not apply to you.
🧬 There is only one you
Lastly, remember that we’re all different. A paper may conclude that a given diet led to an average weight loss of 20 pounds, but if you dive deeper, you’ll find that some of the participants actually gained weight on this diet.
Real life trumps all. What works for you may not work for your best friend, or even for your sibling, and vice versa. Likewise, what doesn’t hurt you may hurt other people, and vice versa.
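You can see how an average hides individuals with a few made-up numbers (these are hypothetical weight changes, not data from any actual trial):

```python
# Hypothetical weight changes (lb) for ten trial participants; negative = lost weight
changes = [-45, -38, -33, -30, -27, -22, -15, -8, +6, +12]

mean_change = sum(changes) / len(changes)
gainers = [c for c in changes if c > 0]

print(f"average change: {mean_change:.0f} lb")            # the headline number
print(f"participants who gained weight: {len(gainers)}")  # the hidden detail
```

The headline reads “average loss of 20 pounds”, yet two of the ten participants gained weight. The average is true; it just isn’t the whole truth.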
Let’s say you have a friend who feels crappy whenever he drinks lemonade. It doesn’t make any sense, and there haven’t been any randomized trials, so … it must be psychosomatic.
No. It may be psychosomatic. And it may not. Absence of evidence isn’t evidence of absence. The absence of studies showing that lemonade makes some people feel crappy doesn’t mean that lemonade doesn’t have this effect — because who the heck is going to fund a large-scale study of lemonade’s side effects?
📝 Thank you for staying with me
… all the way to the end. Usually we share interesting studies, but once in a while, it’s nice to ponder the broader picture. If you liked this article (or want more like it), let us know!