From the Editor
I’d like to bring two things to your attention for this issue: one housekeeping item, and one that speaks to a question people with a cursory knowledge of nutrition research often have: why do some studies’ conclusions seem to flip-flop?
In housekeeping news: the NERD has made its transition into a web-based format, which has the advantage of easier accessibility, searching, and tagging. We’re hard at work getting the entire NERD back catalog to the web portal, too.
We don’t want to leave those of you who prefer PDFs out in the cold though, so we put together a PDF version of each issue that you can download and read at your convenience as well. That way, you’ll hopefully get the best of both worlds.
The PDF will be stitched together from our usual bi-monthly releases. So, you’ll get two volumes per issue as normal, released on the web first. Then, once those are up, we’ll assemble the PDF issue from them. Since it doesn’t make much sense for me to “introduce” an issue that isn’t bundled yet, I’ll be cutting my commentary down from once per web-released volume to once per issue.
Now, on to some commentary on what we’re all here for: nutrition science… and one reason why it sometimes looks like it conflicts itself.
One article in this issue speaks to this indirectly: the review covering an updated meta-analysis on marine-derived omega-3 fatty acids’ effect on cardiovascular disease (CVD) risk. This is one topic where the science seems to have flip-flopped over the years. You don’t need to look further than the pages of the NERD to witness this reversal for yourself. Back in NERD #42, we covered a meta-analysis looking at omega-3s’ impact on CVD risk. It concluded that marine-derived omega-3s had no clear effect.
However, since that meta-analysis was released, three big new trials looking at the issue have come out. The meta-analysis we cover in this issue builds on the one we reviewed back in issue #42 by adding these new trials to the mix. And (spoiler alert) now it looks like there’s an effect again.
On the surface, this can look as though the two studies are contradicting each other. Omega-3s don’t work! They work! What’s going on?
These two meta-analyses are a great example of one of the many reasons you can see headlines flip-flop, especially when meta-analyses are involved. In fact, this is exactly the kind of behavior you should expect when something works, but its effect size is small.
As more trials and participants are added to a meta-analysis, its uncertainty, as represented by the 95% confidence interval, shrinks. The easiest way to think of a 95% confidence interval is as the range of effect sizes the study is compatible with. Since a real but small effect is close to 0, it takes a big sample size to shrink those error bars enough that they no longer overlap 0, which ultimately yields a statistically significant effect. Since a lot of people focus on statistical significance (not a great idea IMHO) rather than on the confidence interval itself, this gets translated to: “It doesn’t work! But now it does!”
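To make this concrete, here’s a minimal sketch (with made-up numbers, not data from either meta-analysis) of how the very same small observed effect can look “non-significant” at one sample size and “significant” at a larger one, using a normal-approximation 95% confidence interval:

```python
import math

def ci95(effect, sd, n):
    """95% confidence interval for a mean effect (normal approximation)."""
    se = sd / math.sqrt(n)  # standard error shrinks as n grows
    return (effect - 1.96 * se, effect + 1.96 * se)

# Same hypothetical observed effect (0.05), same spread, two sample sizes.
small_trial = ci95(0.05, 1.0, 500)    # early meta-analysis: fewer participants
big_pooled  = ci95(0.05, 1.0, 5000)   # updated meta-analysis: new trials added

# small_trial's interval crosses 0  -> reported as "no clear effect"
# big_pooled's interval excludes 0  -> reported as "significant effect"
```

Nothing about the effect changed between the two lines; only the precision did, which is the flip-flop described above.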
My take on the study we review in this volume is that this is exactly what’s happening here: on average, marine oils can impact cardiovascular disease risk, but only a little. So, it takes a very large sample size to see this small effect. And we may finally have it.
There’s a lot of nuance I’m skipping over here, such as whether higher doses in higher-risk people could have a bigger effect, as suggested by the REDUCE-IT trial. There’s also the problem of the study using an inappropriate fixed-effect model for its meta-analyses, which probably underestimates the uncertainty.
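For readers curious why a fixed-effect model understates uncertainty: it assumes every trial estimates one identical true effect, so it ignores variation between studies. A random-effects model adds an estimate of that between-study variance (tau-squared) back in, widening the pooled confidence interval. Here’s a sketch using invented trial numbers (not the actual data from the meta-analysis) and the standard inverse-variance and DerSimonian–Laird formulas:

```python
import math

def pool(effects, variances):
    """Pool study effects two ways: inverse-variance fixed-effect,
    and DerSimonian-Laird random-effects."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se_fixed = math.sqrt(1 / sum(w))

    # Between-study variance (tau^2) estimated from Cochran's Q
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / denom)

    # Random-effects weights include tau^2, so no single trial dominates
    w_star = [1 / (v + tau2) for v in variances]
    rand_eff = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_rand = math.sqrt(1 / sum(w_star))
    return (fixed, se_fixed), (rand_eff, se_rand)

# Three hypothetical trials that disagree with each other (made-up numbers)
(fixed, se_f), (rand_eff, se_r) = pool([0.10, -0.05, 0.30], [0.01, 0.01, 0.01])
# When trials are heterogeneous like this, se_r > se_f: the fixed-effect
# confidence interval is too narrow, i.e., it understates the uncertainty.
```

When the trials genuinely disagree, the random-effects standard error comes out wider, which is exactly the extra uncertainty the fixed-effect approach papers over.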
But for the most part I think the general picture about marine omega-3s’ impact on CVD is finally becoming clearer. For many people, on average, and at the most studied doses (around 1 gram daily), marine oils seem to reduce CVD risk… but not by a lot.
Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest