Examine publishes rigorous, unbiased analysis of the latest and most important nutrition and supplementation studies each month, available to all Examine Members.

Issue #55 (May 2019)

From the Editor

Volume 1

I attended a workshop a couple of years ago on productivity and self-improvement. One of the key takeaways I got from it was a phrase I can say to myself that acts as a cue for reframing problems: “things are made of parts.” The idea is that problems can seem harder to solve when you treat them as one big monolithic thing. Framing a problem as a bunch of smaller issues that interact with each other can trigger new ways of looking at it and help you understand it better. Having a better idea of what processes make the problem a problem in the first place makes it easier to know where to intervene and to predict what effects an intervention may have. In short, knowing how something works can make it a lot easier to manipulate. Knowing how plumbing works makes it easier to repair. Knowing how a program works makes it easier to interpret and fix errors. And knowing how a problem works makes it easier to solve.

But the phrase “things are made of parts” creates friction with another phrase that’s stuck in my head from pharmacy school: “mechanistic explanations aren’t evidence.” That phrase doesn’t roll off the tongue quite as well as the first one, but it’s very important for me to keep in mind when editing the NERD, and when looking at the scientific literature in general. Discussion sections of papers are littered with just-so stories that weave a narrative around the data; while consistent with the evidence, those narratives don’t actually add anything to it.

But that doesn’t mean narratives are useless. With a little work, they can be used to generate testable hypotheses. That’s what inspired the research we review in this volume’s “The myth of the sugar rush.” In the introduction to the paper we reviewed, the authors present an interesting case for how dietary carbohydrates could regulate mood: by raising the brain’s levels of tryptophan, the amino acid that is the major precursor for serotonin production. Without any knowledge of the parts this system is made of, this seems implausible. But the authors present a story made up of a few moving parts, along with some evidence backing each of those parts. While the evidence isn’t perfect, it’s strong enough to support a hunch worth testing, which the authors did through meta-analysis.

You can read on for more details, but the main takeaway was that dietary carbohydrates may, at best, have a mild effect on some minor aspects of mood at certain times, while most of the evidence was consistent with little to no effect. So, while the authors had a plausible hypothesis worth testing, the evidence suggests that the effect they hypothesized is weak or, in most cases, nonexistent.

This is by no means a failure. Scientific progress is built upon a pile of nixed hypotheses. Also, knowing that carbs seem to have little to no effect on mood is still knowledge that’s actionable. And it reinforces an important point: that mechanistic stories may be useful to generate ideas to test, but those ideas should indeed be tested. Plausible stories about mechanism aren’t the end of the story when it comes to buying a claim.

Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest


Volume 2

In the last volume of the NERD, I talked a bit about why justifying a nutritional claim with mechanistic reasoning can be a bit dangerous. I also talked about a phrase I picked up from a workshop: “things are made of parts”. In this volume of the NERD, these two ideas collide in a meta-analysis of the impact of coenzyme Q10 (CoQ10) on migraine headaches. A plausible mechanism for CoQ10’s effect on migraines inspired research into its efficacy. This body of research was meta-analyzed by a team who found that CoQ10 supplementation lowers the frequency of migraine attacks. The problem is that this conclusion rests on somewhat shaky ground due to the parts of their argument not quite fitting together.

The mechanistic story that raises the possibility that CoQ10 may be useful for migraines comes from one theory about what causes migraine headaches, which involves problems with the mitochondria. Since the mitochondria are major players in cellular energy production, if they malfunction, energy production can decrease. This, in turn, could ultimately increase susceptibility to migraine attacks by affecting both blood flow in the brain and how the brain’s neurons fire. Since CoQ10 plays an integral role in mitochondrial energy production, supplementation may help the mitochondria make more energy, and therefore reduce the risk of migraine attacks.

The systematic review and meta-analysis we cover in this volume puts this theory to the test by looking at the literature concerning CoQ10’s clinical efficacy. If you just read the systematic review’s abstract, you may come away with the idea that CoQ10 supplementation reduces migraine frequency by around two attacks a month on average. But the reliability of this number is open to question for reasons we cover in the review. I’d like to highlight two of them here.

The first reason is that a meta-analysis is only as good as the studies that go into it. The authors used four studies in their meta-analysis of CoQ10’s impact on migraine frequency. Unfortunately, one of these four used neither randomization nor a placebo control. It’s probably not a coincidence that this trial also found the largest reduction in migraine frequency. Including it probably made CoQ10 look more effective than it actually is.

The second part of the authors’ argument that doesn’t quite fit is that they used a fixed effects model to meta-analyze CoQ10’s impact on migraine frequency. They justified this by appealing to the low heterogeneity between trials. But a fixed effects model presumes that the people in all the trials come from the same population, and it’s pretty clear that these trials differed enough from each other that a random effects model was warranted. For example, one of the included trials involved only children, while the others involved only adults. These are pretty different populations, so it doesn’t make much sense to use a fixed effects model. Fixed effects models also give smaller estimates of the error than random effects models, so it’s possible that the authors’ choice of a fixed effects model contributed to the statistically significant effect they found.
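To make the fixed versus random effects distinction concrete, here’s a minimal sketch in Python using entirely made-up effect sizes and variances (none of these numbers come from the actual CoQ10 trials). It pools hypothetical per-study effects first with inverse-variance fixed effects weights, then with DerSimonian-Laird random effects weights, which inflate each study’s variance by the estimated between-study heterogeneity:

```python
import math

def pool(effects, variances):
    """Pool study effects two ways: inverse-variance fixed effects,
    and DerSimonian-Laird random effects."""
    # Fixed effects: weight each study by the inverse of its variance
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    se_fe = math.sqrt(1.0 / sw)

    # DerSimonian-Laird estimate of between-study variance (tau^2)
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)

    # Random effects: every study's variance is inflated by tau^2,
    # so weights shrink and the pooled standard error widens
    w_re = [1.0 / (v + tau2) for v in variances]
    mu_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return mu_fe, se_fe, mu_re, se_re, tau2

# Hypothetical data: change in monthly migraine attacks per study,
# with one study deliberately more extreme (all numbers invented)
effects = [-1.5, -2.0, -0.8, -3.0]
variances = [0.2, 0.3, 0.25, 0.5]
mu_fe, se_fe, mu_re, se_re, tau2 = pool(effects, variances)
print(f"fixed effects:  {mu_fe:.2f} (SE {se_fe:.2f})")
print(f"random effects: {mu_re:.2f} (SE {se_re:.2f}), tau^2 = {tau2:.2f}")
```

Because the random effects model adds the between-study variance to every study’s own variance, its pooled standard error can never be smaller than the fixed effects one. That is exactly why choosing a fixed effects model on heterogeneous trials can tip a borderline result into statistical significance.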

Meta-analyses are very useful for summarizing the state of the literature. But, just like all arguments, they’re made of parts that come together to bolster the conclusion. And if the parts don’t fit well, then the conclusion rests on a shaky foundation.

Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest

See other articles in Issue #55 (May 2019) of Study Deep Dives.