

Issue #37 (November 2017)

From the Editor

Volume 1

In early November, the American Society for Clinical Oncology (ASCO) released a scientific statement on alcohol and cancer. Its explicit goals for releasing the statement were to promote public education about the link between “alcohol abuse and certain types of cancer”, to support evidence-based policy for reducing “excessive use of alcohol”, to educate oncologists on “excessive alcohol use and cancer risks”, and to identify areas for future research. Pretty bland, science-y type stuff, I’d say.

But you wouldn’t know any of that from reading the headlines covering the statement.

Pretty much all the media coverage focused on one thing the ASCO statement mentioned: that even moderate drinking is associated with an increased risk of some cancers. The best evidence suggests that this may indeed be the case. However, it isn’t exactly breaking news, since the claim was based mainly on three meta-analyses: one[1] from 2013 focusing on light drinking, a dose-response study[2] from 2014, and another meta-analysis[3] from 2015. But I guess a headline like “ASCO supports evidence-based policy for reducing excessive alcohol use” wouldn’t generate as many clicks, even though the moderate-drinking link is itself a few years old.

While it wouldn’t be as catchy of a headline, it’d be more accurate. ASCO ends their statement, after building their evidence-based case, by recommending several policy measures to limit alcohol use. Their argument is based in part on the observational meta-analyses mentioned above in combination with mechanistic considerations. However, as you may have noticed in the sections I quoted, their main focus is on “high-risk” and “excessive” alcohol consumption. The main idea isn’t that everyone needs to quit drinking entirely or they’ll all get cancer.

To cut the media some slack, some of the statement can be read as supporting teetotalling, since the evidence does suggest that the risks of mouth, throat, and breast cancers increase somewhat even with light drinking (generally meaning no more than about one drink a day). However, those risks shoot up much, much more with moderate to heavy drinking. Hence the “high-risk” focus of the ASCO statement.

But, if someone who does drink decides to stop in order to lower their cancer risk, the authors present some bad news: it takes about 20 years after quitting for cancer risk to drop to the level seen in never-drinkers. However, they note that this estimate is based on very limited evidence that is probably confounded by the fact that many people who stop drinking entirely may have been heavy drinkers beforehand. This is one of a handful of areas the authors say warrant further research.

So, be careful about getting your science from headlines. They can put undue focus on the most sensationalistic parts.

Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest


Volume 2

This issue of NERD has a theme we didn’t set out to create, but one that emerged anyway: you can learn a lot from studies that don’t find a clear effect.

All three of the reviews in this volume mostly came up empty-handed when searching for statistically significant effects. However, this fact alone isn’t necessarily evidence against the interventions working. But sometimes it can be. So, when does a null study count as evidence that the thing doesn’t work? Let’s take a peek at the three interventions covered in this volume to find out, starting with D-aspartic acid (DAA) and its effect on testosterone.

This study pretty much came up with zilch, and I take that as evidence against DAA’s efficacy. The main reason is that this particular study upped the ante compared to previous ones: it used a larger dose for a longer time than previous research did. These two factors make it more likely that, if there were an effect on testosterone to be seen, this study would have seen it. The fact that it came up with nothing counts against DAA working to boost testosterone, at least in healthy young men, since that was the population studied. Maybe a larger sample size would reveal a small effect, but as mentioned in the review, small changes in testosterone levels don’t make a big difference in healthy people. So, I personally wouldn’t place any money on DAA.

The other two studies in this volume also came up null. But, unlike with the DAA study, I take them to mean that more research is needed before coming to any conclusions, and for different reasons in each case.

In the case of whole grains’ influence on cardiovascular disease (CVD) risk, my reasoning mostly comes down to the consistency of the observational evidence. Study after observational study has found that whole grain intake is correlated with a host of heart health markers. As you’re probably aware, observational evidence can’t prove causality. However, when observations in different populations all look similar, it starts to smell a bit like causality. If there weren’t a causal link, I’d expect more variety in the observational outcomes, since the confounding factors would change from study to study. This, in turn, would lead to variability in the outcomes: some studies would find an effect, some wouldn’t. But we don’t see much of this in the observational evidence. So, in this case, I think that with longer randomized controlled trials (RCTs), we’ll see stronger effects on surrogate markers, and hopefully on hard endpoints.

For vitamin D’s influence on athletic performance, there’s simply a paucity of evidence. There could be no effect, or an effect — we need to wait for more evidence to come in.

The take-home message here is that a lack of a statistically significant effect doesn’t always imply no effect. But sometimes it can. The devil’s in the details.

Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest
