
Issue #54 (April 2019)

From the Editor

Volume 1

While going over the articles for this volume’s NERD, a theme jumped out at me regarding the size of the effects seen, and whether they actually matter.

Those of you with some basic statistical chops are probably familiar with the difference between statistical significance and clinical significance. While statistical significance tests have been taking a (perhaps well-deserved) walloping in the statistical literature recently, they're still used in the vast majority of the nutrition literature. Putting the problems with significance testing and its abuse to one side, there's a clear difference between statistical and clinical significance.

Statistical significance is usually treated like this: if a result is statistically significant, it's deemed a "real" effect that isn't due to chance (this interpretation of p-values and statistical significance is very problematic, but, again, we're not opening that can of worms here!). But an effect can be practically meaningless even if it is real.

For instance, if a magical supplement somehow dropped the systolic blood pressure of absolutely everyone who took it by 0.1 mmHg, that probably wouldn't make any meaningful dent in the cardiovascular complications of hypertension. The effect is real, and with a big enough sample size it would be statistically significant, but it still isn't clinically significant, since it wouldn't meaningfully change clinical outcomes like heart attacks or strokes.

The issue of clinical significance is relevant for two articles in this volume. The first involves a meta-analysis of cinnamon's effect on weight, which found a reduction of around 1 kilogram (just over 2 pounds) on average. While that isn't huge, it's not terrible either, until one considers the population, which consisted of people with overweight or obesity alongside insulin resistance. The results were also short term; if the weight loss isn't maintained over the longer term, these changes are less likely to be clinically significant. Finally, this change is much smaller than the potential impact of broader dietary changes, which can make a larger dent in body weight.

A similar pattern of small, short-term changes can be seen in another study we cover in this volume, which concerns the possible adverse effects of energy drinks in a relatively young population. As you'll see in that review, small, possibly negative changes were found, but one of those changes, concerning insulin resistance, is questionable, and the others were relatively small and measured over a short time scale. While the study may give some cause for further research on energy drinks' safety, on its own it's not enough to raise many alarm bells in my view.

When looking at the effects of a nutritional intervention, it pays to keep the clinical significance of the effect size in mind.

Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest


Volume 2

The concept of odds comes up a couple of times in this volume of the NERD. We haven't talked much in the past about what exactly odds are, so it may be useful to give readers interested in brushing up on their basic probability theory, or in reducing their sleep latency, a quick rundown.

Odds are directly related to probabilities. The probability that something occurs can be estimated from how often the event occurs in a given sample. For instance, if I sample five NERD readers and find that one of them dozed off while reading about probability and statistics, I can estimate the probability of dozing off as ⅕, or 20%. The odds are just the probability of an event occurring divided by the probability that it doesn't occur. So, in our example concerning soporific statistics, the odds of falling asleep are 1:4, since the probability of dozing is 20% and the probability of staying awake is 80%. Another way to think about it is that, in this group of five NERD readers, one falls asleep while the other four stay awake. Hence, odds of 1:4, or ¼ = 0.25. Note that the odds (0.25) don't have the same value as the probability (0.20), which becomes a problem if you treat ratios of odds and ratios of probabilities as interchangeable.
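If it helps to see that conversion spelled out as code, here's a minimal sketch in Python. It simply restates the arithmetic of the napping example above; nothing in it comes from the studies we cover.

```python
from fractions import Fraction

def probability_to_odds(p):
    """Odds of an event = probability it happens / probability it doesn't."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Convert odds back into a probability."""
    return odds / (1 + odds)

p_doze = Fraction(1, 5)                  # 1 of 5 readers dozes off: probability 1/5, or 20%
odds_doze = probability_to_odds(p_doze)  # Fraction(1, 4), i.e., odds of 1:4, or 0.25

print(odds_doze, odds_to_probability(odds_doze))  # prints: 1/4 1/5
```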

Odds ratios are just the ratio of two odds. In this volume of the NERD, we review research examining arginine supplementation’s influence on erectile dysfunction (ED), and one of the ways its efficacy is reported is as odds ratios; specifically, the odds of ED improving for men who take arginine versus the odds of improvement if they take placebo. The researchers get some pretty big-sounding estimates of arginine’s effectiveness in terms of the odds ratio — for instance, the main outcome for all studies involving arginine yielded an odds ratio roughly equal to six. However, it wouldn’t be correct to say that the chance of ED improvement is six times higher for men who take arginine, since “chance” usually means the probability, not the odds.
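To see why a big odds ratio doesn't translate directly into a "six times the chance" claim, here's a quick sketch using made-up improvement rates. These numbers are not from the arginine research reviewed here; they're chosen purely to produce an odds ratio of six.

```python
from fractions import Fraction

def odds(p):
    """Odds of an event with probability p."""
    return p / (1 - p)

# Hypothetical improvement rates, for illustration only
p_arginine = Fraction(60, 100)  # 60% of the supplement group improves
p_placebo = Fraction(20, 100)   # 20% of the placebo group improves

odds_ratio = odds(p_arginine) / odds(p_placebo)  # (3/2) / (1/4) = 6
risk_ratio = p_arginine / p_placebo              # 3

print(odds_ratio, risk_ratio)  # prints: 6 3
```

Even with a sixfold odds ratio, the chance of improvement in this made-up scenario is only three times higher, because the outcome is common in both groups.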

That said, odds ratios are roughly equal to ratios of probabilities (i.e., risk ratios) when the outcome is fairly rare in the population; one rule of thumb[1] is a frequency below 10%. But since ED is more prevalent than that, it's not accurate to think that arginine supplementation boosts the chance of ED improvement sixfold.
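The rule of thumb is easy to check for yourself. The sketch below compares the two measures for a rare outcome and a common one, again using hypothetical rates chosen purely for illustration.

```python
def odds(p):
    """Odds of an event with probability p."""
    return p / (1 - p)

def compare(p_treated, p_control):
    """Print the risk ratio and odds ratio for a pair of hypothetical event probabilities."""
    risk_ratio = p_treated / p_control
    odds_ratio = odds(p_treated) / odds(p_control)
    print(f"risks {p_treated:.0%} vs {p_control:.0%}: "
          f"risk ratio {risk_ratio:.2f}, odds ratio {odds_ratio:.2f}")

compare(0.02, 0.01)  # rare outcome: odds ratio ~2.02, close to the risk ratio of 2.00
compare(0.40, 0.20)  # common outcome: odds ratio ~2.67, noticeably above the risk ratio of 2.00
```

When the outcome is rare, the "probability it doesn't happen" part of each odds is close to 1, so the odds barely differ from the probabilities and the two ratios nearly coincide; when the outcome is common, they drift apart.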

Working with odds and odds ratios has a few benefits. Sometimes they're just easier numbers to crunch. Other times, they're the only numbers available to you because you can't estimate the base rate in the population. A prime example would be retrospective case-control studies, where you look at a group of people with a certain outcome (the cases) and a group of people without it (the controls), and then look back in time for risk factors. For example, this study[2] found psoriasis to be a risk factor for ED using this method. If you peek at the abstract, you'll notice that the main result is reported as an odds ratio.

While odds ratios can be useful, they’re also kind of… odd. It pays to be careful in how you interpret them if you come across any.

Gregory Lopez, MA, PharmD
Editor-in-chief, Nutrition Examination Research Digest
