A meta-analysis is usually produced as part of a systematic review. It combines the numerical results of several studies into a single average effect size. Studies that differ too much in design cannot be properly meta-analyzed.


When you ask if a diet or supplement works, you don’t want just a yes-or-no answer. After all, if taking a fat burner for two years makes you lose only one more pound than you would have lost without it, you may not want to invest your time and money in it.

No, if something works, you want to know how well it works. And for that, you need numbers.

Individual studies often give us plenty of numbers to work with, but the numbers from one study may differ from those of another. A meta-analysis combines the numbers from several studies into a single value representing the average effect size of the intervention (or the average correlation, if the meta-analyzed studies are observational).
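To make "combining the numbers" concrete, here's a minimal sketch of the most common pooling method, inverse-variance weighting: each study is weighted by how precise it is, so large, precise studies pull the average toward their result. The numbers are invented for illustration.

```python
# Hypothetical effect sizes from three made-up studies
# (e.g., extra kilograms lost versus control) and their variances.
effects = [1.2, 0.3, 2.0]
variances = [0.04, 0.09, 0.16]  # smaller variance = more precise study

# Inverse-variance weighting: a study's weight is 1 / its variance.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

print(round(pooled, 2))  # → 1.08, the single average effect size
```

Note how the pooled value (about 1.08) sits closest to the first study's result (1.2), because that study has the smallest variance and therefore the largest weight.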

Meta-analyses are almost always run on studies collected during a systematic review, so as to reduce bias. After all, if you just cherry-picked studies with a large effect, meta-analyzing them would unsurprisingly yield a large average effect. Good meta-analyses strive to be unbiased and to detect and report possible bias in the included studies.

A meta-analysis combines the results of multiple studies into an average effect or correlation. It also tries to detect whether some of those studies are biased.

One major choice you have to make when you run a meta-analysis is whether to use a fixed-effect or random-effects model.

A fixed-effect model assumes that all the studies included in the meta-analysis are very similar to one another and can therefore be treated as different samples from the same population. In other words, it assumes that the intervention (e.g., diet, supplement), population (e.g., age, sex, health condition), and other important characteristics are all pretty much the same across the included studies.

This assumption may be justified if the meta-analysis uses a small subset of tightly controlled medical studies, but meta-analyses of nutrition and supplementation studies are usually better off using a random-effects model, which allows for some differences between the studies.

Random-effects models lead to more uncertainty and fewer statistically significant results than fixed-effect models, but fixed-effect models simply won’t provide valid results if the included studies aren’t very similar.
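To show why this choice matters, here's a self-contained Python sketch of both models, using made-up numbers. The random-effects version uses the DerSimonian-Laird estimator for the between-study variance (one common option among several); real meta-analyses would use dedicated software, such as the `metafor` R package, but the core arithmetic looks like this:

```python
import math

def fixed_effect(effects, variances):
    """Fixed-effect pooling: weight each study by 1 / its variance.
    Assumes all studies estimate the same underlying effect."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return pooled, se

def random_effects(effects, variances):
    """Random-effects pooling (DerSimonian-Laird): estimate the
    between-study variance (tau^2) and add it to each study's own
    variance before weighting, allowing true effects to differ."""
    w = [1.0 / v for v in variances]
    pooled_fe, _ = fixed_effect(effects, variances)
    # Q measures how much the studies disagree beyond chance.
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Three hypothetical studies whose results disagree quite a bit.
effects = [1.2, 0.3, 2.0]
variances = [0.04, 0.09, 0.16]

pooled_fe, se_fe = fixed_effect(effects, variances)
pooled_re, se_re, tau2 = random_effects(effects, variances)
```

With these invented numbers, both models land on a similar average (roughly 1.1), but the random-effects standard error is about three times larger than the fixed-effect one, because it honestly accounts for the disagreement between studies. If the studies all reported the same effect, tau² would be zero and the two models would coincide.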

Sometimes, studies included in a systematic review are simply too dissimilar to meta-analyze, even with a random-effects model. For instance, if three studies measured blood pressure but used three completely different interventions (e.g., a drug, a diet, and an exercise program), meta-analyzing them would be meaningless.

Meta-analyses use either a fixed-effect model (when the included studies are very similar) or a random-effects model (when the included studies differ in meaningful ways, as is usually the case with nutrition studies). Studies that are too dissimilar cannot be properly meta-analyzed.