Weaponizing "the science" to gain followers

    The problem with declarative social media statements

    Quiz of the week!


    Have you ever seen posts like these on social media?

    • Follow the science: organic is no different than conventional produce
    • The science is clear: artificial sweeteners are safe

    Ignoring the topics themselves for a moment, I very much dislike the way “science” is used in these kinds of posts, and find it harmful to open discourse.

    There are two main reasons why:

    1. Science is iterative and complex, not a blunt weapon to be wielded

    Science includes two related components:

    • The existing body of knowledge about how the world works
    • Cultivating new and more accurate knowledge by hypothesizing, experimenting, and iterating

    Science isn’t about running across a review paper, skimming the abstract, then raising it up like baby Simba and declaring yourself the Science King. That’s social media. Science is the part before all that.

    Science can be contentious, even among scientist colleagues, and the unknowns are too numerous to wield “the science” as a singular weapon. To further elaborate on this point, let’s quickly touch on the above topics:

    While organic produce isn’t vastly different from conventional produce in terms of nutrient content, many organically farmed plant foods (such as garlic and onions) have somewhat higher levels of certain minerals and antioxidants. But nutrient differences are small potatoes (pun intended) in the grand scheme of things.

    Rather, the burning questions are:

    “At typical human intake levels, which residual synthetic pesticides and fertilizers used with conventional produce might be substantially more harmful than their counterpart naturally-derived pesticides and fertilizers used with organic produce? And which naturally-derived ones might be more dangerous than their synthetic counterparts?”

    The answer to this question informs whether the huge price premium on organic produce is worth it. Unfortunately, I’m not knowledgeable enough to answer with confidence and granularity without devoting much, much more research time.

    All of Examine’s revenue comes from our Examine+ membership. The more funding we have, the more we can do and the more we can dig into questions like “Is the huge premium on organic produce worth it?” If you like what you read every week, we have a 60-day money-back guarantee and a 7-day free trial, so there’s no risk to trying us out.

    Artificial sweeteners aren’t one monolithic category, and different ones act differently in the body. Artificial sweetener risk is also difficult to assess precisely, partly because you can’t easily do long-term randomized trials in humans, so various lines of animal research weigh more heavily. Here are a couple snippets from previous emails I’ve sent:

    This mouse study released after the WHO reports came out showed heritable learning and memory deficits from relatively low aspartame intake. It’s obviously in mice rather than humans, so it shouldn’t be weighed too heavily, but preclinical evidence is still important to incorporate into an overall evidence base on safety.

    Very limited human trial evidence does suggest that aspartame may have mood-depressing or headache-inducing effects in some subsets of people, even at doses below the maximum recommended by the WHO and FDA, but these trials need to be replicated in larger samples, given that the evidence on those outcomes is mixed.

    So while artificial sweeteners are far from categorically unhealthy and can be a useful option (especially for weight loss, at low-to-moderate doses, and without using the same sweetener every single day of your entire adulthood, in order to minimize cumulative exposure), there are open questions about their safety that are being actively researched. Science isn’t static, so social media posts implying that the poster is a “defender of the science” often rub me the wrong way.

    2. Speaking of “the science”, we know much less than we think

    I’m 44 years old, and this is my 20th year of being formally in the nutrition science space.

    When I was 24, I worked with a professor at the Johns Hopkins School of Public Health to help compile findings of the 100 most important randomized public health trials of all time, including a variety of nutrition ones.

    The trial-centric world was compelling to me. Well-conducted trials saved millions of lives. Trials and meta-analyses also seemed to show which diets and foods were healthiest, making my job as a nutrition student easier.

    And then, between the ages of 24 and 44, I became much more skeptical about the trial-centric world. Or at least how trials and reviews were commonly utilized.

    You see, Examine does secondary research:

    We don’t directly do experiments or care for patients, and because of that, I keep my ear keenly to the ground to get some sense of what we’re missing.

    Reviews and meta-analyses are amazing for certain purposes: tiering what generally “works” and “doesn’t work”, getting a lay of the land as a clinician or policy maker, figuring out how findings change over time, assessing potential publication bias, etc., etc.

    But showing reviews at the top of a pyramid, implying evidential superiority, is in my opinion misleading. Important details are easy to miss when individual participant data is collapsed into trial-level averages, and even more granularity is lost when that trial is pooled into a meta-analysis with a bunch of other studies.

    Example #1: A famous low-carb vs. low-fat study

    One issue with over-applying trial findings is that they’re designed to show averages, not individual results.

    A great example is a famous large low-fat vs. low-carb study. Researchers wanted to shed light on which diet works better for weight loss. They found that ultimately, it doesn’t matter: both diets resulted in weight loss in roughly equal amounts. You might have seen similar statements on social media.

    But that’s on average. For some participants in the study, the diet mattered a lot. Check out Examine’s chart detailing the specific amount of weight lost by each participant:

    Pay especially close attention to the left side of the figure, where participants who lost the most weight are charted. The low-fat and low-carb diets really worked for several people!

    The correct conclusion from this paper may not be that low-fat or low-carb are equivalent for weight loss. It might actually be that efficacy is dependent on the individual: what works for one person may fail for another.
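    To make the averages-hide-individuals point concrete, here’s a toy numerical sketch in Python. The per-participant numbers below are invented for illustration — they are not data from the actual trial:

    ```python
    from statistics import mean

    # Hypothetical weight change (kg) per participant after one year on each diet.
    # These numbers are invented for illustration -- NOT data from the real trial.
    low_fat  = [-13.5, -9.2, -7.1, -5.0, -3.8, -2.2, -0.9, 1.4, 2.6, 4.7]
    low_carb = [-14.1, -10.0, -6.8, -4.9, -3.5, -2.0, -1.1, 1.0, 3.1, 5.3]

    # Group averages are identical, so a trial report would call the diets a tie...
    print(round(mean(low_fat), 1), round(mean(low_carb), 1))   # -3.3 -3.3

    # ...but the individual spread tells a different story: some participants
    # lost ~14 kg while others gained ~5 kg on the very same diet.
    print(min(low_fat), max(low_fat))   # -13.5 4.7
    ```

    An "average effect of about -3.3 kg, diets equivalent" summary is true of both groups, yet it says nothing about which diet would work for you.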

    The way to test that hypothesis would be to repeat this study but make it a two-year crossover study, where each participant gets both diets sequentially for a year each, to test for intra-individual variation in results. These trials are extremely expensive though, and adherence plummets as time goes on, so don’t hold your breath waiting for the twice-as-long sequel trial to come out!

    Example #2: Genetic variability

    Large trials and especially meta-analyses can lag behind clinician and patient observations by years or even decades. And while reviews are great for summarizing, they can’t fit in all the important details present in the full body of literature.

    For example, many years ago I noticed that sometimes local anesthetic injections didn’t work very well for me, and doctors sometimes didn’t believe that my pain could still be present. Later on, I was diagnosed with a genetic connective tissue disorder called Ehlers-Danlos Syndrome (EDS), and discovered that up to 88% of EDS patients reported anesthetic resistance.

    How many clinicians haven’t believed patients because their claim wasn't reflected in published research? Of course, the clinician’s job is extremely difficult, because pathophysiology and life science in general are so complex, and patients can also misreport. This is where having caring, open, and balanced bedside manner becomes critical.

    Example #3: Atypical food reactions

    Similarly, atypical food reactions are sometimes overlooked when a food has an air of healthfulness and shows positive results in trials and meta-analyses.

    For example, natto (the Japanese fermented bean that smells quite strongly) is frequently cited as one of the healthiest foods you can eat.

    Which it is, partly due to having uniquely high vitamin K2 levels and historically being a dietary staple of long-lived Japanese people.

    In 2021, a handful of bacterial bloodstream infections were reported in Japanese people with gastrointestinal perforations, stemming from the bacterium that ferments natto. While this was a novel finding, it kind of made sense because the patients had perforations that made them susceptible.

    In 2024, another natto bacteria infection was reported in a Japanese patient without gastrointestinal perforations who was not immunocompromised. Whaaaa?!?

    Natto can also cause gastrointestinal or other issues in people with histamine sensitivity, FODMAP issues, or mast cell activation syndrome.

    My point isn’t to show that natto is dangerous, because by and large it isn't, plus these people were eating natto pretty much every single day. Nor am I trying to scare you away from eating healthy foods based on rare case reports.

    Rather, I’m trying to show you that quick summaries declaring that something is healthy/unhealthy or safe/dangerous can be based on incomplete and highly summarized knowledge.

    For many people, it’s okay to use these summaries to guide decisions about health. But for many others, especially people with certain health conditions (gastrointestinal, rare, or multiple conditions) or atypical reactions to foods and medications, patient and clinician observations can be as important as, or more important than, the published literature.

    Examine still has a relatively small social media following. Part of the reason is that we don’t try to attract followers using declarative statements, and we don’t have any particular viewpoint on controversial issues — we present pros, cons, and nuances.

    We will continue not weaponizing “science” to gain followers. Instead, we’re going to meet our (future) readers where they are: more and more people are getting their health science information from social media and podcasts. Look for more (and more interesting) such content from Examine starting in the next month or two.

    And as always, if you liked or hated this type of long form email, just reply back and let me know! Using the scientific method, I can iterate and test reader feedback again and again with these emails. :)

    Kamal Patel
    Co-founder, Examine

    P.S. Our tech team rewrote the front end of the site, and while everything looks the same, Examine’s pages should load two to three times faster than before. If you see any issues, please let us know!

    Quiz of the week


    Turkesterone is what type of compound?

    Answer: An ecdysteroid

    Turkesterone is an ecdysteroid found in the plant Ajuga turkestanica. It is sometimes taken as a supplement with the goal of improving body composition and athletic performance. Turkesterone is not an androgenic anabolic steroid, and it does not bind to human androgen receptors, which is why ecdysteroids do not cause the negative androgenic side effects often associated with steroid use. While animal studies have shown ecdysteroids to be effective, there is no robust evidence to support their efficacy in humans. Read more about ecdysteroids.