Why Red Meat Negative Health Claims are False


Guest post by S. Stanley Young and Warren Kindzierski


The World Economic Forum, assisted by food researchers in academia, wants you to believe that meat is unhealthy compared to soy, tofu, insect and fungus protein diets. The statistical workings of food research are presented here to show this is not true. Food frequency questionnaires (FFQs) are used in studies of population cohorts. Years later, this information is combined with health outcome observations in statistical analyses. These analyses easily lead to over 20,000 food−disease associations being tested in a typical FFQ study – called multiple testing. Researchers can then search through the results and report only the ones they want, but many of these can be false. Red meat is not unhealthy. It is the deceptive statistical practices and false claims of academic food researchers that are unhealthy.


Kip Hansen’s recent WUWT article was dead-on about the nonsense behind meat being blamed for climate change. The World Economic Forum (WEF), assisted by academics, wants you to believe that meat is unhealthy compared to soy, tofu, insect and fungus protein diets.

The WEF asserts that in the future “…meat will be a special treat, not a staple for the good of the environment and our health.” Academics claim that eating red meat causes mortality, numerous types of cancer (colorectal, breast), Type 2 diabetes, and the list goes on. Does this make sense?

There is a saying… math is hard. Well, as will be shown, statistics appears to be even harder for academic food researchers. A look inside the statistical workings of food research (nutritional epidemiology) is a way to show this and to address doubtful negative health claims about red meat.


Many food claims – beneficial or harmful – are made based on observational studies of large groups of people called cohorts. These cohorts are given a food frequency questionnaire (FFQ). An FFQ asks questions about the types and portion sizes of foods consumed. Years later, researchers ask cohort members about their health conditions.

They then perform statistical analysis of food−disease associations with the data collected. Surprising food−disease associations end up as published research claims. But are these claims true?

Unhealthy red meat claims merit special attention given the WEF’s fixation on it. Kip Hansen’s WUWT article pointed out an evaluation of red meat FFQ studies completed by the Bradley Johnston research group in 2019. It was an international collaboration examining red meat consumption and 30 different health outcomes.

The Johnston research group reviewed published literature, selected 105 FFQ studies, analyzed them and presented their findings in the journal Annals of Internal Medicine. They took a position opposite to the WEF – studies implicating red meat were unreliable. Their findings created a firestorm among food researchers, who are mostly academics. More about that later.


Statistically confirming the same claim in another study is a cornerstone of science. This is called replication. Given the potential importance of the Johnston study, it was recently independently evaluated in a National Association of Scholars report.

In the report, 15 of the 105 FFQ studies were randomly selected and specific details were counted: the number of food categories, the number of health outcomes, and the number of adjustment factors in each of the 15 studies.

Food researchers use various techniques to manipulate FFQ data they collect. Researcher flexibility allows food categories from FFQs to be analyzed and presented in several ways. This includes individual foods, food groups, nutrient indexes or food-group-specific nutrient indexes. It was found that there were from 3 to 51 (median of 15) food categories used in the 15 studies.

The number of health outcomes ranged from just 1 to 32 (median of 3) in the 15 studies. Adjustment factors can modify a food−disease association. Nutrition researchers almost always include these factors in their analysis. These factors ranged from 3 to 17 (median of 9) in the 15 studies.

With these counts, the analysis search space can be estimated. This is the number of possible food−disease associations tested in an FFQ study. It is estimated as the ‘number of food categories’ × ‘number of health outcomes’ × ‘2 raised to the power of the number of adjustment factors’.
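The formula can be checked against the median counts from the 15 randomly selected studies (15 food categories, 3 health outcomes, 9 adjustment factors). A minimal sketch in Python:

```python
# Estimate the analysis search space of a typical FFQ study:
# 'food categories' x 'health outcomes' x 2^(number of adjustment factors).
# Median counts taken from the 15 studies in the NAS report.
food_categories = 15
health_outcomes = 3
adjustment_factors = 9

# Each adjustment factor can be included or excluded from a model,
# giving 2**adjustment_factors possible combinations.
search_space = food_categories * health_outcomes * 2 ** adjustment_factors
print(search_space)  # 23040 possible food-disease associations
```

With the median counts, the search space works out to 23,040 possible associations.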

The typical (median) analysis search space estimated in the 15 studies was over 20,000. A large analysis search space means many possible associations can be tested. Food researchers can then search through their results and select and report only the surprising ones, but, as we now show, many of these are most likely false.

Now the elephant in the room… many of these types of analyses are likely performed by researchers with an inadequate understanding of statistical methods.

A p-value is a number between 0 and 1 calculated from a statistical test. It is the probability of obtaining a result at least as extreme as the one observed if chance alone were at work. The smaller the p-value, the less likely the result is due to chance, and the more surprising it is.

The conventional threshold for statistical significance in most scientific disciplines is a p-value of less than 0.05. Researchers can claim a surprising result if the p-value from a statistical test falls below 0.05.
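As a concrete illustration (a hypothetical coin-flip example, not taken from the studies discussed), the exact one-sided p-value for observing 16 or more heads in 20 flips of a fair coin can be computed directly:

```python
from math import comb

# One-sided p-value: the probability of seeing 16 or more heads
# in 20 flips of a fair coin, if chance alone is at work.
n, observed = 20, 16
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n
print(round(p_value, 4))  # 0.0059, well below the 0.05 threshold
```

Here the p-value is about 0.006, so a researcher would call the result statistically significant and surprising.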

However, a false (chance) finding may occur about 5% of the time when multiple tests are performed on the same set of data using a threshold of 0.05. Five percent of 20,000 possible associations tested may lead to 1,000 false findings mistaken as true results in a study.

The practice of performing many, many tests on a data set is called multiple testing. Say 20,000 associations are tested on a red meat FFQ study data set. Normally only several dozen results from all these tests would eventually be presented in a published study.
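A quick simulation makes the arithmetic concrete. A well-known statistical fact is that under the null hypothesis (no real association), p-values are uniformly distributed between 0 and 1, so 20,000 null tests can be stood in for by 20,000 uniform random draws:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Under the null hypothesis, p-values are uniform on [0, 1].
# Simulate 20,000 tests where no real food-disease association exists.
n_tests = 20_000
p_values = [random.random() for _ in range(n_tests)]

# Count "significant" results at the conventional 0.05 threshold.
false_findings = sum(p < 0.05 for p in p_values)
print(false_findings)  # roughly 1,000 chance findings, all of them false
```

Even with no real associations at all, roughly a thousand tests clear the significance bar by chance alone, which is the point of the 5%-of-20,000 arithmetic above.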

Of course, some of the results would be surprising. For example, a wild claim that red meat may lead to complications associated with erectile dysfunction. Otherwise, the study might not be accepted for publication.

Given these many tests with 1,000 possible false findings and only several dozen results presented, how does one tell whether a result claiming red meat leads to erectile dysfunction complications is true or just a false finding?

Without having access to the original data set to check or confirm a claim, you can’t! The Johnston research group was right to call out red meat FFQ studies as unreliable.

Cue the firestorm. Nutrition thought leaders – from Harvard – badgered the editor of Annals of Internal Medicine to withdraw Johnston’s paper before it even appeared in print. The editor held firm. The food research mob did not prevail.


Too many nutrition thought leaders, mostly academics, take the position that multiple testing is not a problem in food research. They teach that it is not a problem. They are wrong; it is a big problem.

No problem for them, but massive disinformation problems for everyone else when false findings are claimed as true results. John Ioannidis from Stanford and others have called out multiple testing as one of the greatest contributors to false published research claims.

FFQ studies using multiple testing and claiming red meat is unhealthy are largely academic exercises in statistical flimflamming. Red meat is not unhealthy. It is the deceptive statistical practices and false claims of academic food researchers that are unhealthy.

There are over 50,000 food−disease studies published since the FFQ was introduced in the mid-1980s. Essentially all these studies involve multiple testing and are very likely false.

     S. Stanley Young is the CEO of CGStat in Raleigh, North Carolina and is the Director of the National Association of Scholars’ Shifting Sands Project. Warren Kindzierski is a retired college professor in St Albert, Alberta.

via Watts Up With That?

August 11, 2022