Debunking Oreskes


From Climate Scepticism

By JOHN RIDGWAY

Remind me again who the experts are

I doubt that there are many aficionados of this website who have not heard the name ‘Oreskes’. This is because Professor Naomi Oreskes is an academic who specialises in debunking climate change sceptics. Yes, she takes people like you and cruelly exposes your naïve acceptance of the supposed obfuscation, distortions and downright lies issued by Big Oil. You can read all about it in her infamous exposé, Merchants of Doubt, where she documents in forensic detail how Big Oil employed a panoply of propaganda techniques taken straight from the tobacco industry ‘playbook’. And one reason she is deemed qualified to do this is that she has a firm grounding in the history of statistics and how it should be employed.

Except the truth is that she has neither. In fact, such is her profound lack of understanding in those two vital areas that one has to wonder why anyone listens to her at all. And it isn’t as though this critical shortcoming has hitherto been hidden from the public gaze. For many years she has been writing articles that others, who are far better qualified to comment, have been quick to destroy. And yet she is still here, exhibiting the same levels of pseudo-expertise that she claims exist only within the ranks of the climate change sceptical. How does that work?

Today, and purely for your entertainment[1], I wish to take you back to 2015, to provide you with a prime example of her seriously flawed understanding of how statistics works. Furthermore, I will demonstrate to you how, even then, it wasn’t difficult to find people who were able to expose her junk wisdom. So prepare for a masterclass in debunking, not from me but from a certain Nathan Schachtman, Esq., PC, who for over 40 years has specialised in the application of statistics and causal analysis in order to address scientific and medical legal issues.

In typical fashion, Oreskes set out her intention to deprecate the climate change sceptic by giving her article[2] the title, ‘Playing Dumb on Climate Change’. Nevertheless, the statistically trained lawyer, Schachtman, was having none of it and responded with his own article, ‘Playing Dumb on Statistical Significance’. So what is so dumb about the Oreskes article? I’ll let Schachtman explain:

Oreskes wants her readers to believe that those who are resisting her conclusions about climate change are hiding behind an unreasonably high burden of proof, which follows from the conventional standard of significance in significance probability. In presenting her argument, Oreskes consistently misrepresents the meaning of statistical significance and confidence intervals to be about the overall burden of proof for a scientific claim.

To illustrate Oreskes’ intentions, Schachtman provides the following quote from her article:

Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20. But it also means that if there’s more than even a scant 5 percent possibility that an event occurred by chance, scientists will reject the causal claim. It’s like not gambling in Las Vegas even though you had a nearly 95 percent chance of winning.

And to explain why this misrepresents the meaning of statistical significance and confidence limits, he points out the following:

Although the confidence interval is related to the pre-specified Type I error rate, alpha, and so a conventional alpha of 5% does lead to a coefficient of confidence of 95%, Oreskes has misstated the confidence interval to be a burden of proof consisting of a 95% posterior probability. The “relationship” is either true or not; the p-value or confidence interval provides a probability for the sample statistic, or one more extreme, on the assumption that the null hypothesis is correct. The 95% probability of confidence intervals derives from the long-term frequency that 95% of all confidence intervals, based upon samples of the same size, will contain the true parameter of interest.

To add to this, Schachtman points out:

[A]lthough statisticians have debated the meaning of the confidence interval, they have not wandered from its essential use as an estimation of the parameter (based upon the use of an unbiased, consistent sample statistic) and a measure of random error (not systematic error) about the sample statistic.
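If the frequentist reading Schachtman describes sounds abstract, it is easy to demonstrate. The following is a minimal sketch of my own (not Schachtman’s), written in Python: draw thousands of samples from a known distribution, construct the conventional 95% interval from each, and count how often the interval captures the true mean. Any single interval either contains the parameter or it doesn’t; the ‘95%’ describes the procedure’s long-run hit rate.

```python
import math
import random
import statistics

random.seed(0)
TRUE_MEAN, SIGMA, N, TRIALS = 10.0, 2.0, 30, 10_000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(N)
    # conventional ~95% interval: mean +/- 1.96 standard errors
    if mean - 1.96 * se <= TRUE_MEAN <= mean + 1.96 * se:
        covered += 1

print(f"long-run coverage: {covered / TRIALS:.3f}")  # close to 0.95
```

Note what the 0.95 attaches to: the sampling procedure, repeated indefinitely, and not the truth of any hypothesis about where the mean lies.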

All of this might seem a bit convoluted but the essence is this: Oreskes has taken a concept that represents the likelihood of the data (i.e. a measure of random error about the sample statistic) and interpreted it as a posterior probability that the hypothesis is true. It’s a classic transposition of the conditional, i.e. treating P(E|H) as if it were P(H|E). As such, it’s the same gaffe that Professor Fenton pointed out when the IPCC stated in their AR5 Summary for Policymakers that there was at least a 95% degree of certainty that more than half the recent warming is man-made. In fact, what the body of the report had actually said was that the probability of observing the recent warming was only 5% if AGW was not deemed to be contributing over half. That is a very different statement. To turn the latter into the former requires one to take into account the a priori probability that the climate models are a faithful and accurate representation of the warming processes. And that’s a very open question, despite what Oreskes would have you believe.
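To make the transposition concrete, here is a minimal Bayesian sketch (again my own illustration; the numbers are invented and come from neither the IPCC report nor Schachtman). It shows that a 5% probability of the evidence under the null only becomes a roughly 95% posterior for the hypothesis if one first grants a suitably generous prior:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem: how plausible is H once E is observed?"""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# E = the observed warming; H = AGW accounts for more than half of it.
# Assume E is near-certain (0.99) if H holds, and has only a 5% chance
# otherwise -- the 'p < 0.05'-style statement. All numbers illustrative.
for prior in (0.9, 0.5, 0.1):
    print(f"P(H) = {prior:.1f}  ->  P(H|E) = "
          f"{posterior(prior, 0.99, 0.05):.3f}")
# Only with a prior of about 0.5 or more does the posterior reach ~95%.
```

With a sceptical prior of 0.1, the same 5% likelihood under the null yields a posterior of only about 0.69, which is precisely why the a priori credibility of the models cannot simply be waved through.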

However, the catalogue of Oreskean error does not end there, since she dares to venture further into the expert domain of the statistical lawyer, with the following:

But the 95 percent level has no actual basis in nature. It is a convention, a value judgment. The value it reflects is one that says that the worst mistake a scientist can make is to think an effect is real when it is not. This is the familiar “Type 1 error”…The fear of the Type 1 [false positive] error asks us to play dumb; in effect, to start from scratch and act as if we know nothing. That makes sense when we really don’t know what’s going on, as in the early stages of a scientific investigation. It also makes sense in a court of law, where we presume innocence to protect ourselves from government tyranny and overzealous prosecutors — but there are no doubt prosecutors who would argue for a lower standard to protect society from crime.
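Before we get to Schachtman’s response, it is worth seeing what that convention actually amounts to. The toy simulation below (mine, purely illustrative) runs thousands of two-group comparisons in which the null hypothesis is true by construction; a two-sided 5% threshold duly declares an ‘effect’ in roughly one run in twenty. That is the Type 1 error rate, and it says nothing about the posterior probability of any particular claim:

```python
import math
import random
import statistics

random.seed(1)
N, TRIALS, Z_CRIT = 50, 20_000, 1.96   # two-sided alpha = 0.05

false_positives = 0
for _ in range(TRIALS):
    a = [random.gauss(0.0, 1.0) for _ in range(N)]  # both groups drawn
    b = [random.gauss(0.0, 1.0) for _ in range(N)]  # from the same null
    se = math.sqrt(statistics.variance(a) / N + statistics.variance(b) / N)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    if abs(z) > Z_CRIT:                # 'significant' difference declared
        false_positives += 1

print(f"false positive rate: {false_positives / TRIALS:.3f}")  # ~0.05
```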

Once again, the lawyer with the statistics background has to remind Oreskes that you cannot equate the 95% coefficient of confidence in statistical theory with the legal standard known as “beyond a reasonable doubt”:

The truth of climate change opinions do not turn on sampling error, but rather on the desire to draw an inference from messy, incomplete, non-random, and inaccurate measurements, fed into models of uncertain validity. Oreskes suggests that significance probability is keeping us from acknowledging a scientific fact, but the climate change data sets are amply large to rule out sampling error if that were a problem. And Oreskes’ suggestion that somehow statistical significance is placing a burden upon the “victim,” is simply assuming what she hopes to prove; namely, that there is a victim (and a perpetrator).

The bottom line is that if you want to talk about Type I errors and burdens of proof, you need a much better grasp of statistical concepts than Oreskes would seem to possess. She goes on to try to rescue the situation with arguments that look Bayesian in nature but even then she falters badly by choosing passive smoking as her example. Schachtman has had a long and successful career handling passive smoking claims in the courts, and he takes no prisoners in dismantling her ‘scientific’ arguments – but I’ll let you read that part for yourselves. What I will do here instead is leave you with Schachtman’s own closing statement:

I will leave substance of the climate change issue to others, but Oreskes’ methodological misidentification of the 95% coefficient of confidence with burden of proof is wrong. Regardless of motive, the error obscures the real debate, which is about data quality. More disturbing is that Oreskes’ error confuses significance and posterior probabilities, and distorts the meaning of burden of proof. To be sure, the article by Oreskes is labeled opinion, and Oreskes is entitled to her opinions about climate change and whatever.  To the extent that her opinions, however, are based upon obvious factual errors about statistical methodology, they are entitled to no weight at all.

Ooof!  That’s gonna hurt in the morning.

Epilogue

On his blog, dated 28th February 2023, Schachtman recounts how our intrepid expert, Professor Oreskes, sought to provide her expert testimony in support of Michael E. Mann’s defamation case against National Review magazine, the Competitive Enterprise Institute (CEI), and Mark Steyn. Despite her standing as a professor of the History of Science, Oreskes saw her hopes of putting her weight fully behind Mann dashed when Judge Alfred S. Irving, Jr. decreed that she had no relevant expertise to offer. Oreskes’ opinions, at issue in the Mann case, were on:

  • the general basis for finding scientific research to be reliable, and
  • that “think-tanks” (including the defendant CEI) “ignore, misrepresent, or reject” principled scientific thought on environmental issues.

On the first issue, Irving ruled that her opinions were redundant, given that she is a historian and not a climate scientist. On the second issue, Irving asked what her expert methodology was for deciding whether principled scientific thought had been ignored, misrepresented or rejected. She described for Irving’s consideration something she referred to as a ‘content analysis’ she had performed when investigating Exxon:

We applied a well-established method in social science, which is broadly accepted as being, you know, a reputable method of analyzing something, content analysis, in order to show that there was this fairly substantial disparity between what the company [Exxon] scientists were saying in their private reports and publishing in peer-reviewed scientific literature which was essentially consistent with what other scientists were saying versus what the company was saying in public in advertisements that were aimed at the general public.

Except that, in the Mann case, Oreskes had to admit that she hadn’t actually used ‘content analysis’. Candidly, she conceded:

If you want me to tell you what my method is, it’s reading and thinking. We read. We read documents. And we think about them.

On the basis that it could be assumed that the jury members had already mastered the concepts of reading and thinking for themselves, Oreskes’ undoubted talents were politely declined by the judge, leaving Mann to do his own bullshitting. At least the courts were spared Oreskes’ botched explanations of ‘statistical significance’ and having to listen to her explaining why the scientists’ fear of Type I errors is making them far too conservative for the public good.

Footnotes:

[1] I say just for your entertainment because nothing I write here about Oreskes will have the slightest impact on her reputation outside of Cliscep.

[2] I’m afraid that the Oreskes article was published in the New York Times and so it is behind a paywall. However, fortunately for the impoverished readers of Cliscep, Schachtman’s takedown includes extensive quotes from the NYT article, so you needn’t worry about redirecting funds from your jealously protected heat pump savings account.