Biases in Climate Fingerprinting Methods


From Climate Etc.

by Ross McKitrick

  • Optimal fingerprinting is a statistical method that estimates the effect of greenhouse gases (GHGs) on the climate in the form of a regression slope coefficient.
  • The larger the coefficient associated with GHGs, the bigger the implied effect on the climate system.
  • In 2003 Myles Allen and Peter Stott published an influential paper in Climate Dynamics recommending the use of a method called Total Least Squares (TLS) in optimal fingerprinting regression to correct a potential downward bias associated with Ordinary Least Squares (OLS).
  • The problem is that in most cases TLS replaces the downward bias in OLS with an upward bias that can be as large or larger.
  • Under special conditions TLS will yield unbiased estimates, but a user cannot test whether those conditions hold.
  • Econometricians never use TLS because another method (Instrumental Variables) provides a better solution to the problem.

Introduction

The method of “optimal fingerprinting” works by regressing a vector of climate observations on a set of climate model-generated analogues (called “signals”) which selectively include or exclude GHG forcing. According to the theory behind the methodology, the coefficient associated with the GHG signal indicates the size of the effect of GHGs on the real climate. If the coefficient is greater than zero then the signal is “detected”. The larger the coefficient value, the larger is the implied effect on the real climate.
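In schematic terms (my notation here, not a formula lifted from any particular paper) the regression takes the form

y = b1 x1 + b2 x2 + … + bk xk + e

where y is the vector of climate observations, the x’s are the model-generated signal patterns, the b’s are the scaling coefficients to be estimated and e represents internal climate variability (noise).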

The seminal method of optimal fingerprinting was presented in a 1999 Climate Dynamics paper by Myles Allen and Simon Tett. With some modifications it has been widely used by climate scientists ever since. Last year I published a paper in Climate Dynamics showing that the basis for believing the method yields unbiased and significant findings was flawed. This website provides links to my paper, as well as to the Allen and Tett (1999) paper I critiqued, a non-technical summary of my argument, Myles Allen’s reply and my response, and a comment by Richard Tol.

One of the arguments Allen made in response was that the issue is now moot because the method he co-authored has been replaced by newer ones (emphasis added):

“The original framework of AT99 was superseded by the Total Least Squares approach of Allen and Stott (2003), and that in turn has been largely superseded by the regularised regression or likelihood-maximising approaches, developed entirely independently. To be a little light-hearted, it feels a bit like someone suggesting we should all stop driving because a new issue has been identified with the Model-T Ford.”

Ha ha, Model T Ford; we all drive Teslas now, aka Total Least Squares. But in 20 years of usage did any climate scientist check whether TLS actually solves the problem? A few statisticians have looked at it over the years and expressed significant doubts about TLS. But once it was adopted by climatologists that was that; with few exceptions no one asked any questions.

I have just published a new paper in Climate Dynamics critiquing the use of TLS in fingerprinting applications. TLS was intended to correct a potential downward bias in OLS coefficient estimates which could understate the influence of GHGs on the climate. While there is a legitimate argument that OLS can be biased downward, the problem is that in typical usage TLS is biased upward; in other words, it overstates the influence of GHGs. There is a special case in which TLS gives unbiased results, but a user cannot know whether a data set matches those conditions. Moreover, TLS is specifically unsuitable for testing the null hypothesis in signal detection, and its results ought to be confirmed using OLS.

The Errors-in-Variables Problem and the Weakness of TLS

OLS models assume that the explanatory variables in a regression are accurately measured, so the “errors” separating the dependent variable from the regression line are entirely due to randomness in the dependent variable. If the explanatory variables also contain randomness, for instance due to measurement error, OLS will typically yield biased slope estimates. In a simple model with one explanatory (x) variable and one dependent (y) variable the bias will be downward, which is called “attenuation bias.” David Giles has a nice explanation of the problem here, and you can also look at econometrics texts like Wooldridge or Davidson and MacKinnon.
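In the one-regressor case this is the textbook attenuation formula. If the observed regressor is w = x + u, where u is measurement noise independent of x and of the regression error, then as the sample grows

b_OLS → b × var(x) / (var(x) + var(u))

so the OLS slope is shrunk toward zero by the ratio of the signal variance to the total variance of the observed regressor.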

The measurement problem is referred to as errors-in-variables or EIV. Since climate models yield noisy or uncertain estimates of the true climate “signals” Allen and Stott (2003) suggested the TLS method as a remedy. This is not how econometrics deals with the issue. In every econometrics textbook of which I am aware, the recommended treatment for EIV is Instrumental Variables estimation, which can be shown to yield unbiased and consistent coefficient estimates. I have never seen TLS covered in any econometrics textbook, ever. Nor have I ever seen it used in economics, or anywhere else outside of climatology except in the small literature looking at the properties of TLS estimators, primarily a 1987 book by Wayne Fuller, a 1981 article in the Annals of Statistics by Leon Gleser and a 1996 article in The American Statistician by RJ Carroll and David Ruppert.

Both Fuller and Gleser discuss the difficulty of proving that TLS (or orthogonal regression, as it is more commonly called) yields unbiased and consistent estimates. The problem, as explained by Carroll and Ruppert, is that the method requires estimating more parameters than there are “sufficient statistics” in the data: in other words, more parameters than the data can identify. Implementation of TLS therefore requires arbitrarily choosing the value of one of the parameters. Both y and x have error terms with variances needing to be estimated, and the assumption in practice is that they are equal, so only one needs to be estimated. If they happen to be equal, Gleser shows that the TLS estimate is consistent (meaning any bias goes to zero as the sample size goes to infinity). If not, consistency cannot be guaranteed. In the signal detection application this means that unless model-generated signals contain random errors with exactly the same variance as the random errors in the observed climate (or unless they can be rescaled to make them equal), TLS cannot be shown to yield unbiased slope coefficients.
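The dependence on that assumption is easy to see in the single-regressor case. TLS with an assumed error-variance ratio λ = var(noise in y) / var(noise in x) is what statisticians call Deming regression, with slope estimate

b_TLS = [ (s_yy − λ s_xx) + sqrt( (s_yy − λ s_xx)² + 4 λ s_xy² ) ] / (2 s_xy)

where s_xx, s_yy and s_xy are the sample variances and covariance of the observed data. The estimate depends directly on λ, which the data cannot identify; setting λ = 1, as fingerprinting applications in effect do, is an assumption rather than something that can be estimated or tested.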

Carroll and Ruppert also point out that TLS depends on the assumption that the regression model itself is correctly specified; in other words, the regression model includes everything that explains variations in the dependent variable. OLS assumes this as well, but it is more robust to model errors. If the model omits one or more variables that are uncorrelated with the included variables, the OLS coefficients will not be biased; but if any of the omitted variables are correlated with an included variable, OLS will be biased up or down depending on the sign of the correlation. With TLS, bias arises either way, whether the omitted variable is correlated with the included variables or not, and the bias is always upward. Unless you happen to have a regression model that fully explains the dependent variable, so that in the absence of random noise every observation would lie exactly on the regression line, the default assumption should be that TLS overestimates the parameter values.
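For OLS the omitted-variable result can be stated precisely. Setting measurement error aside for the moment, if the true model is y = b x + g q + v but q is left out of the regression, then in large samples

b_OLS → b + g × cov(x, q) / var(x)

so the bias vanishes when x and q are uncorrelated and otherwise takes the sign of g times the correlation. TLS, as just noted, enjoys no comparable protection: its bias is upward whether or not the omitted variable is correlated with the included ones.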

Thus TLS can, in principle, yield unbiased signal detection coefficients, but only if the climate model that generates the signals includes everything that explains the observed climate, and adds random noise to the signals with precisely the same variance as the randomness in the observed climate. Of course, if those claims were true we wouldn’t need to do signal detection regressions in the first place. If we wanted to know how GHGs influence the climate, we could just look inside the model. Signal detection regressions are motivated by the fact that climate models are neither perfect nor complete, yet the claim that the results are unbiased presumes that they are both.

Comparing TLS and OLS in practice

To investigate how these issues affect signal detection regressions I ran simulated regressions as follows. Imagine a sample of surface temperature trends (y) at 200 locations stretching from the North Pole to the South Pole. I constructed two uncorrelated explanatory variables X1 and X2. X1 can be thought of as 200 simulated trends (or “signals”) for those locations from a model forced with anthropogenic greenhouse gases, and X2 comes from a model with only natural forcings. Then I added some random noise to the X’s, yielding the random variables W1 and W2. Since every regression model potentially omits at least one relevant explanatory variable, I also generated two additional variables Q1 and Q2. Q1 is just an uncorrelated set of random numbers. Q2 is a set of random numbers partially correlated with X1.

Then I generated 9 versions of the dependent variable y:

Y1 = bX1 + X2/2 + v where b was set equal to 0.0, 0.5 or 1.0 and v is white noise;

YQ1 = bX1 + X2/2 + Q1 + v

and

YQ2 = bX1 + X2/2 + Q2 + v;

and in each of the latter two b was again allowed to be 0.0, 0.5 or 1.0.

I regressed each version of y on W1 and W2:

Y1 = b1 W1 + b2 W2 + e;

YQ1 = b1 W1 + b2 W2 + e

and

YQ2 = b1 W1 + b2 W2 + e.

Each time I estimated the coefficients b1 and b2 using both OLS and TLS. By construction b2 should always equal 0.5 and I didn’t focus on it. Instead I focused on b1, which should equal 0.0, 0.5 or 1.0 depending on the simulation.

The important thing to bear in mind is that in a real application a researcher does not know which of these dependent variables corresponds to the data at hand. Assuming it is Y1 means assuming the regression model is correctly specified and the only problem is that W1 is a noisy version of X1. Assuming it is YQ1 means the regression model omits an uncorrelated explanatory variable, and assuming it is YQ2 means it omits a correlated one. There is no reason to assume we only ever encounter the Y1 case in practice: wouldn’t that be nice.

I ran these 20,000 times each and looked at the distributions of b1 under OLS and TLS. I then added a couple of other wrinkles. First I reduced the variance of the noise term on the X’s, which is analogous to improving the signal-to-noise ratio in X. I also ran a version in which the X’s are slightly negatively correlated, to correspond to the situation in signal detection applications where the anthropogenic and natural signals are negatively correlated.
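For readers who want to experiment, here is a rough Python sketch of this kind of Monte Carlo exercise. It is my own reconstruction from the description above, not the code used in the paper: the noise variances, the strength of the correlation built into Q2 and the SVD-based TLS routine (which imposes equal error variances on every column) are all my assumptions. As written it runs the correlated-omitted-variable (YQ2) case with a true b of 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, b_true = 200, 20000, 0.5

def ols_fit(W, y):
    # Ordinary least squares without an intercept (all variables are mean zero)
    return np.linalg.lstsq(W, y, rcond=None)[0]

def tls_fit(W, y):
    # Total least squares via the SVD of the stacked data matrix [W y];
    # this imposes equal error variances on every column
    Z = np.column_stack([W, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]               # right singular vector for the smallest singular value
    return -v[:-1] / v[-1]

b1_ols, b1_tls = [], []
for _ in range(reps):
    X1 = rng.standard_normal(n)              # anthropogenic "signal"
    X2 = rng.standard_normal(n)              # natural "signal", uncorrelated with X1
    Q2 = 0.5 * X1 + rng.standard_normal(n)   # omitted variable, partially correlated with X1
    W1 = X1 + rng.standard_normal(n)         # noisy versions of the signals used as regressors
    W2 = X2 + rng.standard_normal(n)
    y = b_true * X1 + X2 / 2 + Q2 + rng.standard_normal(n)   # the "YQ2" case
    W = np.column_stack([W1, W2])
    b1_ols.append(ols_fit(W, y)[0])
    b1_tls.append(tls_fit(W, y)[0])

print("true b1:", b_true)
print("mean OLS b1:", round(float(np.mean(b1_ols)), 3))
print("mean TLS b1:", round(float(np.mean(b1_tls)), 3))
```

Comparing the two averages with the true value shows the direction of each estimator’s bias for this configuration; changing the line that generates y reproduces the Y1 and YQ1 cases described above.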

The working assumption in the signal detection field is that the OLS estimates of b1 are biased low but the TLS estimates are unbiased. In the first set of results the distributions of b1 were as follows.

OLS is in blue and TLS is in red. A solid line means the dependent variable was Y1, a dashed line means it was YQ1 and a dotted line means it was YQ2. Looking at the OLS results, attenuation bias is multiplicative so when the true value of b is zero OLS is unbiased. It remains unbiased if the model omits an uncorrelated explanatory variable (dashed line), but if the omitted variable is correlated with X1 (dotted line) the OLS estimate is biased upward. As the true value of b rises the OLS estimate becomes centered below the true value. In the bottom panel the attenuation bias and the omitted variable bias roughly cancel each other out (dotted line), but this is just a fluke, not a general rule.

The TLS results are different. First of all the distribution is much wider because TLS is less efficient. When the true value of b is zero and there are no omitted variables the distribution is centered on zero. As the true value of b goes up, all three versions of the TLS regression yield positively-biased estimates.

Positive bias matters not only because of the risk of false positives but because the coefficient magnitude itself feeds into “carbon budget” calculations. The higher the coefficient value the smaller the “allowable” carbon budget when estimating the point at which the world crosses a certain climate target. These are important calculations with very large global macroeconomic consequences so I find it disconcerting that the problem of positive bias in TLS-based fingerprinting regression results hasn’t been examined before.

For the next batch of estimates I reduced the variance of the noise on the X’s, which I call the high SNRx case.

Now OLS moves towards the true value when there is no correlated omitted variable, which makes sense because as the noise on X goes to zero we approach the case where OLS is known to be unbiased. But TLS does not have the same tendency, indeed the positive bias gets slightly worse in the omitted variables case. This is not a good property of an estimator: as an important noise component shrinks you’d expect it to converge on the true value.

Next I looked at the case when the noise on the X’s and on y is the same magnitude, which is the optimal configuration for TLS because the assumed variance ratio in the computation algorithm corresponds to the actual unobservable variance ratio. If the regression model is correctly specified TLS is unbiased. But if a variable is omitted, even an uncorrelated one, and the true value of b is greater than zero, TLS has an upward bias. OLS has a downward bias except when Q2 is omitted, in which case its net bias is upward.

I examined numerous other configurations of the simulation model and discussed the question of which estimator should be preferred. The differences in results do not reflect methodological choices; they reflect different assumptions about the underlying data generating processes. If the researcher has no idea which one best describes the data set at hand, OLS is more often the preferred option, notwithstanding its known biases. Yes, OLS sometimes yields a coefficient biased towards zero, but it is a known bias. TLS will typically yield a coefficient with a positive bias, and the size of the bias is difficult to predict, in part because of the large variance.

Interestingly, as the true value of b goes to zero, the estimator preference unambiguously goes toward OLS, because the attenuation bias goes to zero and the TLS estimator becomes undefined. This means that if we are testing the null hypothesis that b = 0, in other words that greenhouse forcing does not explain observed climate changes, we shouldn’t rely on TLS, since under the null the preferred estimator would be OLS. Or, put another way, if a significant signal detection result depends on using TLS rather than OLS, it is not a robust result.

Next Steps

I have another study under review in which I explore in some detail the consequences of allowing the X’s to be correlated with each other. I included a preliminary look at this case in the present paper. I found that when the signals are correlated OLS still exhibits attenuation bias even when the true value of b = 0 and TLS exhibits a positive bias, but in this case the TLS bias gets large enough to risk false positives: namely an apparently “significant” value of b even when the true value is zero.

In sum I conclude that in general TLS over-corrects for attenuation bias, thereby yielding signal coefficients that are too large. It also yields extremely unstable estimates with large variances. Researchers should not rely on TLS for signal detection inferences, unless they have done the required testing (as discussed in my paper) that establishes that TLS is appropriate for the context.

Also, climate scientists should consider using Instrumental Variables as a remedy for the EIV problem, since it can be shown to yield unbiased and consistent results.
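For readers unfamiliar with the technique, here is a minimal sketch of how IV handles an errors-in-variables problem. It is again my own illustration, not a recipe from the paper, and it assumes that a second, independently noisy measurement of the same signal is available to serve as the instrument.

```python
import numpy as np

# Minimal illustration of Instrumental Variables as a remedy for errors-in-variables.
# Assumption (mine, for illustration only): a second, independent noisy measurement
# of the same underlying signal is available to serve as the instrument.
rng = np.random.default_rng(1)
n, b_true = 10000, 0.5

x = rng.standard_normal(n)               # true, unobserved signal
w = x + rng.standard_normal(n)           # noisy measurement used as the regressor
z = x + rng.standard_normal(n)           # second, independent noisy measurement: the instrument
y = b_true * x + rng.standard_normal(n)  # observations

b_ols = (w @ y) / (w @ w)   # attenuated toward zero by the noise in w
b_iv = (z @ y) / (z @ w)    # simple IV estimator: consistent for b_true

print(f"OLS: {b_ols:.3f}   IV: {b_iv:.3f}   truth: {b_true}")
```

The instrument works because it is correlated with the true signal but independent of the measurement error in the regressor, which is what restores consistency; the practical challenge in any application is finding a credible instrument.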

Note: when I did the page proofs the main results tables as rendered on screen looked OK, but the print version is messed up. The 1st, 7th and 13th rows should each be shifted down one row from where they are.

Arrgh.

via Watts Up With That?

June 2, 2022