
New confirmation that climate models overstate atmospheric warming

by Ross McKitrick

Two new peer-reviewed papers from independent teams confirm that climate models overstate atmospheric warming and the problem has gotten worse over time, not better.

The papers are Mitchell et al. (2020) “The vertical profile of recent tropical temperature trends: Persistent model biases in the context of internal variability” Environmental Research Letters, and McKitrick and Christy (2020) “Pervasive warming bias in CMIP6 tropospheric layers” Earth and Space Science. John and I didn’t know about the Mitchell team’s work until after their paper came out, and they likewise didn’t know about ours.

Mitchell et al. look at the surface, troposphere and stratosphere over the tropics (20N to 20S). John and I look at the tropical and global lower- and mid- troposphere. Both papers test large samples of the latest generation (“Coupled Model Intercomparison Project version 6” or CMIP6) climate models, i.e. the ones being used for the next IPCC report, and compare model outputs to post-1979 observations. John and I were able to examine 38 models while Mitchell et al. looked at 48 models. The sheer number makes one wonder why so many are needed, if the science is settled. Both papers looked at “hindcasts,” which are reconstructions of recent historical temperatures in response to observed greenhouse gas emissions and other changes (e.g. aerosols and solar forcing). Across the two papers it emerges that the models overshoot historical warming from the near-surface through the upper troposphere, in the tropics and globally.

Mitchell et al. 2020

Mitchell et al. had, in an earlier study, examined whether the problem is that the models amplify surface warming too much as you go up in altitude, or whether they get the vertical amplification right but start with too much surface warming. The short answer is both.

In this figure the box/whiskers are model-predicted warming trends in the tropics (20S to 20N) (horizontal axis) versus altitude (vertical axis). Where the trend magnitudes cross the zero line is roughly where the stratosphere begins. Red: models that internally simulate both ocean and atmosphere. Blue: models that take observed sea surface warming as given and only simulate the air temperature trends. Black lines: observed trends. The blue boxes are still high compared to the observations, especially at the 100–200 hPa level (upper-mid troposphere).

Overall their findings are:

  • “we find considerable warming biases in the CMIP6 modeled trends, and we show that these biases are linked to biases in surface temperature (these models simulate an unrealistically large global warming).”
  • “we note here for the record that from 1998 to 2014, the CMIP5 models warm, on average 4 to 5 times faster than the observations, and in one model the warming is 10 times larger than the observations.”
  • “Throughout the depth of the troposphere, not a single model realization overlaps all the observational estimates. However, there is some overlap between the RICH observations and the lowermost modelled trend, which corresponds to the NorCPM1 model.”
  • “Focusing on the CMIP6 models, we have confirmed the original findings of Mitchell et al. (2013): first, the modeled tropospheric trends are biased warm throughout the troposphere (and notably in the upper troposphere, around 200 hPa) and, second, that these biases can be linked to biases in surface warming. As such, we see no improvement between the CMIP5 and the CMIP6 models.” (Mitchell et al. 2020)

A special prize goes to the Canadian model! “We draw attention to the CanESM5 model: it simulates the greatest warming in the troposphere, roughly 7 times larger than the observed trends.” The Canadian government relies on the CanESM models “to provide science-based quantitative information to inform climate change adaptation and mitigation in Canada and internationally.” I would be very surprised if the modelers at UVic ever put warning labels on their briefings to policy makers. The sticker should read: “WARNING! This model predicts atmospheric warming roughly 7 times larger than observed trends. Use of this model for anything other than entertainment purposes is not recommended.”

Although the above diagram looks encouraging in the stratosphere, Mitchell et al. found the models get that wrong too: they predict too little cooling before 1998 and too much after, and the two errors cancel in a linear trend. The vertical “fingerprint” of GHG forcing in models is warming in the troposphere and cooling in the stratosphere. The models predict that steady stratospheric cooling should have continued after the late 1990s, but observations show no such cooling this century. The authors suggest the problem is that the models do not handle ozone depletion effects correctly.

The above diagram focuses on the 1998-2014 span. Compare the red box/whiskers to the black lines. The red lines are climate model outputs after feeding in observed GHG and other forcings over this interval. The predicted trends don’t match the observed trend profile (black line) – there’s basically no overlap at all. They warm too much in the troposphere and cool too much in the stratosphere. Forcing models to use prescribed sea surface temperatures (blue), which in effect hands the “right” answer to the model for most of the surface area, mitigates the problem in the troposphere but not the stratosphere.

McKitrick and Christy 2020

John Christy and I had earlier compared models to observations in the tropical mid-troposphere, finding evidence of a warming bias in all models. This is one of several papers I’ve done on tropical tropospheric warm biases. The IPCC cites my work (and others’) and accepts the findings. Our new paper shows that, rather than the problem being diminished in the newest models, it is getting worse. The bias is observable in the lower- and mid-troposphere in the tropics but also globally.

We examined the first 38 models in the CMIP6 ensemble. Like Mitchell et al. we used the first archived run from each model. Here are the 1979-2014 warming trend coefficients (vertical axis, degrees per decade) and 95% error bars comparing models (red) to observations (blue). LT=lower troposphere, MT=mid-troposphere. Every model overshoots the observed trend (horizontal dashed blue line) in every sample.

Most of the individual model-versus-observation differences are statistically significant at the 5% level, and the difference between the model mean (thick red) and the observed mean is highly significant, meaning it is not just noise or randomness. The models as a group warm too much throughout the global atmosphere, even over an interval where modelers can observe both the forcings and the temperatures.
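For readers who want to see mechanically what a “trend coefficient with 95% error bars” means here, the sketch below fits a decadal trend to synthetic monthly anomalies. It is illustrative only and is not the paper’s estimation method; in particular, the naive OLS standard error understates the uncertainty of real, autocorrelated temperature series.

```python
# Minimal sketch (not the paper's method): OLS warming trend in K/decade
# with a naive 95% confidence interval, from a synthetic monthly series.
import numpy as np

rng = np.random.default_rng(0)
months = (2014 - 1979 + 1) * 12
t = np.arange(months) / 120.0                       # time in decades
true_trend = 0.15                                   # hypothetical K/decade
anom = true_trend * t + rng.normal(0, 0.2, months)  # synthetic anomalies (K)

X = np.column_stack([np.ones(months), t])
beta, *_ = np.linalg.lstsq(X, anom, rcond=None)
resid = anom - X @ beta
se = np.sqrt(resid.var(ddof=2) * np.linalg.inv(X.T @ X)[1, 1])
print(f"trend = {beta[1]:.3f} +/- {1.96 * se:.3f} K/decade (naive 95% CI)")
```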

We used 1979-2014 (as did Mitchell et al.) because that is the maximum interval for which all models were run with historically-observed forcings and for which all observation systems are available. Our results would be the same if we used 1979-2018, which includes scenario forcings in the final years. (Mitchell et al. report the same thing.)

John and I found that models with higher Equilibrium Climate Sensitivity (>3.4K) warm faster (not surprisingly), but even the low-ECS group (<3.4K) exhibits warming bias. In the low group the mean ECS is 2.7K, the combined LT/MT model warming trend average is 0.21K/decade and the observed counterpart is 0.15K/decade. This figure (green circle added; see below) shows a more detailed comparison.

The horizontal axis shows the model warming trend and the vertical axis shows the corresponding model ECS. The red squares are in the high ECS group and the blue circles are in the low ECS group. Filled shapes are from the LT layer and open shapes are from the MT layer. The crosses indicate the means of the four groups and the lines connect LT (solid) and MT (dashed) layers. The arrows point to the mean observed MT (open arrow, 0.09C/decade) and LT (closed arrow, 0.15 C/decade) trends.

While the models in the blue cluster (low ECS) do a better job, they still have warming rates in excess of observations. If we were to picture a third cluster of models with mean global tropospheric warming rates overlapping observations it would have to be positioned roughly in the area I’ve outlined in green. The associated ECS would be between 1.0 and 2.0K.

Concluding remarks

I get it that modeling the climate is incredibly difficult, and no one faults the scientific community for finding it a tough problem to solve. But we are all living with the consequences of climate modelers stubbornly using generation after generation of models that exhibit too much surface and tropospheric warming, in addition to running grossly exaggerated forcing scenarios (e.g. RCP8.5). Back in 2005 in the first report of the then-new US Climate Change Science Program, Karl et al. pointed to the exaggerated warming in the tropical troposphere as a “potentially serious inconsistency.” But rather than fixing it since then, modelers have made it worse. Mitchell et al. note that in addition to the wrong warming trends themselves, the biases have broader implications because “atmospheric circulation trends depend on latitudinal temperature gradients.” In other words when the models get the tropical troposphere wrong, it drives potential errors in many other features of the model atmosphere. Even if the original problem was confined to excess warming in the tropical mid-troposphere, it has now expanded into a more pervasive warm bias throughout the global troposphere.

If the discrepancies in the troposphere were evenly split across models between excess warming and cooling we could chalk it up to noise and uncertainty. But that is not the case: it’s all excess warming. CMIP5 models warmed too much over the sea surface and too much in the tropical troposphere. Now the CMIP6 models warm too much throughout the global lower- and mid-troposphere. That’s bias, not uncertainty, and until the modeling community finds a way to fix it, the economics and policy making communities are justified in assuming future warming projections are overstated, potentially by a great deal depending on the model.

References:

Karl, T. R., S. J. Hassol, C. D. Miller, and W. L. Murray (2006). Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences. Synthesis and Assessment Product. Climate Change Science Program and the Subcommittee on Global Change Research

McKitrick and Christy (2020) “Pervasive warming bias in CMIP6 tropospheric layers” Earth and Space Science.

Mitchell et al. (2020) “The vertical profile of recent tropical temperature trends: Persistent model biases in the context of internal variability” Environmental Research Letters.

Posted on August 25, 2020 by curryja

Climate Etc.

Yes, there is a cure for COVID-19

The world has a cure for COVID-19. The current COVID-19 mortality in all countries of the Northern Hemisphere, except for Israel, is 4-40 times lower than in the US. Israel stopped using Hydroxychloroquine, probably under the influence of Big Tech misinformation.

Fig. 1. COVID-19 mortality in the US, Canada, Europe, Asia, Africa, China, and India. Deaths per Million, per day, averaged over 7 days.

Our World in Data (1), 2020-08-25

Latin America has higher current COVID-19 mortality than the US. Latin America may be suffering from the same internal political problems as the US, or it may be mirroring US policies. Additionally, it is winter in most South American countries, which are at the end of their flu season now. South Africa has almost the same mortality as the US, but it is winter and flu season there. (There is a tool visualizing flu seasons by country.) We will exclude them from consideration.

The US has more than 1,000 COVID-19 deaths per day. This is 2.93 deaths per million per day. Israel has 2.56. The world average, India, and Russia are at about 0.7, roughly 4x less than the US. Europe has 0.42. Asia has 0.32. Africa has 0.25. Canada has 0.19. There are differences in the age structure of different countries, but they are typically offset by corresponding differences in medical care. Fig. 2 shows select countries.
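As a rough check of the per-capita arithmetic (assuming a US population of roughly 330 million):

```latex
\frac{\sim 1000 \ \text{deaths/day}}{\sim 330 \ \text{million people}} \approx 3.0 \ \text{deaths per million per day}
```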

Fig. 2. COVID-19 mortality in the US and select countries, deaths per Million, per day, averaged over 7 days.

Our World in Data (2), 2020-08-25

Whatever trajectory of COVID-19 spread countries followed, most of the world has arrived at low current mortality. The US and some European countries have suffered more than 400 deaths per million. Sweden and many large cities in the US and Western Europe are thought to be approaching “herd immunity”. But Turkey and Hungary, for example, have accumulated only 71 and 63 deaths per million, respectively. They have a current mortality of 0.24 and 0.06 (!) deaths/M/day. Their death rates peaked in the week of April 19 and have steadily decreased since then. They used HCQ. Country-level statistical analysis confirms that HCQ is the difference.

We cannot passively wait for “herd immunity” acquisition. Even if 1,000+ deaths per day were acceptable, we should expect that COVID-19 will become worse in the fall. The respiratory infections season in the US starts in early October. This year it might start earlier because beach closures and lockdowns prevented people from sunlight exposure and summer physical activities, probably compromising natural immunity that people acquire in summer.

In the words of Yale epidemiologist Harvey Risch: “The Key to Defeating COVID-19 Already Exists. We Need to Start Using It.” Based on dozens of peer-reviewed studies and the experience of thousands of doctors, one cure is:

Treatment: HCQ + AZ + Zn, given early, upon onset of symptoms.

Prophylaxis (high risk population): HCQ + Zn

Anybody who knows a better one is welcome.

via Science Defies Politics

https://ift.tt/3aYwu7K

August 25, 2020 at 03:41PM

New paper suggests historical period estimates of climate sensitivity are not biased low by unusual variability in sea surface temperature patterns

Charles Rotter / August 25, 2020

Reposted from Dr. Judith Curry’s Climate Etc.

Posted on August 24, 2020 by niclewis

By Nic Lewis

An important new paper by Thorsten Mauritsen, Associate Professor at Stockholm University[i] and myself has just been accepted for publication (Lewis and Mauritsen 2020)[ii]. Its abstract reads:

Recently it has been suggested that natural variability in sea surface temperature (SST) patterns over the historical period causes a low bias in estimates of climate sensitivity based on instrumental records, in addition to that suggested by time-variation of the climate feedback parameter in atmospheric general circulation models (GCMs) coupled to dynamic oceans. This excess, unforced, historical “pattern effect” (the effect of evolving surface temperature patterns on climate feedback strength) has been found in simulations performed using GCMs driven by AMIPII SST and sea ice changes (amipPiForcing). Here we show in both amipPiForcing experiments with one GCM and through using Green’s functions derived from another GCM, that whether such an unforced historical pattern effect is found depends on the underlying SST dataset used. When replacing the usual AMIPII SSTs with those from the HadISST1 dataset in amipPiForcing experiments, with sea ice changes unaltered, the first GCM indicates pattern effects that are indistinguishable from the forced pattern effect of the corresponding coupled GCM. Diagnosis of pattern effects using Green’s functions derived from the second GCM supports this result for five out of six non-AMIPII SST reconstruction datasets. Moreover, internal variability in coupled GCMs is rarely sufficient to account for an unforced historical pattern effect of even one-quarter the strength previously reported. The presented evidence indicates that, if unforced pattern effects have been as small over the historical record as our findings suggest, they are unlikely to significantly bias climate sensitivity estimates that are based on long-term instrumental observations and account for forced pattern effects obtained from GCMs.

In this article I explain in more detail what Lewis and Mauritsen (2020)  is all about and what its main findings and conclusions are. For a full picture, please read the paper, which is open-access; it is available in a reformatted version here.

Introduction

The back-story is that concerns have been expressed that accounting for changing temperature patterns increases historical-period energy-budget-based estimates of climate sensitivity.[iii] [iv] [v] This idea is now being used in assessments of climate sensitivity to significantly increase estimates based on historical-period warming (e.g., Sherwood et al. 2020[vi]).

As I explained in a detailed 2018 article,[vii] the key paper identifying and quantifying this effect (Andrews et al. 2018) was based on simulations driven by an observationally-based estimate of the evolution of SST and sea-ice over the historical period (amipPiForcing experiments, over 1871–2010). Those simulations showed, in six models, that their climate feedback strength (λ, here λamip) was on average substantially greater, and hence their effective climate sensitivity (EffCS[viii], here EffCSamip) substantially lower, than when responding to long-term CO2 forcing.

Only a relatively small part of the differences reported in Andrews et al. (2018) can be attributed to effective climate sensitivity to CO2 forcing increasing over time in most atmosphere-ocean global climate models (AOGCMs). Based on typical CMIP5 AOGCM behaviour, that factor, which is allowed for in some historical period energy budget based estimates, such as Lewis and Curry (2018)[ix], would only account for approximately 5% out of the 40% shortfall in effective climate sensitivity that Andrews et al. found,[x] with a further 7% due to their use of mismatching CO2 forcing values.[vii] This implies that the bulk of the average difference they found was instead attributable to unforced (internal) climate variability having affected SST patterns – a (positive) unforced historical pattern effect. Although I put forward arguments, both in Lewis and Curry 2018 and in my 2018 article, against claims of such an effect having occurred, such claims have become widely accepted by climate scientists.

There are other possible explanations for the differences reported in Andrews et al. (2018). One is that the AOGCMs’ simulated long-term SST and sea-ice patterns, and the resulting radiative responses, are unrealistic. Another is that the GCM radiative responses in amipPiForcing experiments are unrealistic. In this article I shall put those questions aside. A further possibility is that the forced response of the climate system to the historical mixture of forcings differs significantly from that to pure CO2 with the same time-profile of evolving effective radiative forcing. Lewis and Curry (2018) put forward evidence against that being the case. Moreover, I showed in a subsequent article that, in the two models for which the radiative response in the standard CMIP5 historical experiment was accurately known, there was no evidence that the response to the mix of anthropogenic forcings differed from that to pure CO2 forcing.[xi]

I wrote in my 2018 article that to justify the existence of a dampening unforced historical pattern effect one would need – even assuming the long-term SST and sea-ice patterns, and the radiative response to them, simulated by AOGCMs to be realistic – to establish:

  1. that correctly-calculated EffCSamip estimates are adequately robust to choice of historical SST and sea-ice observational dataset;
  2. that the differences between climate feedback strength over the historical period in amipPiForcing simulations (λamip) and when AOGCMs generate their own SST and sea-ice patterns in response to radiative forcing (λhist) could feasibly be due to natural internal climate system variability.

It is standard to use the AMIPII SST and sea-ice dataset[xii] to drive GCMs in amipPiForcing experiments. The AMIPII sea-ice dataset is based closely on HadISST1 data throughout the historical period, with only minor modifications. However, the AMIPII SST dataset is only based on HadISST1 data until late 1981, after which it is based on OIv2 SST data[xiii].  The OIv2 post-1980 SST dataset is based largely on the same in situ and satellite data as HadISST1, but with different bias corrections and a different interpolation method for reconstructing SSTs in areas lacking in situ observations.

Andrews et al. (2018) showed, in their Supporting Information, that EffCSamip estimates are not robust to the choice of historical sea-ice observational dataset. They found that climate feedback in amipPiForcing simulations by two UK Meteorological Office GCMs was much weaker – λamip was smaller, and hence EffCSamip was higher[xiv] – when the HadISST2 rather than the AMIPII sea-ice dataset was used, in conjunction with HadISST2 SST data. The difference was mainly due to the change in sea-ice data rather than in SST data, and was large enough to reverse the sign of the unforced historical pattern effect, making it negative.[xv] While sea-ice variation is thus an important contributor to differences in climate feedback, we do not explore sensitivity to the sea-ice dataset in Lewis and Mauritsen (2020), caveating our results in that respect. Instead we use consistent sea-ice data, enabling isolation of the influence of the SST dataset on historical climate feedback.

Thorsten and I show in our paper that λamip estimates are far from robust to choice of historical SST dataset, and that when the widely used HadISST1 dataset is used in place of the AMIPII SST dataset – with unchanged AMIPII sea-ice data – λamip and λhist are indistinguishable: no unforced historical pattern effect is found with the models  we used. We also investigated the unforced historical pattern effect using five other SST datasets, finding a significant estimated effect only in one case.[xvi]

Although the Lewis and Mauritsen (2020) findings are based on simulations by only two GCMs, directly for ECHAM6.3 and indirectly (via “Green’s functions”) for CAM5.3,[xvii] they are consistent with historical warming in the Indo-Pacific warm pool, relative to that over the ice-free ocean as a whole, being much higher in the AMIPII than in the HadISST1 SST dataset.[xviii]

There is evidence, at least in CMIP5 models, that climate feedback strength is strongly positively related to relative warming in the Indo-Pacific warm pool.[xix] On physical grounds, warming in tropical ascent regions, of which the most important is the Indo-Pacific warm pool, relative to elsewhere is expected to produce a strong increase in outgoing radiation at the top of atmosphere.[xx] This effect, as estimated using the CAM5.3 Green’s functions, is illustrated in Figure 1 of our paper, reproduced below.

Figure 1. CAM5.3 Green’s functions: panels (a) and (b) show the change in respectively global mean Ts (K) and in global mean R (Wm−2) per 1K increase in local grid-cell SST, while panel (c) shows the global climate feedback parameter λ (Wm−2K−1) for a change in local grid-cell SST (the ratio of the values plotted in panel (b) to those plotted in panel (a)).

Our results

The Lewis and Mauritsen (2020)  main results are set out in its Tables 1 and 2, reproduced below:

Table 1. Excess Indo-Pacific warm pool SST trends and climate feedback, in ECHAM6.3 amipPiForcing simulations and in MPI-ESM1.1 coupled 1pctCO2 and historical simulations. All values are based on ensemble mean Ts and R data (save for AMIPII and HadISST1 SST trends and standard deviations of individual run feedback estimates). Feedback estimates are from OLS regression, of pentadal mean data for amipPiForcing simulations. Values in brackets are standard errors of the OLS regression feedback estimates, which reflect underlying deviations from a linear relationship as well as internal variability.

Table 2. Excess Indo-Pacific warm pool SST trends and Green’s function derived estimates of climate feedback in CAM5.3 AMIPII-based amipPiForcing simulations, in CESM1-CAM5 coupled 1pctCO2 and historical/RCP8.5 simulations, and for warming in six observational SST datasets, along with feedback estimated from the actual CAM5.3 AMIPII-based amipPiForcing simulation data. Feedback estimates are from OLS regression of  pentadal mean R and Ts values derived from the evolving SST warming patterns in the relevant simulation or observationally-based dataset. Data over 1871-2010, the amipPiForcing experiment period, is used, with data from the historical experiment extended using RCP8.5 experiment data, save in the 1pctCO2 simulation case where years 1–70 data is used.

It has been found in CMIP5 AOGCMs that on average approximately 60% of the change in feedback parameter over time during abrupt4×CO2 simulations comes from the tropics (30°N–30°S),[xxi] due in particular to the west tropical Pacific warming significantly less than the east tropical Pacific, with the tropical pattern becoming more El Niño like as simulations progress. However, the Green’s function feedback estimates for the seven observationally-based SST datasets are strongly correlated with warm pool SST trends relative to those over the tropics and mid-latitudes (r=0.90), but not relative to those over the tropics alone (r=−0.10) (Figure 2).

Figure 2: a reproduction of Figure 4 of Lewis and Mauritsen (2020). The relationship between climate feedback strength, estimated using the CAM5.3 Green’s functions and pentadal regression, and the warming trend in the Indo-Pacific Warm Pool relative to that over either 30°S–30°N (blue circles) or 50°S–50°N (red circles), both over 1871–2010, for SST per seven observational datasets (AMIPII, HadISST1, HadISST2, Had4_krig_v2,  HadSST4_krig_v2, COBE-SST2, ERSSTv5). The red line shows a linear fit between the warming trend in the IPWP relative to that over 50°S–50°N and estimated climate feedback strength (r = 0.90). No equivalent fit is shown for the warming trend in the IPWP relative to that over 30°S–30°N, as the relationship is very weak (r  = −0.10).

The Andrews et al. (2018) AMIPII-based λamip values, which are based on regressing annual ensemble mean simulated values of outgoing radiation R on surface air temperature T, exceed our estimates on the same basis of the corresponding λhist values for all six of the AGCMs involved. That implies a positive unforced historical pattern effect in all cases when using the AMIPII dataset. However, we found that regressing annual mean data, as is standard practice, non-negligibly biased λamip estimates (although not λhist estimates) in some cases. We therefore used instead estimates from regressing pentadal mean data. Using pentadal mean data substantially reduces noise in the regressor variable, which through regression dilution causes a downward bias in the slope coefficient, and also greatly diminishes the effect of responses to interannual fluctuations, thus providing more robust estimation.[xxii] As we show in our paper, the Andrews et al. λamip estimates are up to 9% too strong relative to those based on pentadal mean data, due to responses to interannual fluctuations.
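The effect of the pentadal-averaging step can be seen in a toy calculation (synthetic numbers only, not the simulation data used in the paper), in which R responds to the slow forced signal while the observed annual Ts regressor also contains interannual noise: the noise biases an annual OLS slope low through regression dilution, and averaging to non-overlapping pentads before regressing reduces that noise.

```python
# Illustrative only: regress R on Ts using annual vs pentadal (5-year) means.
import numpy as np

rng = np.random.default_rng(1)
n_years = 140                                   # 1871-2010
ts_signal = np.linspace(0.0, 1.0, n_years)      # slow forced warming (K)
ts = ts_signal + rng.normal(0, 0.15, n_years)   # observed annual Ts = signal + interannual noise
lam_true = 1.8                                  # assumed feedback (W m-2 K-1)
r = lam_true * ts_signal + rng.normal(0, 0.3, n_years)  # R responds to the slow signal

def ols_slope(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

# Annual regression: noise in the regressor biases the slope low (regression dilution).
print("annual lambda:  ", round(ols_slope(ts, r), 2))

# Pentadal means: average non-overlapping 5-year blocks before regressing.
ts5, r5 = ts.reshape(-1, 5).mean(axis=1), r.reshape(-1, 5).mean(axis=1)
print("pentadal lambda:", round(ols_slope(ts5, r5), 2))
```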

We show in our paper that, for the five GCMs featured in Andrews et al. (2018) for which the estimated AMIPII-based unforced historical pattern effect derived from regressing pentadal data was positive, unforced variability in preindustrial control run segments from 43 CMIP5 AOGCMs is in all but 0.06% of cases inadequate to account for that unforced historical pattern effect. Moreover, in only 10% of cases is such variability sufficient to capture unforced pattern effects of one-quarter their strength. Of course, the realism of multidecadal internal variability in AOGCMs could be questioned. However, we concluded that if internal variability in at least some CMIP5 AOGCMs is realistic, it seems highly probable that either the AMIPII SST dataset is flawed or at least part of the historical pattern effect detected when using AMIPII SST data is forced.

Conclusions

Our principal conclusion is:

‘In this study we have found no evidence for a substantial unforced pattern effect over the historical period, arising from internal variability, in the available sea surface temperature datasets, save for when the AMIPII and ERSSTv5 datasets are used. Our results imply that the evidence suggesting existing constraints on EffCS from historical period energy budget considerations are biased low due to unusual internal variability in SST warming patterns is too weak to support such conclusion, and suggest that any such bias is likely to be small and of uncertain sign.’

We also say:

‘The various datasets try, in different ways, to take advantage of the satellite observations from when they become available around 1980. The post-1981 AMIPII dataset interpolation method, however does so in a way that emphasizes small scale features at the expense of the large scale patterns central to the study of pattern effects (Hurrell et al. 2008). Perhaps as a result, AMIPII warms more in the western tropical ocean basins and less in the eastern subsidence regions when compared to HadISST1. Earlier studies have in other contexts pointed to issues with the patterns of tropical warming in AMIPII’

and

‘It is unclear from our results to what extent there is a robust relationship between stronger climate feedback and higher SST trends in the Indo-Pacific warm pool compared with elsewhere, at least where the comparison is limited to the tropics.’

Nicholas Lewis, 24 August 2020

Originally posted here, where a pdf copy is also available


[i]  Meteorology Department. Previously at the Max Planck Institute for Meteorology in Hamburg, where he worked closely with Bjorn Stevens.

[ii]  Lewis, N. and Mauritsen, T., 2020: Negligible unforced historical pattern effect on climate feedback strength found in HadISST-based AMIP simulations. Journal of Climate, 1-52, https://doi.org/10.1175/JCLI-D-19-0941.1

[iii]  Gregory, J. M., and T. Andrews, 2016: Variation in climate sensitivity and feedback parameters during the historical period. Geophys. Res. Lett., 43, 3911–3920, https://doi.org/10.1002/2016GL068406

[iv]  Andrews T. et al., 2018 Accounting for changing temperature patterns increases historical estimates of climate sensitivity. Geophys. Res. Lett. https://doi.org/10.1029/2018GL078887

[v]  Gregory, J.M., Andrews, T., Ceppi, P., Mauritsen, T. and Webb, M.J., 2019. How accurately can the climate sensitivity to CO₂ be estimated from historical climate change? Climate Dynamics. https://doi.org/10.1007/s00382-019-04991-y

[vi]  Sherwood, S., et al. “An assessment of Earth’s climate sensitivity using multiple lines of evidence.” Reviews of Geophysics (2020): e2019RG000678. https://doi.org/10.1029/2019RG000678

[vii]  Warming patterns are unlikely to explain low historical estimates of climate sensitivity, 5 September 2018.

[viii]  Effective climate sensitivity (EffCS) is an estimate of equilibrium climate sensitivity (ECS) derived by estimating climate feedback strength (λ) in a non-equilibrium situation and dividing it into an appropriately-derived estimate of the effective radiative forcing (ERF) from a doubling of preindustrial CO2 concentration. In an AOGCM experiment involving a step increase in CO2 concentration, this equates to linearly projecting warming to the point where the Earth’s radiation balance has been fully restored, and then scaling it appropriately if the increase in CO2 was not a doubling.
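In symbols (a standard energy-budget formulation consistent with this footnote, not text quoted from the paper), writing the planetary imbalance as N = F - λΔT, the feedback and effective-sensitivity estimates are

```latex
\lambda = \frac{F - N}{\Delta T}, \qquad \mathrm{EffCS} = \frac{F_{2\times\mathrm{CO_2}}}{\lambda},
```

where F is the effective radiative forcing, N the top-of-atmosphere imbalance, ΔT the surface temperature change, and F2×CO2 the ERF from a doubling of preindustrial CO2. Setting N = 0 corresponds to linearly projecting warming to the point where the radiation balance is fully restored.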

[ix]  Lewis, N. and J. Curry, 2018: The Impact of Recent Forcing and Ocean Heat Uptake Data on Estimates of Climate Sensitivity. J. Climate, 31, 6051–6071, https://doi.org/10.1175/JCLI-D-17-0667.1; also Masters, T., 2014: Observational estimate of climate sensitivity from changes in the rate of ocean heat uptake and comparison to CMIP5 models. Climate Dyn., 42, 2173 –2181. https://doi.org/10.1007/s00382-013-1770-4

[x] Lewis and Curry (2018) estimated a 10% difference when using long term EffCS estimates based on regression over years 21–150 of CMIP5 abrupt4xCO2 simulations, but that would reduce to 5% if instead basing them on regression over years 1–150, the method used in Andrews et al. (2018).

[xi] Gregory et al 2019: Does climate feedback really vary in AOGCM historical simulations? 31 October 2019

[xii] Hurrell, J.W., Hack, J.J., Shea, D., Caron, J.M. and Rosinski, J., 2008: A new sea surface temperature and sea ice boundary dataset for the Community Atmosphere Model. J. Climate, 21(19), 5145-5153. https://doi.org/10.1175/2008JCLI2292.1

[xiii] Reynolds, R.W., Rayner, N.A., Smith, T.M., Stokes, D.C. and Wang, W., 2002: An improved in situ and satellite SST analysis for climate. Journal of Climate, 15(13), 1609-1625. https://doi.org/10.1175/1520-0442(2002)015<1609:AIISAS>2.0.CO;2

[xiv] Climate feedback strength λ is reciprocally related to EffCS. Note that we use a positive sign convention for climate feedback, but Andrews et al. (2018) use a negative sign convention, so care is needed in interpreting their statements about it.

[xv] Using feedback estimated by regression over years 1-50 of the parent AOGCMs’ abrupt4xCO2 simulations as a proxy for their forced historical feedback over 1871-2010.

[xvi] We computed feedback using surface skin temperature (Ts) rather than near-surface air temperature (T), for the reasons set out in our paper, save when working with data from Andrews et al. (2018), who used T.

[xvii] Through amipPiForcing simulations by ECHAM6.3, and through applying Green’s functions derived from multiple patch-warming simulations by CAM5.3. The Green’s function approach exploits the apparent linear superpositionality in space of GCM responses to warming. Global changes in surface temperature Ts and outgoing radiation R resulting from imposed evolving historical SST patterns can thus easily be emulated by the sums of the global responses to SST changes in individual locations weighted by time-invariant Green’s function values for each location, and associated climate feedback estimates derived. Sea-ice is held constant in the CAM5.3 patch-warming simulations, which reduces changes in the emulated values of both Ts and R.
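A toy sketch of the Green’s-function emulation described in this note (all numbers and array shapes are hypothetical, not taken from CAM5.3): global-mean Ts and R are emulated as sums of local SST anomalies weighted by time-invariant per-cell Green’s function values, and climate feedback is then estimated from the emulated series.

```python
# Toy Green's-function emulator: hypothetical numbers, illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_years = 500, 140

# Per-cell Green's function values: global dTs (K) and dR (W m-2) responses to a
# 1 K SST increase in that cell (in reality derived from patch-warming simulations).
g_ts = rng.normal(0.002, 0.0005, n_cells)
g_r = rng.normal(0.004, 0.002, n_cells)

# An imposed evolving SST anomaly field (years x cells), also made up.
sst_anom = np.outer(np.linspace(0.0, 1.0, n_years), rng.normal(1.0, 0.3, n_cells))

# Linear superposition: emulated global-mean Ts and R time series.
ts_glob = sst_anom @ g_ts
r_glob = sst_anom @ g_r

# Feedback estimate from the emulated series (simple OLS fit).
lam = np.polyfit(ts_glob, r_glob, 1)[0]
print(f"emulated feedback lambda ~ {lam:.2f} W m-2 K-1")
```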

[xviii] We define the Indo-Pacific warm pool as the region 15°S–15°N, 45°E–195°E, and compare its warming trend with that for the ocean from 50°S–50°N as a whole, that area being essentially ice-free all year.

[xix] Dong, Y., Proistosescu, C., Armour, K.C. and Battisti, D.S., 2019: Attributing Historical and Future Evolution of Radiative Feedbacks to Regional Warming Patterns using a Green’s Function Approach: The Pre-eminence of the Western Pacific. Journal of Climate, (2019).

[xx] That is because surface temperature in convective areas controls temperature in the tropical free troposphere, which spatially is fairly uniform, and influences temperature in the extratropics. An increase in free tropospheric temperature relative to surface temperature in descent regions strengthens the boundary layer temperature inversion, which is known to increase low cloud cover and hence reflected solar radiation.

[xxi] Andrews, T., Gregory, J. M., and Webb, M. J., 2015: The dependence of radiative forcing and feedback on evolving patterns of surface temperature change in climate models. J. Climate, 28(4), 1630-1648. https://doi.org/10.1175/JCLI-D-14-00545.1

[xxii] Using the ensemble mean from a number of amipPiForcing simulation runs does not provide an adequate solution, because the noise in the SST data used to force the GCM is the same in all runs.

https://wattsupwiththat.com/

Heads I Win Tails You Lose: The Canadian Pandemic Model

Charles Rotter / August 25, 2020

Guest post by Brian

Introduction

A detailed analysis of the University of Manitoba’s recent model, prepared on behalf of the Canadian government, reveals exaggerated and unsupportable conclusions. These explicitly theoretical projections, which have little evidence to support them, set an unrealistic benchmark for what counts as success for Dr. Tam’s policies. In this case, the models used to predict the effects of Sars-Cov-2 adopt a completely unrealistic and unattainable worst-case scenario. Essentially any result, and every result possible, will be hailed as a resounding success – which is disingenuous. The virus would not come close to producing the chaos projected, even in a society with the loosest of policies. Fortunately, there are real-world comparators, as many countries took their own approaches to fighting the virus.

The basics of the SEIR model are rudimentary – though it is filled with several bells and whistles that create a sense of false precision. This commentary will go through these one at a time. I find it important to note that the modelling is unrealistic, serves to spread unwarranted fear in the Canadian population, and constitutes a breach of the trust placed in the Government of Canada by all Canadian citizens.
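For orientation, the SEIR core itself is just four coupled compartments; everything beyond it is the bells and whistles mentioned above. A generic textbook sketch with made-up parameters (not the University of Manitoba model) looks like this:

```python
# Generic SEIR sketch with illustrative parameters; NOT the Canadian model.
N = 38_000_000                           # population (roughly Canada)
beta, sigma, gamma = 0.4, 1 / 5, 1 / 8   # transmission, 1/incubation, 1/infectious period
S, E, I, R = N - 100.0, 0.0, 100.0, 0.0
dt, days = 0.25, 365

for _ in range(int(days / dt)):
    new_exposed = beta * S * I / N * dt  # S -> E
    new_infectious = sigma * E * dt      # E -> I
    new_recovered = gamma * I * dt       # I -> R
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered

print(f"R0 = {beta / gamma:.1f}, ever infected after one year: {R / N:.0%}")
```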

Comments on the Canadian Health Ministry’s Latest Sars-Cov-2 Projections


Canada’s COVID-19 Modeling

[Figure: Canada’s COVID-19 modelling[i]]

Fatality and Nursing Homes

Over 80% of all Sars-Cov-2 deaths in Canada are from long-term care (“LTC”) and nursing home facilities[ii]. The inhabitants of these facilities are the oldest and weakest among the population. In fact, only wealthy countries, mainly in Europe and North America, have extensive LTC communities. Over 50% of Sars-Cov-2 fatalities in the US and Europe are from these facilities[iii][iv].

The spread in these facilities is nosocomial: not random population spread, but a contagious virus dropped into a closed environment that then rips through residents and staff[v],[vi]. These facilities were all belatedly protected, and we tragically saw the results of this inaction. Random spread need have no relationship to LTC spread if proper policy and funding are in place. It is important to note that the Canadian government modelers openly reference non-random influenza spread from a 2017 paper, but do not account for this in their modeling. This is completely inconsistent.

Having said that, any model that does not account separately for LTC spread and LTC fatality simply fails to illustrate the complete picture of the virus. With the single largest source of risk and fatality not broken out, this Canadian model has no ability to properly project fatalities.

“Conclusions. Our study revealed a highly structured contact and movement patterns within the LTCF. Accounting for this structure—instead of assuming randomness—in decision analytic methods can result in substantially different predictions.” (https://doi.org/10.1177/0272989X17708564)

Infectious Fatality Rate (“IFR”)

IFR is simply defined as one’s risk of dying if infected, and is not to be confused with the Case Fatality Rate (“CFR”), which divides fatalities by confirmed cases. The CFR is an irrelevant statistic unless the testing rate is relatively constant, and it misrepresents the danger of the virus. Incidentally, a corollary is that new case counts do not predict new fatalities, which sounds like a paradox but is statistically true. Unfortunately, the media is obsessed with case counts, but they are the least valuable statistic currently available for describing either the state of spread or the danger of the virus.

IFR varies by age, and this is universal to all countries[vii]. All Canadian Government models[viii] have used an IFR of 1.2% – or 15x the true non-LTC Sars-Cov-2 risk – despite very strong evidence in March that it was 0.1%–0.35%, i.e. near the flu. Even the CDC estimate, when adjusted for asymptomatic infection, puts the IFR at 0.1%–0.35% inclusive of LTC fatalities. Recently Alberta concluded its antibody study. Based on those results alone, the IFR in Alberta at the time was ~0.35%; however, 75% of those fatalities were LTC[ix]. The non-LTC IFR is 0.08% – which is 50% less risky than common influenza[x].
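The non-LTC figure follows roughly from the two quoted numbers, treating non-LTC residents as essentially the whole infected population (the exact value depends on how infections are apportioned between LTC and non-LTC residents):

```latex
\mathrm{IFR}_{\text{non-LTC}} \approx (1 - 0.75) \times 0.35\% \approx 0.09\%,
```

in line with the ~0.08% quoted above.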

For the general population, the Canadian Government models intended to dictate health policy say the virus is 17X more deadly than reality – which is misleading and instills an unwarranted fear in Canadian citizens.

[Table: Canadian Government estimates (per million) of hospitalisations, ICUs, and fatalities vs Alberta serology-based actual percentages (ex-LTC)]

A true predictive model would break out IFR by age and separately break out LTC fatality[xi].  The LTC break out is important – an Ontario government study concluded an LTC resident was 13x more likely to die than the same age non-LTC resident.

Serology studies in Africa and India – places with poor health care relative to Canada – show IFRs of 0.005%–0.06%. These younger populations, with no LTC communities, have virtually no risk of dying despite little access to treatment.

No Canadian government model has made this basic and necessary breakdown, which renders the modelling inaccurate and overstates the severity of the virus.

R0 Assumptions

R0 and R(t) are measures of the rate of viral spread. R0 is the rate of spread assuming no prior exposure to Sars-Cov-2, while R(t), which declines over time, adjusts R0 for those currently infected, recovered, and dead.
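In the simplest homogeneous-mixing formulation (a textbook relation, not necessarily the exact equations used in the Manitoba model), the effective reproduction number falls in proportion to the remaining susceptible fraction:

```latex
R(t) = R_0 \, \frac{S(t)}{N},
```

where S(t) is the number of people still susceptible and N is the total population.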

The newest model uses R0 values of 2.9, 3.3 and 3.7. These come from old studies in hyperdense China. As a side note, the Canadian modeling references only dated Chinese studies and the Imperial College/Neil Ferguson study (a model failure), while excluding newer and more accurate studies. Models are a tool that require accurate inputs to accurately assess risk; inaccurate inputs lead to inaccurate outputs.[xii],[xiii],[xiv]

Canada’s R0, given our lower density (hence fewer transmissible interactions), was about 2.0 nationally early in the spread. There is substantial documentation and evidence supporting this, and it is unclear why the government would allow only a lower bound almost 50% higher than the actual value and an upper bound almost 100% higher.

The misuse of the R0 variable is another main driver: like the failed Imperial College model before it, the new Canadian model does not replicate spread in places like Florida or Sweden. It dramatically overestimates real-life outcomes and should be checked against reality before outcomes that cannot possibly happen under any scenario are presented to the public.

I’ll revisit R0 when discussing heterogeneity below.

Infectious period

Multiple studies show that the maximum infectious period of Sars-Cov-2 is about 8 days (known since early March)[xv]. The average time over which an infected person can infect another is about 4 days, with a maximum of 8. The Canadian government model assumes an average of 10 days – which does not align with observable data. There is no science behind this assumption, but it has the effect of magnifying modelled spread and generating unnecessary fear.

Heterogeneity of Spread, Herd Immunity and the Function of T-cells

The Herd Immunity Threshold (“HIT”) is defined as the point at which spread can only decay, i.e. R(t) < 1 permanently[xvi]. Using basic math, HIT is reached when a fraction 1-1/R0 of the population has been infected. If R0 is 2.0, then the HIT is 50%; if it is 3.3, then ~70% need to be infected. But this isn’t true in the real world.
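The 1-1/R0 expression comes from setting R(t) = 1 in the homogeneous-mixing relation given earlier:

```latex
R_0 \, \frac{S}{N} = 1 \;\Rightarrow\; \frac{S}{N} = \frac{1}{R_0} \;\Rightarrow\; \text{fraction infected} = 1 - \frac{1}{R_0},
```

so R0 = 2.0 gives 50% and R0 = 3.3 gives roughly 70%.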

The main (and inaccurate) assumption is that everyone mixes perfectly – a concept called homogeneity. By analogy, the Canadian model assumes that a bartender at a popular restaurant in downtown Toronto interacts with about the same number of people in a week as a person living alone in a cabin in the Yukon. Variation in interactions is called heterogeneity – uneven mixing. Uneven mixing lowers the HIT. A lot. To assume mixing is equal across all people in Canada is the worst-case scenario mathematically possible.

There are various ways to model heterogeneity, but Dr. Tam’s group explicitly ignores its existence in a government model intended to guide policy[xvii]. They have decided to model only the worst case. Heterogeneity lowers R0 over time as highly interactive individuals spread the virus early and then become blockers – slowing the spread and lowering R0 and R(t). This is one large reason why Sweden[xviii] and other places appear, from their observed spread, to have reached HIT at far, far lower levels than this misguided Canadian model implies.

Heterogeneity is easily evidenced and can be partially quantified by the far higher spread in cities versus rural settings all over the world[xix],[xx]. Not accounting for these concepts – which are easily incorporated – is a breach of the trust of Canadian citizens who rely on knowledgeable health experts to provide accurate information.

Another related factor is T-cell immunity, a growing and popular area of research. The claim that Sars-Cov-2 is “novel” – i.e. that no one has existing defenses – is not without contestation[xxi],[xxii],[xxiii].

  • In February, recovered Sars-Cov-1 patients in Singapore showed 100% immunity to Sars-Cov-2 despite having been infected 17 years ago.
  • We know that common cold coronaviruses are cross-reactive with Sars-Cov-2, initiating a T-cell response and destroying the virus[xxiv].
  • T-cell protection does not create IgG antibodies (what antibody tests measure), but IgG antibodies create long-term T-cell protection in at least 83% of cases, so antibody decay still translates into long-term immunity[xxv],[xxvi].
  • T-cell protected persons can get the virus but almost always fight it off. They show positive on PCR tests but not antibody tests. Studies show on average 1.8x as many people are PCR-positive but antibody-negative, meaning the virus may have spread roughly 1.8x more than antibody tests alone imply. This translates into a lower IFR, meaning the virus is even less deadly relative to the flu.

The new government model does not even bother to address the existence of T-cell immunity despite its widespread acceptance in the medical community – which further compounds the inaccuracy of the model used to derive policy.

Conclusion

These new model outcomes have no basis in reality and should not be used for policy planning. Better and more accurate models do exist, but it is unclear why the Canadian government does not use them. This new model is beyond worst case – it is an impossibility, like the models before it. It is intended only as a counterfactual. Furthermore, it has been paid for by Canadian taxpayers, whose trust depends on receiving accurate information. Although I would prefer it were not true, I believe the model is being used purely as a preplanned counterfactual defense of Dr. Tam and her group’s expensive and mostly ineffective policy actions.

The most likely outcome in Canada, assuming no lockdown, is 10-15% antibody spread, or 18-28% true spread including T-cells, and about 4,000-8,000 non-LTC resident fatalities from Sars-Cov-2 (the government’s April estimate: 300,000). It is unclear that any interventions short of full lockdowns have any material effect on slowing viral spread, and full lockdowns have tremendous costs. In fact, it is very debatable whether lockdowns have any net positive effect on fatalities. The ‘better safe than sorry’ policies undertaken not just by Canada but by other countries are starting to show irreparable damage to citizens: damage to economic livelihoods, increases in mental illness, drug abuse, child abuse, incremental global famine, harm to child development, etc.[xxvii] This is largely due to poor communication of information, lack of education on the subject matter, and a failure to put statistics into real-world context. This only instills fear, which can elicit irrational and sometimes dangerous behavior by citizens. I need not give examples of what fear and irrational behavior can do within a society historically, as there are countless of them[xxviii]. To put it into graphical context, Franklin Templeton put out a survey to gauge fear of death from Sars-Cov-2 among all age groups.

Is this rational thought? Is this how we want people to live their daily lives? Between the ages of 18 and 64, there are a great many other things more likely to cause death than Sars-Cov-2. Not to mention people who are already struggling with mental illness. Many people who struggle with addiction depend heavily on having structure: going to school or a job, having hobbies, meeting with friends, and so on. Video conferencing does very little for those who struggle with addiction. Isolationist policies eliminate the biggest support for ‘normality’ in their daily lives, and thus help destroy the foundation of any form of happiness. What if they also have families? What if the person they depend on for their livelihood is the one who struggles with addiction? There are an estimated 2 million people who subscribe to Alcoholics Anonymous[xxix], and those are only the ones who admit that they have a problem. If even 10% of them completely lose control of their lives because of these ill-conceived policies, that is 200,000 people, minimum, who have their livelihoods destroyed with very little means to recover.

Granted, masks, basic social distancing, and hand washing may all have an effect, but they appear to be less effective than the Canadian government has led us to believe. Most spread can be explained by reasonable heterogeneity models and T-cell immunity.

The single best NPI the Canadian government could implement is to open borders, with no restrictions, to herd-immune countries (Sweden, the US, India, Mexico, France, and Brazil, among others). Canada would import lots of immune “blockers” and almost no live infections. These blockers would serve to reduce R0 and R(t) – a concept easily modeled (see the sketch below). This single action would be an order of magnitude more helpful in slowing spread permanently than masks, further lockdowns or even handwashing. It is permanent and has positive economic and social benefits (all other NPIs are negative to varying degrees).
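As a sketch of how the “blockers” argument enters the same homogeneous-mixing relation (illustrative only; that arrivals from these countries would be overwhelmingly immune is the author’s assumption): adding M immune arrivals enlarges the mixing population without enlarging the susceptible pool, so

```latex
R_{\text{eff}} = R_0 \, \frac{S}{N + M} \;<\; R_0 \, \frac{S}{N}.
```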

We should all implore Dr. Tam and our highly compensated health experts to incorporate widely available empirical evidence so as to provide projections that accurately represent the risk of Sars-Cov-2 to Canadians. It is very probable that the true outcome of such work would demonstrate that the risk from Sars-Cov-2 was not, and is not, severe outside of nursing homes. The work is also likely to show that all the interventions, costs, and fear deployed to slow its inevitable spread were not necessary. Yes, it would be a devastating blow to Dr. Tam’s and our government’s reputation, but the good of Canadians is what matters. The current model has no basis in reality and constitutes a breach of the trust placed in Dr. Tam by the Canadian citizenry.

I will reiterate that I would prefer this were not true, but building such an obviously counterfactual model so that Dr. Tam can later point to outcomes being better than the model and say “see, I saved lives” seems to be the only point of the modelling exercise. It serves nothing but to instill unwarranted fear in the citizenry and to provide a façade of competency in government policy.

It is disappointing that a knowledgeable individual such as Dr. Tam, whose expertise includes infectious disease, would allow this model to be released.


End Notes (References):

[i] https://www.cbc.ca/news/politics/covid19-pandemic-modelling-tam-fall-peak-1.5686250

[ii] https://www.theglobeandmail.com/canada/article-new-data-show-canada-ranks-among-worlds-worst-for-ltc-deaths/

[iii] https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2020.25.22.2000956#html_fulltext

[iv] https://www.wsj.com/articles/coronavirus-deaths-in-u-s-nursing-long-term-care-facilities-top-50-000-11592306919

[v] http://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/report-13-europe-npi-impact/

[vi] https://doi.org/10.1177/0272989X17708564

[vii] https://www.cebm.net/covid-19/global-covid-19-case-fatality-rates/

[viii] https://www.canada.ca/en/public-health/services/reports-publications/canada-communicable-disease-report-ccdr/monthly-issue/2020-46/issue-6-june-4-2020/predictive-modelling-covid-19-canada.html

[ix] https://www.cbc.ca/news/canada/calgary/covid-19-deaths-long-term-care-cihi-1.5626821#:~:text=The%20analysis%20found%20537%20confirmed,per%20cent%20of%20total%20deaths.

[x] https://calgaryherald.com/news/local-news/about-36000-albertans-had-covid-19-by-mid-may-new-serology-testing-suggests

[xi] https://www.medrxiv.org/content/10.1101/2020.05.13.20101253v3

[xii] https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/imperial-college-covid19-npi-modelling-16-03-2020.pdf

[xiii] https://doi.org/10.1503/cmaj.200476

[xiv] https://doi.org/10.1016/S1473-3099(20)30243-7

[xv] https://www.acpjournals.org/doi/10.7326/M20-0504

[xvi] https://www.medrxiv.org/content/10.1101/2020.06.26.20140814v2

[xvii] https://globalnews.ca/news/7249803/coronavirus-vaccine-restrictions-theresa-tam/

[xviii] https://www.biorxiv.org/content/10.1101/2020.06.29.174888v1.full.pdf

[xix] https://www.medrxiv.org/content/10.1101/2020.04.27.20081893v3

[xx] https://www.medrxiv.org/content/10.1101/2020.07.15.20154294v1

[xxi] https://www.nature.com/articles/s41586-020-2550-z

[xxii] https://www.biorxiv.org/content/10.1101/2020.05.26.115832v1

[xxiii] https://science.sciencemag.org/content/early/2020/08/04/science.abd3871

[xxiv] https://www.livescience.com/common-cold-coronaviruses-t-cells-covid-19-immunity.html

[xxv] https://science.sciencemag.org/content/early/2020/08/04/science.abd3871

[xxvi] https://www.nature.com/articles/s41586-020-2598-9

[xxvii] https://www.medrxiv.org/content/10.1101/2020.08.12.20173302v1

[xxviii] https://www.franklintempletonnordic.com/investor/article?contentPath=html/ftthinks/common/cio-views/on-my-mind-they-blinded-us-from-science.html

[xxix] https://www.aa.org/pages/en_US/aa-around-the-world

https://wattsupwiththat.com/