Tag Archives: climate model

Climate Model Bias 3: Solar Input

From Watts Up With That?

By Andy May

In part 2 we discussed the IPCC hypothesis of climate change that assumes humans and our greenhouse gas emissions and land use choices are the climate change “control knob.”[1] This hypothesis underpins their attempts to model Earth’s climate. But the model output fails to match many critical observations and in some cases the model/observation mismatches are getting worse with time.[2] Since these mismatches have persisted through six major iterations of the models, it is reasonable to assume the flaw is in the assumptions, that is within the hypothesis itself, as opposed to being in the model construction. In other words, it is likely the IPCC conceptual model should be scrapped, and a new one using different assumptions constructed. In this post we examine their underlying assumption that the Sun has not varied significantly, at least from a climate perspective, over the past 150-170 years.

As well explained by Bob Irvine,[3] only two things contribute to the thermal energy content of a planet: the amount of incoming energy and the energy residence time within the system. These two things, along with the climate system heat capacity, determine the surface temperature. Arrhenius assumed, and the IPCC still assumes, that the Sun delivers a nearly constant amount of energy to Earth over periods of a few hundred years, constant enough that it has no impact on our climate. In addition, they work with annual averages to avoid seasonal and orbital changes. In AR6, the base period is 1750 to 2019. The IPCC assumes the Sun is invariant, at least on an annual basis, over this period and that the volcanic forcing is just slightly negative, as shown in figure 2.[4] AR6 summarizes their views as follows:

“Changes in solar and volcanic activity are assessed to have together contributed a small change of –0.02 [–0.06 to +0.02] °C since 1750 (medium confidence).” (AR6, p. 962)

The change of “-0.02°C” is indistinguishable from zero. Since the IPCC assumes that solar input to Earth’s climate system does not change, temperature only varies as a function of the “energy residence time,” which they assume is controlled by human activity and greenhouse gas emissions.

As explained in part 2, greenhouse gases absorb radiation emitted by Earth’s surface and use it to warm the lower atmosphere, thus delaying its eventual escape to space. It is uncontroversial that adding more of these gases increases the delay, warming the planet’s surface.

The IPCC assumes that the radiative forcing for a doubling of CO2 from 1750 levels is 3.9[5] W/m2 or less and that the climate impact of this forcing change is roughly equivalent to a change in solar forcing of 3.9 W/m2.[6] But a 3.9 W/m2 change in the infrared radiation emitted downward by greenhouse gases in the atmosphere cannot penetrate the top millimeter of the ocean. Thus, it has a different impact than a 3.9 W/m2 change in solar radiation, part of which penetrates more than 100 meters into the ocean before it is fully absorbed. Oceans cover 70% of Earth’s surface and have a low albedo (reflectivity) to sunlight, thus the oceans absorb most of the sunlight reaching Earth.

Figure 2. The IPCC AR6 model simulated temperature change components for the period 1750-2019. Source: AR6, p 961, figure 7.7.

Downwelling greenhouse gas radiation warms the surface of the ocean briefly, then most of it is quickly carried away by the overlying wind or as latent heat of evaporation. It has a short residence time in the ocean and in Earth’s climate system. A change in incoming solar radiation is absorbed deeper in the ocean and has a longer residence time. This increases the ocean warming effect at the point of incidence and spreads the new thermal energy over a larger volume of water. The difference in the surface warming effect can be a factor of three or more, Watt-per-Watt, relative to a change in greenhouse gas back-radiation.[7]

Evidence that Bob Irvine’s hypothesis is correct includes the change in ocean temperatures over the course of one approximately 11-year solar cycle.[8] The shallow ocean heat storage above the 22°C isotherm[9] increases almost an order of magnitude more than the direct effect[10] of the solar cycle radiation increase. Further, this change is in phase with the solar cycle. Small changes in the Sun’s output can accumulate over time, increasing their effect on total climate system heat storage.

Wigley and Raper calculated that for a change in solar output of about 1.1 W/m2, roughly the change over one solar cycle, the direct change in Earth’s surface temperature should theoretically be in the range of 0.014°C to 0.025°C, which is undetectable.[11] However, Judith Lean shows that the observed surface temperature change due to the increase in solar activity is about 0.1°C, 4 to 7 times what is expected, and that the increase in the upper atmosphere is 0.3°C, more than an order of magnitude larger than expected from the change in radiation delivered to Earth.[12]
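For readers who want to see where numbers of this order come from, here is a minimal sketch of the direct Stefan-Boltzmann estimate mentioned in the footnote. The albedo, mean surface temperature, and ocean damping factors below are illustrative assumptions chosen to show how the published 0.014-0.025°C range can arise; they are not values taken from Wigley and Raper's paper.

```python
# Minimal sketch: direct surface temperature response to a solar-cycle TSI change.
# The albedo, surface temperature, and damping factors are illustrative assumptions,
# not values taken from Wigley & Raper (1990).
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
ALBEDO = 0.30         # assumed planetary albedo
T_SURF = 288.0        # assumed global mean surface temperature, K

delta_tsi = 1.1                               # W/m^2, roughly one solar cycle
delta_forcing = delta_tsi * (1 - ALBEDO) / 4  # spread over the sphere, minus reflection

# No-feedback (blackbody) sensitivity: dT/dF = 1 / (4 * sigma * T^3), ~0.19 K per W/m^2
sensitivity = 1.0 / (4 * SIGMA * T_SURF**3)
delta_t_equilibrium = sensitivity * delta_forcing

# The ocean's thermal inertia damps an 11-year oscillation well below its equilibrium
# value; damping factors of 0.4-0.7 are assumed here purely for illustration.
print(f"undamped equilibrium estimate: ~{delta_t_equilibrium:.3f} °C")
for damping in (0.4, 0.7):
    print(f"with assumed damping {damping}: ~{delta_t_equilibrium * damping:.3f} °C")
```

With these assumptions the damped values roughly bracket the quoted 0.014-0.025°C range, which is the baseline against which Lean's observed ~0.1°C response is compared.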

Lean also adds that were the Sun to become anomalously low, as it was during the Maunder Solar Grand Minimum (1645 to 1715), the expected global surface temperature cooling would still be less than a few tenths of a °C. This is only true if the cooling is linear with the change in radiation and if there are no unexpected amplifiers in the climate system; both assumptions are unlikely. We know that there are amplifiers in the climate system because the warming and cooling over the solar cycle are larger than the theoretical change, as Wigley and Raper have shown. The warming and cooling could be linear with the change in radiation, but there is no reason to assume this; Earth’s surface is complex and ever changing.[13]

More simply put, we know that the climate system somehow amplifies changes in insolation, but we don’t know exactly how. We know that solar output during the Maunder Solar Grand Minimum was less than it is now, and that the difference from current solar output is small in percentage terms, but we have no idea what effect the change had on Earth’s climate, only that historical records and climate proxies suggest the effect was very large.

Known solar cycles correlate well with known climate cycles and are in phase with them.[14] Various hypotheses have been proposed to show how the Sun’s output changes over time periods of a thousand years or less. These are periods short enough to affect surface temperature from 1750, near the end of the Little Ice Age,[15] to 2019. The problem is that although a correlation between solar activity proxies[16] and climate change can be demonstrated,[17] a mechanism for the change in solar activity cannot. Attempts to explain solar variability by internal changes in the Sun only work in some cases. For example, Frank Stefani and colleagues have shown how the approximately 193-year de Vries solar cycle may be a beat period between the 22.14-year Hale solar cycle and the 19.86-year orbit of the Sun around the solar system barycenter.[18]
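The beat-period arithmetic behind that claim is easy to check; the snippet below is just that check, not Stefani's model:

```python
# Beat period of two nearby cycles: 1 / |1/P1 - 1/P2|
hale_cycle = 22.14        # years, Hale (magnetic) solar cycle
barycenter_orbit = 19.86  # years, Sun's orbit about the solar system barycenter

beat_period = 1.0 / abs(1.0 / barycenter_orbit - 1.0 / hale_cycle)
print(f"beat period ≈ {beat_period:.0f} years")  # ≈ 193 years, the de Vries cycle
```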

Nicola Scafetta and Antonio Bianchini have shown that the orbits of the planets around the Sun correlate with solar activity proxies.[19] However, exactly how the small gravitational changes influence the solar dynamo is unclear. Thus, the hypothesis that solar activity is regulated within the Sun itself cannot completely reproduce observations, and planetary tidal forces seem too weak to accomplish the changes. These gaps in our knowledge of the mechanisms impede the acceptance that multi-centennial or multi-millennial solar changes can influence our climate. The Sun does change according to accepted solar proxies, like carbon-14 and beryllium-10 records, but the change mechanism is unclear.

The problem with the IPCC (and Arrhenius’) assumptions is that they ignore this empirical and theoretical evidence that solar output and/or solar energy input to the Earth’s climate system varies significantly over periods of a few hundred years. Their obsession with human greenhouse gases has blinded them to possible natural influences on climate change that they should be investigating. This is not to say that human greenhouse gases have no effect, it is likely that they do have some effect, but evidence suggests that natural influences, like the Modern Solar Maximum[20] and ocean oscillations,[21] play a significant role also.

There is a large body of peer-reviewed literature on solar activity as a climate change driver, yet AR6 ignores most of it. A very comprehensive review of recent research on the effect of the Sun on Earth’s climate is presented in a recent paper by Ronan Connolly and 22 colleagues.[22] In the paper they cite 396 papers on the connection between the Sun and climate, as opposed to only 68 in AR6 WG1.[23] Both AR6 WG1 and the paper by Connolly, et al. were first published in 2021. This illustrates how selective the IPCC authors were in choosing what research to consider in their report.

There is no valid reason to assume that the Sun was constant in its effect on Earth’s climate from 1750 until today. The usual reasoning is that observed changes in solar output are too small, in terms of power delivered per square meter (W/m2), relative to changes caused by increasing greenhouse gases, but as Irvine explains these two sources of change are not comparable because the frequency content of the two sources is different.

Summary

The goal of this post is not to convince anyone that solar variability is responsible for all or part of modern global warming, a subject that is well covered elsewhere.[24] The point is that the IPCC reports and the CMIP models do not consider or investigate this possibility.

It is true that exactly how solar variability occurs and how it affects climate are not known, but the Sun does vary, and the variations correlate with climate changes. It is unlikely that climate changes are a direct result of the change in insolation alone; the solar changes are somehow amplified by Earth’s climate system.

We also do not know how much solar output has varied since 1650, the middle of the devastatingly cold Little Ice Age and the onset of the Maunder Solar Grand Minimum. There are several possible reconstructions of solar output since then. Figure 3 shows one of them constructed from an ice core beryllium-10 isotope record by Steinhilber, et al. The major climatic periods since 0AD are noted on it, and the Solar Grand Minima are identified.

Figure 3. The Steinhilber, et al. (2009) TSI reconstruction from beryllium-10 isotopes. The solar grand minima are identified, as well as the major climatic periods since 0AD.

The absolute values of delta-TSI (the change in total solar irradiance), in W/m2, plotted in Figure 3 are based on one of many possible modern TSI reconstructions (PMOD) and may not be accurate, but their values relative to one another are reasonable. None of the modern satellite TSI reconstructions are well supported, and the debate over which one is best is furious and ongoing. See the discussion here for an introduction to the debate. It is best not to read too much into the absolute values on the Y axis of Figure 3; treat it as a TSI index, since no one really knows how much TSI has changed, even over the satellite era. Further, as we’ve seen, how TSI changes relate to climate changes quantitatively is also not known. All we know is that they generally change together.

In Figure 3 we can see that colder periods, like the Little Ice Age, contain some solar peaks and some warmer intervals, and that warmer periods, like the Medieval Warm Period, contain solar lows. None of the climatic periods identified in Figure 3 were uniformly cold or warm. What we call the Little Ice Age had some hot periods, and the Medieval Warm Period had cold periods (see the section after figure 2 here for references). Further, the correlation between solar activity and climate is not exact, nor is it uniform and synchronous over the whole planet. This is probably because of the effects of convection and atmospheric and oceanic circulation that I examine in the next post. Climate change is complicated.

The beginning and end of the climate periods identified in figure 3 are approximate, and mostly a judgement call. All the climate periods start and end at different times in different places.

However, we do know that some solar proxy reconstructions correlate well with climate proxies since 1850 (see Table 1 here),[25] and that alone is justification for additional research. Solar variability can explain anywhere from zero to almost 100% of the warming since 1850, depending upon the datasets used.[26]

This is a very brief summary of the evidence that changes in solar activity affect climate. More comprehensive discussions of possible mechanisms and the evidence for them are available.[27] Suffice it to say that this is an area of research that is too often ignored and brushed away as unimportant, especially by the IPCC. The sometimes excellent correlations in the peer-reviewed literature between solar activity and climate change alone should be enough to spur research. The fact that the IPCC has ignored these correlations is evidence of bias.

A point we will make many times in this series is that the Earth is not a uniform single thermodynamic body. Its surface is constantly changing. Treating it as a simple thermodynamic body, one that can be characterized by a global average temperature, is a huge mistake. Next, in part 4, we will discuss the potential impact of long-term changes in convection patterns.

Download the bibliography here.



  1. (Lacis, Hansen, Russell, Oinas, & Jonas, 2013), (Lacis, Schmidt, Rind, & Ruedy, 2010), and (IPCC, 2021, p. 179) 


  2. (McKitrick & Christy, A Test of the Tropical 200- to 300-hPa Warming Rate in Climate Models, Earth and Space Science, 2018), (McKitrick & Christy, 2020), (Lewis, 2023), (IPCC, 2021, p. 990) 


  3. (Irvine, A Thought Experiment; Simplifying the Climate Riddle, 2023) and (Irvine, A comparison of the efficacy of green house gas forcing and solar forcing, 2014) 


  4. (IPCC, 2021, p. 961) 


  5. (IPCC, 2021, p. 925) 


  6. (IPCC, 2021, p. 959), (Hansen, et al., 2005), and (IPCC, 2013, pp. 664-667) 


  7. (Irvine, A Thought Experiment; Simplifying the Climate Riddle, 2023) and (Irvine, A comparison of the efficacy of green house gas forcing and solar forcing, 2014). Irvine provides estimates of the surface warming “efficacy” of greenhouse gas forcing versus solar forcing. 


  8. Also called the Schwabe solar cycle. 


  9. An isotherm is a surface of equal temperature, in this case the depth below the ocean surface where the water temperature is 22°C. 


  10. (White, Dettinger, & Cayan, 2003). The ocean temperature change expected from the change in radiation is computed with the Stefan-Boltzmann equation. The expected change in heat content assumes a solar cycle radiation change of about 0.1 W/m2. 


  11. (Wigley & Raper, 1990) 


  12. (Lean, 2017) 


  13. https://andymaypetrophysicist.com/2017/09/09/hadcru-power-and-temperature/ 


  14. (Connolly et al., 2021), (Soon W. , et al., 2023), (Scafetta N. , Empirical assessment of the role of the Sun in climate change using balanced multi-proxy solar records., 2023), and (Soon, Connolly, & Connolly, 2015). 


  15. (Behringer, 2010) and (May, Are fossil-fuel CO2 emissions good or bad?, 2022) 


  16. (Scafetta N. , Understanding the role of the sun in climate change, 2023c) and (Scafetta N. , Empirical assessment of the role of the Sun in climate change using balanced multi-proxy solar records., 2023) 


  17. (Connolly, et al., 2023), Table 1 


  18. (Stefani, Horstmann, Klevs, Mamatsashvili, & Weier, 2023) 


  19. (Scafetta & Bianchini, Overview of the Spectral Coherence between Planetary Resonances and Solar and Climate Oscillations, 2023b) and (Scafetta & Bianchini, The Planetary Theory of Solar Activity Variability: A Review, 2022) 


  20. (Vinós & May, The Sun-Climate Effect: The Winter Gatekeeper Hypothesis (I). The search for a solar signal, 2022) and (Usoskin, Solanki, & Kovaltsov, 2007) 


  21. (Vinós & May, The Winter Gatekeeper hypothesis (VII). A summary and some questions, 2022f), (Wyatt & Peters, A secularly varying hemispheric climate-signal propagation previously detected in instrumental and proxy data not detected in CMIP3 data base, 2012b), (Wyatt, Kravtsov, & Tsonis, Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability, 2012a), and (Wyatt & Curry, 2014). 


  22. (Connolly et al., 2021) 


  23. (Soon, Connolly, & Connolly, 2024, p. 60) 


  24. (Connolly et al., 2021), (Soon, Connolly, & Connolly, Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century, 2015), (Crok & May, 2023), (Hoyt & Schatten, 1997), and (Haigh, 2011) 


  25. (Connolly, et al., 2023), see Table 1. 


  26. (Connolly et al., 2021) 


  27. (Soon, Connolly, & Connolly, Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century, 2015), (Connolly et al., 2021), (Soon W. , et al., 2023), (Vinós, Climate of the Past, Present and Future, A Scientific Debate, 2nd Edition, 2022), (Hoyt & Schatten, 1997), and (Haigh, 2011). 

Climate Model Bias 1: What is a Model?

From Watts Up With That?

By Andy May

There are three types of scientific models, as shown in figure 1. In this series of seven posts on climate model bias we are only concerned with two of them.

The first are mathematical models that utilize well-established physical and chemical processes and principles to model some part of our reality, especially the climate and the economy.

The second are conceptual models that utilize scientific hypotheses and assumptions to propose an idea of how something, such as the climate, works.

Conceptual models are generally tested, and hopefully validated, by creating a mathematical model. The output from the mathematical model is compared to observations and if the output matches the observations closely, the model is validated. It isn’t proven, but it is shown to be useful, and the conceptual model gains credibility.

Figure 1. The three types of scientific models.

Models are useful when used to decompose some complex natural system, such as Earth’s climate, or some portion of the system, into its underlying components and drivers. Models can be used to try and determine which of the system components and drivers are the most important under various model scenarios.

Besides being used to predict the future, or a possible future, good models should also tell us what should not happen in the future. If these events do not occur, it adds support to the hypothesis. These are the tasks that the climate models created by the Coupled Model Intercomparison Project (CMIP)[1] are designed to do. The Intergovernmental Panel on Climate Change (IPCC)[2] analyzes the CMIP model results, along with other peer-reviewed research, and attempts to explain modern global warming in their reports. The most recent IPCC report is called AR6.[3]

In the context of climate change, especially regarding the AR6 IPCC[4] report, the term “model” is often used as an abbreviation for a general circulation climate model.[5] Modern computerized general circulation models have been around since the 1960s, and are now huge computer programs that can run for days or longer on powerful computers. However, climate modeling has been around for more than a century, well before computers were invented. Later in this report I will briefly discuss a 19th century greenhouse gas climate model developed and published by Svante Arrhenius.

Besides modeling climate change, AR6 contains descriptions of socio-economic models that attempt to predict the impact of selected climate changes on society and the economy. In a sense, AR6, just like the previous assessment reports, is a presentation of the results of the latest iteration of their scientific models of future climate and their models of the impact of possible future climates on humanity.

Introduction

Modern atmospheric general circulation computerized climate models were first introduced in the 1960s by Syukuro Manabe and colleagues.[6] These models, and their descendants, can be useful, even though they are clearly oversimplifications of nature, and they are wrong[7] in many respects, like all models.[8] It is a shame, but climate model results are often conflated with observations by the media and the public, when they are anything but.

I began writing scientific models of rocks[9] and programming them for computers in the 1970s, and like all modelers of that era I was heavily influenced by George Box, the famous University of Wisconsin statistician. Box teaches us that all models are developed iteratively.[10] First we make assumptions and build a conceptual model of how some natural, economic, or other system works and what influences it, then we model some part of it, or the whole system. The model results are then compared to observations. There will typically be a difference between the model results and the observations; these differences are assumed to be due to model error, since we necessarily assume, at least initially, that our observations have no error. We examine the errors, adjust the model parameters or the model assumptions, or both, run the model again, and again examine the errors. This “learning” process is the main benefit of models. Box tells us that good scientists must have the flexibility and courage to seek out, recognize, and exploit such errors, especially any errors in the conceptual model assumptions. Modeling nature is how we learn how nature works.

Box next advises us that “we should not fall in love with our models,” and “since all models are wrong the scientists cannot obtain a ‘correct’ one by excessive elaboration.” I used to explain this principle to other modelers more crudely by pointing out that if you polish a turd, it is still a turd. One must recognize when a model has gone as far as it can go. At some point it is done, more data, more elaborate programming, more complicated assumptions cannot save it. The benefit of the model is what you learned building it, not the model itself. When the inevitable endpoint is reached, you must trash the model and start over by building a new conceptual model. A new model will have a new set of assumptions based on the “learnings” from the old model, and other new data and observations gathered in the meantime.

Each IPCC report, since the first one was published in 1990,[11] is a single iteration of the same overall conceptual model. In this case, the “conceptual model” is the idea or hypothesis that humans control the climate (or perhaps just the rate of global warming) with our greenhouse gas emissions.[12] Various and more detailed computerized models are built to attempt to measure the impact of human emissions on Earth’s climate.

Another key assumption in the IPCC model is that climate change is dangerous, and, as a result, we must mitigate (reduce) fossil fuel use to reduce or prevent damage to society from climate change. Finally, they assume a key metric of this global climate change or warming is the climate sensitivity to human-caused increases in CO2. This sensitivity can be computed with models or using measurements of changes in atmospheric CO2 and global average surface temperature. The IPCC equates changes in global average surface temperature to “climate change.”

This climate sensitivity metric is often called “ECS,” which stands for equilibrium climate sensitivity to a doubling of CO2, often abbreviated as “2xCO2.”[13] Modern climate models, ever since those used for the famous Charney report in 1979,[14] except for AR6, have generated a range of ECS values from 1.5 to 4.5°C per 2xCO2. AR6 uses a rather unique and complex subjective model that results in a range of 2.5 to 4°C/2xCO2. More about this later in the report.
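As a rough illustration of how the ECS metric relates to a forcing, the sketch below converts a 2xCO2 forcing into a temperature range by dividing by an assumed net feedback parameter. The 5.35·ln(C/C0) expression is the older simplified CO2 forcing approximation (it gives about 3.7 W/m2, slightly below the 3.9 W/m2 AR6 value quoted in part 3), and the feedback values are illustrative assumptions, not the AR6 method.

```python
import math

# Approximate radiative forcing from a CO2 change (simplified log approximation).
def co2_forcing(c_new, c_old):
    return 5.35 * math.log(c_new / c_old)   # W/m^2

f_2x = co2_forcing(2.0, 1.0)                # ~3.7 W/m^2 for a doubling
print(f"2xCO2 forcing ≈ {f_2x:.2f} W/m^2")

# ECS = forcing / net feedback parameter (lambda, in W/m^2 per K).
# The lambda values below are illustrative, chosen only to span a 2.5-4 °C ECS range.
for lam in (1.5, 1.0, 0.9):
    print(f"lambda = {lam}: ECS ≈ {f_2x / lam:.1f} °C per 2xCO2")
```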

George Box warns modelers that:

“Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”[15] (Box, 1976)

The Intergovernmental Panel on Climate Change or IPCC has published six major reports and numerous minor reports since 1990.[16] Here we will argue that they have spent more than thirty years polishing the turd to little effect. They have come up with more and more elaborate processes to try and save their hypothesis that human-generated greenhouse gases have caused recent climate changes and that the Sun and internal variations within Earth’s climate system have had little to no effect. As we will show, new climate science discoveries, since 1990, are not explained by the IPCC models, do not show up in the model output, and newly discovered climate processes, especially important ocean oscillations, are not incorporated into them.

Just one example: Eade, et al. report that the modern general circulation climate models used for the AR5 and AR6 reports[17] do not reproduce the important North Atlantic Oscillation (“NAO”). The NAO-like signal that the models produce in their simulation runs[18] is indistinguishable from random white noise. Eade, et al. report:

“This suggests that current climate models do not fully represent important aspects of the mechanism for low frequency variability of the NAO.”[19] (Eade, et al., 2022)

All the models in AR6, both climate and socio-economic, have important model/observation mismatches. As time has gone on, the modelers and authors have continued to ignore new developments in climate science and climate change economics, as their “overelaboration and overparameterization” has become more extreme. As they make their models more elaborate, they progressively ignore more new data and discoveries to decrease their apparent “uncertainty” and increase their reported “confidence” that humans drive climate change. It is a false confidence that is due to the confirmation and reporting bias in both the models and the reports.

As I reviewed all six of the major IPCC reports, I became convinced that AR6 is the most biased of all of them.[20] In a major new book, twelve colleagues and I, working under the Clintel[21] umbrella, examined AR6 and detailed considerable evidence of bias.

From the Epilog[22] of the Clintel book:

“AR6 states that “there has been negligible long-term influence from solar activity and volcanoes,”[23] and acknowledges no other natural influence on multidecadal climate change despite … recent discoveries, a true case of tunnel vision.”

“We were promised IPCC reports that would objectively report on the peer-reviewed scientific literature, yet we find numerous examples where important research was ignored. In Ross McKitrick’s chapter[24] on the “hot spot,” he lists many important papers that are not even mentioned in AR6. Marcel [Crok] gives examples where unreasonable emissions scenarios are used to frighten the public in his chapter on scenarios,[25] and examples of hiding good news in his chapter on extreme weather events.[26] Numerous other examples are documented in other chapters. These deliberate omissions and distortions of the truth do not speak well for the IPCC, reform of the institution is desperately needed.” (Crok and May, 2023)

Confirmation[27] and reporting bias[28] are very common in AR6. We also find examples of the Dunning-Kruger effect,[29] in-group bias,[30] and anchoring bias.[31]

In 2010, the InterAcademy Council, at the request of the United Nations and the IPCC, reviewed the processes and procedures of the IPCC and found many problems.[32] In particular, they criticized the subjective way that uncertainty is handled. They also criticized the obvious confirmation bias in the IPCC reports.[33] They pointed out that the Lead Authors too often leave out dissenting views or references to papers they disagree with. The Council recommended that alternative views should be mentioned and cited in the report. Even though these criticisms were voiced in 2010, my colleagues and I found numerous examples of these problems in AR6, published eleven years later in 2021 and 2022.[34]

Although bias pervades AR6, this series will focus mainly on bias in the AR6 volume 1 (WGI) CMIP6[35] climate models that are used to predict future climate. However, we will also look at the models used to identify and quantify climate change impacts in volume 2 (WGII), and to compute the cost/benefit analysis of their recommended mitigation (fossil fuel reduction) measures in volume 3 (WGIII). As a former petrophysical modeler, I am aware of how bias can sneak into a computer model; sometimes the modeler is aware he is introducing bias into the results, sometimes he is not. Bias exists in all models, since they are all built from assumptions and ideas (the “conceptual model”), but a good modeler will do his best to minimize it.

In the next six posts I will take you through some of the evidence of bias I found in the CMIP6 models and the AR6 report. A 30,000-foot look at the history of human-caused climate change modeling is given in part 2. Evidence that the IPCC has ignored possible solar influence on climate is presented in part 3. The IPCC ignores evidence that changes in convection and circulation patterns in the oceans and atmosphere affect climate change on multidecadal time scales; this is examined in part 4.

Contrary to the common narrative, there is considerable evidence that storminess (extreme weather) was higher in the Little Ice Age, aka the “pre-industrial” (part 5). Next, we move on to examine bias in the IPCC AR6 WGII report[36] on the impact, adaptation, and vulnerability to climate change in part 6 and in their report[37] on how to mitigate climate change in part 7.

Download the bibliography here.


  1. https://wcrp-cmip.org/ 

  2. https://www.ipcc.ch/ 

  3. (IPCC, 2021) 

  4. IPCC is an abbreviation for the Intergovernmental Panel on Climate Change, a U.N. agency. AR6 is their sixth major report on climate change, “Assessment Report 6.” 

  5. There are several names for climate models, including atmosphere-ocean general circulation model (AOGCM, used in AR5), or Earth system model (ESM, used in AR6). Besides these complicated computer climate models there are other models used in AR6, some model energy flows, the impact of climate change on society or the global economy, or the impact of various greenhouse gas mitigation efforts. We only discuss some of these models in this report. (IPCC, 2021, p. 2223) 

  6. (Manabe & Bryan, Climate Calculations with a Combined Ocean-Atmosphere Model, 1969), (Manabe & Wetherald, The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model, 1975) 

  7. (McKitrick & Christy, A Test of the Tropical 200- to 300-hPa Warming Rate in Climate Models, Earth and Space Science, 2018) and (McKitrick & Christy, 2020) 

  8. (Box, 1976) 

  9. Called petrophysical models. 

  10. (Box, 1976) 

  11. (IPCC, 1990) 

  12. “The Intergovernmental Panel on Climate Change (IPCC) assesses the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.” (UNFCCC, 2020). 

  13. Usually, ECS means equilibrium climate sensitivity, or the ultimate change in surface temperature due to a doubling of CO2, but in AR6 they sometimes refer to “Effective Climate Sensitivity,” or the “effective ECS,” which is defined as the warming after a specified number of years (IPCC, 2021, pp. 931-933). AR6, WGI, page 933 has a more complete definition. 

  14. (Charney, et al., 1979) 

  15. (Box, 1976) 

  16. See https://www.ipcc.ch/reports/ 

  17. CMIP5 and CMIP6 are the models used in AR5 and AR6 IPCC reports, respectively. 

  18. (Eade, Stephenson, & Scaife, 2022) 

  19. (Eade, Stephenson, & Scaife, 2022) 

  20. (May, Is AR6 the worst and most biased IPCC Report?, 2023c; May, The IPCC AR6 Report Erases the Holocene, 2023d) 

  21. https://clintel.org/ 

  22. (Crok & May, 2023, pp. 170-172) 

  23. AR6, page 67. 

  24. (Crok & May, 2023, pp. 108-113) 

  25. (Crok & May, 2023, pp. 118-126) 

  26. (Crok & May, 2023, pp. 140-149) 

  27. Confirmation bias: The tendency to look only for data that supports a previously held belief. It also means all new data is interpreted in a way that supports a prior belief. Wikipedia has a fairly good article on common cognitive biases. 

  28. Reporting bias: In this context it means only reporting or publishing results that favor a previously held belief and censoring or ignoring results that show the belief is questionable. 

  29. The Dunning-Kruger effect is the tendency to overestimate one’s abilities in a particular subject. In this context we see climate modelers, who call themselves “climate scientists,” overestimate their knowledge of paleoclimatology, atmospheric sciences, and atomic physics. 

  30. In-group bias causes lead authors and editors to choose their authors and research papers from their associates and friends who share their beliefs. 

  31. Anchoring bias occurs when an early result or calculation, for example Svante Arrhenius’ ECS (climate sensitivity to CO2) of 4°C, discussed below, gets fixed in a researcher’s mind and then he “adjusts” his thinking and data interpretation to always come close to that value, while ignoring contrary data. 

  32. (InterAcademy Council, 2010) 

  33. (InterAcademy Council, 2010, pp. 17-18) 

  34. (Crok & May, 2023) 

  35. https://wcrp-cmip.org/cmip-phase-6-cmip6/ 

  36. (IPCC, 2022) 

  37. (IPCC, 2022b) 

Top Climate Model Improved to Show ENSO Skill

From Science Matters

By Ron Clutz

Previous posts (linked at end) discuss how the climate model from RAS (Russian Academy of Science) has evolved through several versions. The interest arose because of its greater ability to replicate the past temperature history. The model is part of the CMIP program, which is now taking the next step to CMIP7, and it is one of the first models to be tested with a new climate simulation. Improvements to the latest version, INMCM60, show an enhanced ability to replicate ENSO oscillations in the Pacific Ocean, which have significant climate impacts worldwide.

This news comes by way of a new paper published in the Russian Journal of Numerical Analysis and Mathematical Modelling in February 2024. The title is ENSO phase locking, asymmetry and predictability in the INMCM Earth system model, Seleznev et al. (2024). Excerpts in italics with my bolds and images from the article.

Abstract:

Advanced numerical climate models are known to exhibit biases in simulating some features of El Niño–Southern Oscillation (ENSO) which is a key mode of inter-annual climate variability. In this study we analyze how two fundamental features of observed ENSO – asymmetry between hot and cold states and phase-locking to the annual cycle – are reflected in two different versions of the INMCM Earth system model (state-of-the-art Earth system model participating in the Coupled Model Intercomparison Project).

We identify the above ENSO features using the conventional empirical orthogonal functions (EOF) analysis which is applied to both observed and simulated upper ocean heat content (OHC) data in the tropical Pacific. We obtain that the observed tropical Pacific OHC variability is described well by two leading EOF-modes which roughly reflect the fundamental recharge-discharge mechanism of ENSO. These modes exhibit strong seasonal cycles associated with ENSO phase locking while the revealed nonlinear dependencies between amplitudes of these cycles reflect ENSO asymmetry.

We also assess and compare predictability of observed and simulated ENSO based on linear inverse modeling. We find that the improved INMCM6 model has significant benefits in simulating described features of observed ENSO as compared with the previous INMCM5 model. The improvements of the INMCM6 model providing such benefits are discussed. We argue that proper cloud parametrization scheme is crucial for accurate simulation of ENSO dynamics with numerical climate models.

Introduction

El Niño–Southern Oscillation (ENSO) is the most prominent mode of inter-annual climate variability which originates in the tropical Pacific, but has a global impact [41]. Accurately simulating ENSO is still a challenging task for global climate modelers [3,5,15,25]. In the comprehensive study [35] large-ensemble climate model simulations provided by the Coupled Model Intercomparison Project phases 5 (CMIP5) and 6 (CMIP6) were analyzed. It was found that the CMIP6 models significantly outperform those from CMIP5 for 8 out of 24 ENSO-relevant metrics, especially regarding the simulation of ENSO spatial patterns, diversity and teleconnections. Nevertheless, some important aspects of the observed ENSO are still not satisfactorily simulated by most of the state-of-the-art models [7,38,49]. In this study we are aimed at examination of how two such aspects – ENSO asymmetry and ENSO phase-locking to the annual cycle – are reflected in the INMCM Earth system model [44, 45].

The asymmetry between hot (El Nino) and cold (La Nina) states is a fundamental feature in the observed ENSO occurrences [39]. El Niño events are often stronger than La Niña events, while the latter ones tend to be more persistent [10]. Such an asymmetry is generally attributed to nonlinear feedbacks between sea surface temperatures (SSTs), thermocline and winds in the tropical Pacific [2,19,28]. The alternative conceptions highlight the role of tropical instability waves [1] and fast atmospheric processes associated with irregular zonal wind anomalies [24]. ENSO phase-locking is identified as the tendency of ENSO-events to peak in boreal winter.

Several studies [11,17,34] argue that the phase-locking is associated with seasonal changes in thermocline depth, ocean upwelling velocity, and cloud feedback processes. These processes collectively contribute to the coupling strength modulation between ocean and atmosphere, which, in the context of conceptual ENSO models [4,18], provides seasonal modulation of stability (in the sense of decay rate) of the “ENSO oscillator”. Another theory [20,42] supposes the phase-locking results from nonlinear interactions between the seasonal forcing and the inherent ENSO cycle. Both the asymmetry and phase-locking effects are typically captured by low-dimensional data-driven ENSO models [14, 21, 26, 29, 37].

In this work we identify the ENSO features discussed above via the analysis of upper ocean heat content (OHC) variability in the tropical Pacific. The recent study [37] analyzed high-resolution reanalysis dataset of the tropical Pacific (10N – 10S, 120E – 80W) OHC anomalies in the 0–300 m depth layer using the standard empirical orthogonal function (EOF) decomposition [16]. It was found that observed OHC variability is effectively captured by two leading EOFs, which roughly describe the fundamental recharge-discharge mechanism of ENSO [18]. The time series of the corresponding principal components (PCs) demonstrate strong seasonal cycles, reflecting ENSO phase-locking, while the revealed inter-annual nonlinear dependencies between these cycles can be associated with ENSO asymmetry [37].
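For readers unfamiliar with the technique, an EOF decomposition is simply a principal component analysis of the space-time anomaly field. The sketch below shows the standard SVD route on a synthetic stand-in array; the variable names, array shapes, and random data are illustrative and are not the authors' code or data.

```python
import numpy as np

# Synthetic stand-in for a tropical Pacific OHC anomaly field:
# rows = monthly time steps, columns = grid points (flattened lat/lon).
rng = np.random.default_rng(0)
ohc_anom = rng.standard_normal((480, 2000))      # 40 years x 2000 grid cells

# Remove the time mean at each grid point, then take the SVD.
field = ohc_anom - ohc_anom.mean(axis=0)
u, s, vt = np.linalg.svd(field, full_matrices=False)

eofs = vt[:2]                    # two leading spatial patterns (EOF1, EOF2)
pcs = u[:, :2] * s[:2]           # corresponding principal component time series
explained = s**2 / np.sum(s**2)  # fraction of variance explained by each mode
print("variance explained by EOF1, EOF2:", explained[:2])
```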

Here we apply similar analysis to the OHC data simulated by two different versions of INMCM Earth system model. The first is the INMCM5 model [45] from CMIP6, and the second is the perspective INMCM6 [44] model with improved parameterization of clouds, large-scale condensation and aerosols. Along with the traditional EOF decomposition we invoke the linear inverse modeling to assess and compare predictability of ENSO from observed and simulated data.
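Linear inverse modeling, used in the paper to compare ENSO predictability, amounts to fitting a linear propagator from lagged covariances of the principal component time series and using it to forecast. A minimal sketch of that idea, on synthetic data, is given below; it is illustrative only and not the authors' implementation.

```python
import numpy as np

def fit_lim_propagator(pcs, lag=3):
    """Fit G(tau) = C(tau) @ inv(C(0)) from a (time x modes) PC array."""
    x0 = pcs[:-lag]             # state at time t
    x1 = pcs[lag:]              # state at time t + lag
    c0 = x0.T @ x0 / len(x0)    # zero-lag covariance
    ctau = x1.T @ x0 / len(x0)  # lag covariance
    return ctau @ np.linalg.inv(c0)

def lim_forecast(g_tau, state):
    """Forecast the PC state vector one lag ahead."""
    return g_tau @ state

# Example with a synthetic two-mode PC series (stand-in data only).
rng = np.random.default_rng(1)
pcs = np.cumsum(rng.standard_normal((480, 2)), axis=0)
g = fit_lim_propagator(pcs, lag=3)
print("3-month forecast of last state:", lim_forecast(g, pcs[-1]))
```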

The paper is organized as follows. Sect. 2 describes the datasets we analyze: OHC reanalysis dataset and OHC data obtained from the ensemble simulations of global climate with two versions of INMCM model. Data preparation, including separation of the forced and internal variability, is also discussed. The ensemble EOF analysis is represented, which is used for identifying the meaningful processes contributing to observed and simulated ENSO dynamics. Sect. 3 presents the results we obtain in analyzing both observed and simulated OHC data. In Sect. 4 we summarize and discuss the obtained results, particularly regarding the significant benefits of new version of INMCM model (INMCM6) in simulating key features of observed ENSO.

Fig. 1: Two leading EOFs of the observed tropical Pacific upper ocean heat content (OHC) variability
Fig. 2: Two leading EOFs of the INMCM5 ensemble of tropical Pacific upper ocean heat content simulations
Fig. 3: The same as in Fig. 2 but for INMCM6 model simulations

The corresponding spatial patterns in Fig. 1 have clear interpretation. The first contributes to the central and eastern tropical Pacific, where most significant variations of sea surface temperature (SST) during El Niño/La Nina events occur [9]. The second predominates mainly in the western tropical Pacific and can be associated with the OHC accumulation and discharge before and during the El Niño events [48].

What we can see from Fig. 2 is that the two leading EOFs of OHC variability simulated by the INMCM5 model do not correspond to the observed ones. The corresponding time series and spatial patterns exhibit smaller-scale features, as compared to those we obtain from the reanalysis data, indicating their noisier spatio-temporal nature.

The two leading EOFs of the improved INMCM6 model (Fig. 3), by contrast, capture well both the spatial and temporal features of observed EOFs. In the next section we focus on further analysis of these EOFs assuming that they contain the most meaningful information about ENSO dynamics.

Discussion

In this study we have analyzed how two different versions of the INMCM model [44,45] (state-of-the-art Earth system model participating in the Coupled Model Intercomparison Project, CMIP) simulate some features of El Niño–Southern Oscillation (ENSO) which is a key mode of the global climate. We identified the ENSO features via the EOF analysis applied to both observed and simulated upper ocean heat content (OHC) variability in the tropical Pacific. It was found that the observed tropical Pacific OHC variability is captured well by two leading modes (EOFs) which reflect the fundamental recharge-discharge mechanism of ENSO involving a recharge and discharge of OHC along the equator caused by a disequilibrium between zonal winds and zonal mean thermocline depth. These modes are phase-shifted and exhibit the strong seasonal cycles associated with ENSO phase locking. The inter-annual dependencies between amplitudes of the revealed ENSO seasonal cycles are strongly nonlinear which reflects the asymmetry between hot (El Nino) and cold (La Nina) states of observed ENSO. We found that the INMCM5 model (the previous version of the INMCM model from CMIP6) poorly reproduces the leading modes of observed ENSO and reflects neither the observed ENSO phase locking nor the asymmetry. At the same time, the perspective INMCM6 model demonstrates significant improvement in simulating these key features of observed ENSO. The analysis of ENSO predictability based on linear inverse modeling indicates that the improved INMCM6 model reflects well the ENSO spring predictability barrier and therefore could potentially have an advantage in long range weather prediction as compared with the INMCM5.

Such benefits of the new version of the INMCM model (INMCM6) in simulating observed ENSO dynamics can be provided by using more relevant parametrization of sub-grid scale processes. Particularly, the difference in the amplitude of OHC anomaly associated with ENSO between INMCM5 and INMCM6 shown in Figs. 2-3 can be explained mainly by the difference in cloud parameterization in these models. In short, in INMCM5 El-Nino event leads to increase of middle and low clouds over central and eastern Pacific that leads to cooling because of decrease in surface incoming shortwave radiation.

While decrease in low clouds and increase in high clouds in INMCM6 over El-Nino region during positive phase of ENSO lead to further upper ocean warming [43]. This is consistent with the recent study [36] which argued that erroneous cloud feedback arising from a dominant contribution of low-level clouds may lead to heat flux feedback bias in the tropical Pacific, which play a key role in ENSO dynamics. Fast decrease in OHC in central Pacific after El-Nino maximum in INMCM6 can probably occur because of too shallow mixed layer in equatorial Pacific in the model, that leads to fast surface cooling after renewal of upwelling and further increase of trade winds. Summarizing the above we can conclude that proper cloud parameterization scheme is crucial for accurate simulation of observed ENSO with numerical climate models.

Background on INMCM6

The INMCM60 model, like the previous INMCM48 [1], consists of three major components: atmospheric dynamics, aerosol evolution, and ocean dynamics. The atmospheric component incorporates a land model including surface, vegetation, and soil. The oceanic component also encompasses a sea-ice evolution model. In the atmosphere, both versions have a spatial resolution of 2° × 1° in longitude and latitude and 21 vertical levels up to 10 hPa. In the ocean, the resolution is 1° × 0.5° with 40 levels.

The following changes have been introduced into the model compared to INMCM48.

Parameterization of clouds and large-scale condensation is identical to that described in [4], except that tuning parameters of this parameterization differ from any of the versions outlined in [3], being, however, closest to version 4. The main difference from it is that the cloud water flux rating boundary-layer clouds is estimated not only for reasons of boundary-layer turbulence development, but also from the condition of moist instability, which, under deep convection, results in fewer clouds in the boundary layer and more in the upper troposphere. The equilibrium sensitivity of such a version to a doubling of atmospheric CO2 is about 3.3 K.

The aerosol scheme has also been updated by including a change in the calculation of natural emissions of sulfate aerosol [5] and wet scavenging, as well as the influence of aerosol concentration on the cloud droplet radius, i.e., the first indirect effect [6]. Numerical values of the constants, however, were taken to be a little different from those used in [5]. Additionally, the improved scheme of snow evolution taking into account refreezing and the calculation of the snow albedo [7] were introduced to the model. The calculation of universal functions in the atmospheric boundary layer in stable stratification has also been changed: in the latest model version, such functions assume turbulence at even large gradient Richardson numbers [8].

Strong El Nino Conditions Prevail at The End of January 2024

From Watts Up With That?

Reposted from gujaraweather.com

Ashok Patel

Enso Status on 10th February 2024

Ashok Patel’s Analysis & Commentary :

The classification of El Niño events, including the strength labels, is somewhat subjective and can vary among meteorological and climate agencies. There isn’t a strict rule defining the specific number of consecutive Oceanic Niño Index (ONI) values that must be 2.0°C or above to categorize an El Niño event as “Super Strong.”

In general, a strong El Niño event is often characterized by ONI values reaching or exceeding +2.0°C. A Super Strong El Niño would typically involve sustained ONI values of +2.0°C or more. Hence, for ease of understanding and comparing the strength of various strong El Niño events, I propose to define an El Niño as a Super Strong event if three consecutive ONI values are +2.0°C or more.
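Both this proposed criterion and NOAA's five-season episode rule (quoted further down) reduce to counting consecutive ONI values at or above a threshold. The function below is a hypothetical illustration of that counting, applied to the 2015-16 values listed in this post; it is not an official classification tool.

```python
def longest_run_at_or_above(oni_values, threshold):
    """Longest run of consecutive ONI values >= threshold."""
    best = current = 0
    for value in oni_values:
        current = current + 1 if value >= threshold else 0
        best = max(best, current)
    return best

# 2015-16 event, highest overlapping-season ONI values quoted in this post.
oni_2015_16 = [2.2, 2.4, 2.6, 2.6, 2.5, 2.1]
print(longest_run_at_or_above(oni_2015_16, 2.0) >= 3)  # True: "Super Strong" by this definition
print(longest_run_at_or_above(oni_2015_16, 0.5) >= 5)  # True: a full El Niño episode by NOAA's rule
```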

A brief history of the past El Nino events with the number of consecutive ONI +2.0°C or above:

In the year 1965 the highest ONI index during that El Nino were SON +2.0°C, OND +2.0°C

In the year 1972-73 the highest ONI index during that El Nino were OND +2.1°C NDJ +2.1°C DJF

In the year 1982-83 the highest ONI index during that El Nino were SON +2.0°C, OND +2.2°C NDJ +2.2°C DJF +2.2°C

In the year 1997-98 the highest ONI index during that El Nino were ASO +2.1°C SON +2.3°C, OND +2.4°C NDJ +2.4°C DJF +2.2°C

In the year 2015-16 the highest ONI index during that El Nino were ASO +2.2°C SON +2.4°C, OND +2.6°C NDJ +2.6°C DJF +2.5°C JFM +2.1°C

ONI Data has been obtained from CPC – NWS – NOAA available here

There have been three Super Strong El Nino events from 1950 onwards till date. The first such event was 1982-83 Super Strong El Nino with 4 consecutive ONI +2.0°C or above with highest ONI of +2.2°C twice. The second Super Strong El Nino event was 1997-98 with five consecutive ONI +2.0°C or above with highest ONI of +2.4°C twice. The third Super Strong El Nino event was 2015-16 with six consecutive ONI +2.0°C or above with highest ONI of +2.6°C twice. The current forecast and analysis does not support the 2023-24 El Nino to become a Super Strong El Nino.

Indian Monsoon & Enso relationship for India:

Based on more than 100 years of weather data for the Indian summer monsoon, the average rainfall in El Niño years is 94% of LPA, while in La Niña years it has been 106% of LPA for the whole country. Monsoon rainfall over India was 94.4% of LPA at the end of 30th September 2023. El Niño or La Niña may affect the monsoon differently in different regions of India, which warrants research into concrete correlations for each region, if any exist. Performance of the southwest monsoon 2023 over the entire country was much better than expected.

How ONI is determined:

The ONI is based on SST departures from average in the Niño 3.4 region, and is a principal measure for monitoring, assessing, and predicting ENSO. It is defined as the three-month running-mean SST departure in the Niño 3.4 region. Departures are based on a set of further improved homogeneous historical SST analyses (Extended Reconstructed SST – ERSST.v5).

NOAA operational definitions for El Niño and La Niña: El Niño is characterized by a positive ONI greater than or equal to +0.5ºC; La Niña is characterized by a negative ONI less than or equal to -0.5ºC. By historical standards, to be classified as a full-fledged El Niño or La Niña episode, these thresholds must be exceeded for a period of at least 5 consecutive overlapping 3-month seasons.

CPC considers El Niño or La Niña conditions to occur when the monthly Niño3.4 OISST departures meet or exceed +/- 0.5ºC along with consistent atmospheric features. These anomalies must also be forecast to persist for 3 consecutive months.

The Climate Prediction Center (CPC) is a United States federal agency and one of the National Centers for Environmental Prediction (NCEP), which are part of NOAA.

Latest Oceanic Nino Index Graph Shows
El Nino Conditions Are Prevailing At The End Of January 2024

The Table below shows the monthly SST of Nino3.4 Region and the Climate adjusted normal SST and SST anomaly from July 2021. Climate Base 1991-2020. ERSST.v5
Year  Month   Nino3.4 SST (°C)   ClimAdjust SST (°C)   Anomaly (°C)
2021    7          26.9               27.29               -0.39
2021    8          26.32              26.86               -0.53
2021    9          26.16              26.72               -0.55
2021   10          25.78              26.72               -0.94
2021   11          25.76              26.7                -0.94
2021   12          25.54              26.6                -1.06
2022    1          25.61              26.55               -0.95
2022    2          25.88              26.76               -0.89
2022    3          26.33              27.29               -0.97
2022    4          26.72              27.83               -1.11
2022    5          26.83              27.94               -1.11
2022    6          26.98              27.73               -0.75
2022    7          26.6               27.29               -0.7
2022    8          25.88              26.86               -0.97
2022    9          25.65              26.72               -1.07
2022   10          25.73              26.72               -0.99
2022   11          25.8               26.7                -0.9
2022   12          25.75              26.6                -0.86
2023    1          25.84              26.55               -0.71
2023    2          26.3               26.76               -0.46
2023    3          27.19              27.29               -0.11
2023    4          27.96              27.83                0.14
2023    5          28.4               27.94                0.46
2023    6          28.57              27.73                0.84
2023    7          28.31              27.29                1.02
2023    8          28.21              26.86                1.35
2023    9          28.32              26.72                1.6
2023   10          28.44              26.72                1.72
2023   11          28.72              26.7                 2.02
2023   12          28.63              26.6                 2.02
2024    1          28.42              26.55                1.87
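Since the ONI is a three-month running mean of anomalies like those in the last column, the table can be used to approximate the most recent seasonal values. The snippet below is a rough check using the last six anomalies from the table; it will not exactly reproduce CPC's official ONI, which is computed from full seasonal ERSST.v5 fields.

```python
# Monthly Nino3.4 anomalies (°C) from the table above, Aug 2023 - Jan 2024.
monthly_anoms = [1.35, 1.60, 1.72, 2.02, 2.02, 1.87]

# Approximate ONI: 3-month running mean of the monthly anomalies.
seasons = ["ASO", "SON", "OND", "NDJ"]
for label, i in zip(seasons, range(len(monthly_anoms) - 2)):
    oni = sum(monthly_anoms[i:i + 3]) / 3
    print(f"{label} 2023-24 ONI ≈ {oni:+.2f} °C")
# Only OND and NDJ approach +2.0 °C, consistent with the statement above that
# the 2023-24 El Niño is not expected to qualify as a "Super Strong" event.
```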

Indications and analysis from various international weather/climate agencies monitoring ENSO conditions are depicted hereunder:

Summary by: Climate Prediction Center / NCEP  Dated 4th February 2024

ENSO Alert System Status: El Niño Advisory

El Niño conditions are observed.*

Equatorial sea surface temperatures (SSTs) are above average across the central and eastern Pacific Ocean.

The tropical Pacific atmospheric anomalies are consistent with El Niño.

El Niño is expected to continue for the next several seasons, with ENSO-neutral favored during April-June 2024 (73% chance).*

Note: These statements are updated once a month (2nd Thursday of each month) in association with the ENSO Diagnostics Discussion, which can be found by clicking here.

Recent (preliminary) Southern Oscillation Index values as per The Long Paddock – Queensland Government.

The 30-day average SOI was +3.96 at the end of January 2024 and was -3.97 on 8th February 2024, as per The Long Paddock – Queensland Government, while the 90-day average SOI was -4.64 on 8th February 2024.

Southern Oscillation Index

As per BOM, Australia

The 30-day Southern Oscillation Index (SOI) for the period ending 31 January 2024 was +3.7, was +0.7 on 4th February 2024, and is moving in the negative direction once again.
Sustained negative values of the SOI below −7 typically indicate El Niño while sustained positive values above +7 typically indicate La Niña. Values between +7 and −7 generally indicate neutral conditions.

As per BOM – Australia 6th February 2024
El Niño has peaked and is declining

ENSO Outlook

Climate model outlooks suggest El Niño has peaked and is declining, indicating a return to neutral in the southern hemisphere autumn 2024. The ENSO Outlook will remain at El Niño status until this event decays, or signs of a possible La Niña appear.


WUWT Editors’ Note

On February 8th, NOAA issued a  La Niña Watch

EL NIÑO/SOUTHERN OSCILLATION (ENSO)
DIAGNOSTIC DISCUSSION

issued by
CLIMATE PREDICTION CENTER/NCEP/NWS
8 February 2024

ENSO Alert System Status: El Niño Advisory / La Niña Watch

Synopsis: A transition from El Niño to ENSO-neutral is likely by April-June 2024 (79% chance), with increasing odds of La Niña developing in June-August 2024 (55% chance).

During January 2024, above-average sea surface temperatures (SST) continued across most of the equatorial Pacific Ocean [Fig. 1]. SST anomalies weakened slightly in the eastern and east-central Pacific, as indicated by the weekly Niño index values [Fig. 2]. However, changes were more pronounced below the surface of the equatorial Pacific Ocean, with area-averaged subsurface temperature anomalies returning to near zero [Fig. 3]. Although above-average temperatures persisted in the upper 100 meters of the equatorial Pacific, below-average temperatures were widespread at greater depths [Fig. 4]. Atmospheric anomalies across the tropical Pacific also weakened during January. Low-level winds were near average over the equatorial Pacific, while upper-level wind anomalies were easterly over the east-central Pacific. Convection remained slightly enhanced near the Date Line and was close to average around Indonesia [Fig. 5]. Collectively, the coupled ocean-atmosphere system reflected a weakening El Niño.

The most recent IRI plume indicates a transition to ENSO-neutral during spring 2024, with La Niña potentially developing during summer 2024 [Fig. 6]. Even though forecasts made through the spring season tend to be less reliable, there is a historical tendency for La Niña to follow strong El Niño events. The forecast team is in agreement with the latest model guidance, with some uncertainty around the timing of transitions to ENSO-neutral and, following that, La Niña. Even as the current El Niño weakens, impacts on the United States could persist through April 2024 (see CPC seasonal outlooks for probabilities of temperature and precipitation). In summary, a transition from El Niño to ENSO-neutral is likely by April-June 2024 (79% chance), with increasing odds of La Niña developing in June-August 2024 (55% chance; [Fig. 7]).

This discussion is a consolidated effort of the National Oceanic and Atmospheric Administration (NOAA), NOAA’s National Weather Service, and their funded institutions. Oceanic and atmospheric conditions are updated weekly on the Climate Prediction Center web site (El Niño/La Niña Current Conditions and Expert Discussions). Additional perspectives and analysis are also available in an ENSO blog. A probabilistic strength forecast is available here. The next ENSO Diagnostics Discussion is scheduled for 14 March 2024.

To receive an e-mail notification when the monthly ENSO Diagnostic Discussions are released, please send an e-mail message to: ncep.list.enso-update@noaa.gov.

Climate Prediction Center
5830 University Research Court
College Park, Maryland 20740

Gavin’s Plotting Trick: Hide the Incline

From Roy Spencer, PhD.

February 1st, 2024 by Roy W. Spencer, Ph. D.

Since Gavin Schmidt appears to have dug his heels in regarding how to plot two (or more) temperature time series with different long-term warming trends on a graph, it’s time to revisit exactly why John Christy and I now (and others should) plot such time series so that their linear trend lines intersect at the beginning.

While this is sometimes referred to as a “choice of base period” or “starting point” issue, it is crucial (and not debatable) to note it is irrelevant to the calculated trends. Those trends are the single best (although imperfect) measure of the long-term warming rate discrepancies between climate models and observations, and they remain the same no matter the base period chosen.

Again, I say, the choice of base period or starting point does not change the exposed differences in temperature trends (say, in climate models versus observations). Those important statistics remain the same. 

The only reason to object to the way we plot temperature time series is to Hide The Incline* in the long-term warming discrepancies between models and observations when showing the data on graphs.

[*For those unfamiliar, in the Climategate email release, Phil Jones, then-head of the UK’s Climatic Research Unit, included the now-infamous “hide the decline” phrase in an e-mail, referring to Michael Mann’s “Nature trick” of cutting off the end of a tree-ring based temperature reconstruction (because it disagreed with temperature observations), and spliced in those observations in order to “hide the decline” in temperature exhibited by the tree ring data.]

I blogged on this issue almost eight years ago, and I just re-read that post this morning. I still stand by what I said back then (the issue isn’t complex).

Today, I thought I would provide a little background and show why our way of plotting is the most logical way. (If you are wondering, as many have asked me, why not just plot the actual temperatures, without referencing them to a base period? Well, even if we were dealing with yearly averages [no seasonal cycle, the usual reason for computing “anomalies”], you quickly discover there are biases in all of these datasets: the observations are biased because the Earth is only sparsely sampled with thermometers and everyone handles area averaging and data-void infilling differently, and the climate models all have their own individual temperature biases. These biases can easily reach 1 deg. C or more, which is large compared to computed warming trends.)

Historical Background of the Proper Way of Plotting

Years ago, I was trying to find a way to present graphical results of temperature time series that best represented the differences in warming trends. For a long time, John Christy and I were plotting time series relative to the average of the first 5 years of data (1979-1983 for the satellite data). This seemed reasonably useful, and others (e.g. Carl Mears at Remote Sensing Systems) also took up the practice and knew why it was done.

Then I thought, well, why not just plot the data relative to the first year (in our case, that was 1979 since the satellite data started in that year)? The trouble with that is there are random errors in all datasets, whether due to measurement errors and incomplete sampling in observational datasets, or internal climate variability in climate model simulations. For example, the year 1979 in a climate model simulation might (depending upon the model) have a warm El Nino going on, or a cool La Nina. If we plot each time series relative to the first year’s temperature, those random errors then impact the entire time series with an artificial vertical offset on the graph.

The same issue will exist using the average of the first five years, but to a lesser extent. So there is a trade-off: the shorter the base period (or starting point), the more the time series will be offset by short-term biases and errors in the data. But the longer the base period (up to using the entire time series as the base period), the more the difference in trends is split up as a positive discrepancy late in the period and a negative discrepancy early in the period.

I finally decided the best way to avoid such issues is to offset each time series vertically so that their linear trend lines all intersect at the beginning. This minimizes the impact of differences due to random yearly variations (since a trend is based upon all years’ data), and yet respects the fact that (as John Christy, an avid runner, told me), “every race starts at the beginning”.
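
To make the procedure concrete, here is a minimal R sketch (not the actual code behind the published charts; the series names and numbers are invented for illustration) of offsetting anomaly series so that their fitted trend lines all pass through a common point at the first year:

set.seed(42)
align_to_trend_start <- function(series_list, years) {
  lapply(series_list, function(s) {
    fit <- lm(s ~ years)                                    # fit a linear trend to the series
    trend_start <- predict(fit, newdata = data.frame(years = years[1]))
    s - as.numeric(trend_start)                             # shift so the trend line passes through zero at the start
  })
}

years <- 1979:2023
obs   <- 0.015 * (years - 1979) + rnorm(length(years), sd = 0.1)   # toy "observations", 0.15 C/decade
mods  <- 0.030 * (years - 1979) + rnorm(length(years), sd = 0.1)   # toy "model average", 0.30 C/decade
aligned <- align_to_trend_start(list(obs = obs, mods = mods), years)
# matplot(years, do.call(cbind, aligned), type = "l")   # the trend lines now diverge from a common origin

Because each series is only shifted vertically, the calculated trends are untouched; only the visual starting point changes.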

In my blog post from 2016, I presented this pair of plots to illustrate the issue in the simplest manner possible (I’ve now added the annotation on the left):

Contrary to Gavin’s assertion that we are exaggerating the difference between models and observations (by using the second plot), I say Gavin wants to deceptively “hide the incline” by advocating something like the first plot. Eight years ago, I closed my blog post with the following, which seems to be appropriate still today: “That this issue continues to be a point of contention, quite frankly, astonishes me.”

The issue seems trivial (since the trends are unaffected anyway), yet it is important. Dr. Schmidt has raised it before, and because of his criticism (I am told) Judith Curry decided not to use one of our charts in congressional testimony. Others have latched onto the criticism as some sort of evidence that John and I are trying to deceive people. In 2016, Steve McIntyre posted an analysis of Gavin’s claim that we were engaging in “trickery” and debunked it.

In fact, as the evidence above shows, it is our accusers who are engaged in “trickery” and deception by “hiding the incline”.

Climate attribution method overstates “fingerprints” of external forcing

From Climate Etc.

by Ross McKitrick

I have a new paper in the peer-reviewed journal Environmetrics discussing biases in the “optimal fingerprinting” method which climate scientists use to attribute climatic changes to greenhouse gas emissions. This is the third in my series of papers on flaws in standard fingerprinting methods: blog posts on the first two are here and here.

Climatologists use a statistical technique called Total Least Squares (TLS), also called orthogonal regression, in their fingerprinting models to fix a problem in ordinary regression methods that can lead to the influence of external forcings being understated. My new paper argues that in typical fingerprinting settings TLS overcorrects and imparts large upward biases, thus overstating the impact of GHG forcing.

While the topic touches on climatology, for the most part the details involve regression methods which is what empirical economists like me are trained to do. I teach regression in my econometrics courses and I have studied and used it all my career. I mention this because if anyone objects that I’m not a “climate scientist” my response is: you’re right, I’m an economist which is why I’m qualified to talk about this.

I have previously shown that when the optimal fingerprinting regression is misspecified by leaving out explanatory variables that should be in it, TLS is biased upwards (other authors have also proven this theoretically). In that study I noted that when anthropogenic and natural forcings (ANTH and NAT) are negatively correlated the positive TLS bias increases. My new paper focuses just on this issue since, in practice, climate model-generated ANTH and NAT forcing series are negatively correlated. I show that in this case, even if no explanatory variables have been omitted from the regression, TLS estimates of forcing coefficients are usually too large. Among other things, since TLS-estimated coefficients are plugged into carbon budget models, this will result in a carbon budget being biased too small.

Background

In 1999 climatologists Myles Allen and Simon Tett published a paper in Climate Dynamics in which they proposed a Generalized Least Squares or GLS regression model for detecting the effects of forcings on climate. The IPCC immediately embraced the Allen & Tett method and in the 2001 3rd Assessment Report hailed it as the way to show a causal link between greenhouse forcing and observed climate change. It’s been relied upon ever since by the “fingerprinting” community and the IPCC. In 2021 I published a Comment in Climate Dynamics showing that the Allen & Tett method has theoretical flaws and that the arguments supporting its claim to be a valid method were false. I provided a non-technical explainer through the Global Warming Policy Foundation website. Myles Allen made a brief reply, to which I responded and then economist Richard Tol provided further comments. The exchange is at the GWPF website. My comment was published by Climate Dynamics in summer 2021, has been accessed over 21,000 times and its Altmetric score remains in the top 1% of all scientific articles published since that date. Two and a half years later Allen and Tett have yet to submit a reply.

Note: I just saw that a paper by Chinese statisticians Hanyue Chen et al. partially responding to my critique was published by Climate Dynamics. This is weird. In fall 2021 Chen et al submitted the paper to Climate Dynamics and I was asked to provide one of the referee reports, which I did. The paper was rejected. Now it’s been published even though the handling editor confirmed it was rejected. I’ve queried Climate Dynamics to find out what’s going on and they are investigating.

One of the arguments against my critique was that the Allen and Tett paper had been superseded by Allen and Stott 2001. While that paper incorporated the same incorrect theory from Allen and Tett 1999, its refinement was to replace the GLS regression step with TLS as a solution to the problem that the climate model-generated ANTH and NAT “signals” are noisy estimates of the unobservable true signals. In a regression model if your explanatory variables have random errors in them, GLS yields coefficient estimates that tend to be biased low.

This problem is well-known in econometrics. Long before Allen and Stott 2001, econometricians had shown that a method called Instrumental Variables (IV) could remedy it and yield unbiased and consistent coefficient estimates. Allen and Stott didn’t mention IV; instead they proposed TLS and the entire climatology field simply followed their lead. But does TLS solve the problem?

No one has been able to prove that it does except under very restrictive assumptions and you can’t be sure if they hold or not. If they don’t hold, then TLS generates unreliable results, which is why researchers in other fields don’t like it. The problem is that TLS requires more information than the data set contains. This requires the researcher to make arbitrary assumptions to reduce the number of parameters needing to be estimated. The most common assumption is that the error variances are the same on the dependent and explanatory variables alike.

The typical application involves regressing a dependent “Y” variable on a set of explanatory “X” variables, and in the errors-in-variables case the X’s themselves are unavailable. Instead we observe “W’s,” which are noisy approximations to the X’s. Suppose we assume the variances of the errors on the X’s are all the same and equal to S times the variance of the errors on the Y variable. If the true ratio really is 1, and we happen to assume S=1, TLS can in some circumstances yield unbiased coefficients. But in general we don’t know whether S=1, and if that assumption doesn’t hold, TLS can go completely astray.
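
As an illustration only (a sketch with a single noisy regressor and invented parameter values, not the two correlated signals used in the paper), the following R simulation shows the pattern being described: OLS on a noisy proxy is biased toward zero, while TLS run under the conventional S=1 assumption lands on the wrong side of the truth whenever the true error-variance ratio differs from the assumed one.

set.seed(1)
# Deming/TLS slope for one regressor, assuming var(error in w) = S * var(error in y)
tls_slope <- function(y, w, S = 1) {
  syy <- var(y); sww <- var(w); syw <- cov(y, w)
  ((syy - sww / S) + sqrt((syy - sww / S)^2 + 4 * syw^2 / S)) / (2 * syw)
}

beta_true <- 1          # true coefficient linking the signal to the response
S_true    <- 0.5        # true error-variance ratio (unknown to the analyst)
n <- 200; reps <- 2000
ols <- tls <- numeric(reps)

for (i in seq_len(reps)) {
  x <- rnorm(n)                          # unobserved true signal
  y <- beta_true * x + rnorm(n)          # response, with unit-variance error
  w <- x + rnorm(n, sd = sqrt(S_true))   # noisy observed signal actually used in the regression
  ols[i] <- coef(lm(y ~ w))[2]
  tls[i] <- tls_slope(y, w, S = 1)       # TLS run with the conventional assumption S = 1
}
c(OLS_mean = mean(ols), TLS_mean = mean(tls))

With these invented settings the true coefficient is 1; OLS comes out at roughly 0.67 (attenuated) and TLS at roughly 1.3 (overcorrected), so assuming the wrong S simply trades a downward bias for an upward one.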

In the limited literature discussing properties of TLS estimators it is usually assumed that the explanatory variables are uncorrelated. As part of my work on the fingerprinting method I obtained a set of model-generated climate signals from CMIP5 models and I noticed that the ANTH and NAT signals are always negatively correlated (the average correlation coefficient is -0.6). I also noticed that the signals don’t have the same variances (which is a separate issue from the error terms not having the same variances).

The experiment

In my new paper I set up an artificial fingerprinting experiment in which I know the correct answer in advance and I can vary several parameters which affect the outcome: the error variance ratio S; the correlation between the W’s; and the relative variances of the X’s. I ran repeated experiments based in turn on the assumption that the true value of beta (the coefficient connecting GHG’s to observed climate change) is 0 or 1. Then I measured the biases that arise when using TLS and GLS (GLS in this case is equivalent to OLS, or ordinary least squares).

These graphs show the coefficient biases using OLS when the experiment is run on simulated X’s with average relative variances (see the paper for versions where the relative variances are lower or higher).

The left panel is the case when the true value of beta = 0 (which implies no influence of GHGs on climate) and the right is the case when true beta=1 (which implies the GHG influence is “detected” and the climate models are consistent with observations). The lines aren’t the same length because not all parameter combinations are theoretically possible. The horizontal axis measures the correlation between the observed signals, which in the data I’ve seen is always less than -0.2. The vertical axis measures the bias in the fingerprinting coefficient estimate. The colour coding refers to the assumed value of S. Blue is S=0, which is the situation in which the X’s are measured without error so OLS is unbiased, which is why the blue line tracks the horizontal (zero bias) axis. From black to grey corresponds to S rising from 0 to just under 1, and red corresponds to S=1. Yellow and green correspond to S >1.

As you can see, if true beta=0, OLS is unbiased; but if beta = 1 or any other positive value, OLS is biased downward as expected. However the bias goes to zero as S goes to 0. In practice, you can shrink S by using averages of multiple ensemble runs.

Here are the biases for TLS in the same experiments:

There are some notable differences. First, the biases are usually large and positive, and they don’t necessarily go away even if S=0 (or S=1). If the true value of beta =1, then there are cases in which the TLS coefficient is unbiased. But how would you know if you are in that situation? You’d need to know what S is, and what the true value of beta is. But of course you don’t (if you did, you wouldn’t need to run the regression!)

What this means is that if an optimal fingerprinting regression yields a large positive coefficient on the ANTH signal this might mean GHG’s affect the climate, or it might mean that they don’t (the true value of beta=0) and TLS is simply biased. The researcher cannot tell which is the case just by looking at the regression results. In the paper I explain some diagnostics that help indicate if TLS can be used, but ultimately relying on TLS requires assuming you are in a situation in which TLS is reliable.

The results are particularly interesting when the true value of beta=0. A fingerprinting, or “signal detection” test starts by assuming beta=0 then constructing a t-statistic using the estimated coefficients. OLS and GLS are fine for this since if beta=0 the coefficient estimates are unbiased. But if beta=0 a t-statistic constructed using the TLS coefficient can be severely biased. The only cases in which TLS is reliably unbiased occur when beta is not zero. But you can’t run a test of beta=0 that depends on the assumption that beta is not zero. Any such test is spurious and meaningless.

Which means that the past 20 years’ worth of “signal detection” claims are likely meaningless unless steps were taken in the original articles to prove the suitability of TLS or to verify its results with another, unbiased estimator.

I was unsuccessful in getting this paper published in the two climate science journals to which I submitted it. In both cases the point on which the paper was rejected was a (climatologist) referee insisting that S is known in fingerprinting applications and always equals 1/√n, where n is the number of runs in an ensemble mean. But S only takes that value if, for each ensemble member, S is assumed to equal 1. One reviewer conceded the possibility that S might be unknown but pointed out that it’s long been known TLS is unreliable in that case and I haven’t provided a solution to the problem.

In my submission to Environmetrics I provided the referee comments that had led to its rejection in climate journals and explained how I expanded the text to state why it is not appropriate to assume S=1. I also asked that at least one reviewer be a statistician, and as it turned out both were. One of them, after noting that statisticians and econometricians don’t like TLS, added:

“it seems to me that the target audience of the paper are practitioners using TLS quite acritically for climatological applications. How large is this community and how influential are conclusions drawn on the basis of TLS, say in the scientific debate concerning attribution?”

In my reply I did my best to explain its influence on the climatology field. I didn’t add, but could have, that 20 years’ worth of applications of TLS are ultimately what brought 100,000 bigwigs to Dubai for COP28 to demand the phaseout of the world’s best energy sources based on estimates of the role of anthropogenic forcings on the climate that are likely heavily overstated. Based on the political impact and economic consequences of its application, TLS is one of the most influential statistical methodologies in the world, despite experts viewing it as highly unreliable compared to readily available alternatives like IV.

Another reviewer said:

“TLS seems to generate always poor performances compared to the OLS. Nonetheless, TLS seems to be the ‘standard’ in fingerprint applications… why is the TLS so popular in physics-related applications?”

Good question! My guess is because it keeps generating answers that climatologists like and they have no incentive to come to terms with its weaknesses. But you don’t have to step far outside climatology to find genuine bewilderment that people use it instead of IV.

Conclusion

For more than 20 years climate scientists—virtually alone among scientific disciplines—have used TLS to estimate anthropogenic GHG signal coefficients despite its tendency to be unreliable unless some strong assumptions hold that in practice are unlikely to be true. Under conditions which easily arise in optimal fingerprinting, TLS yields estimates with large positive biases. Thus any study that has used TLS for optimal fingerprinting without verifying that it is appropriate in the specific data context has likely overstated the result.

In my paper I discuss how a researcher might go about trying to figure out whether TLS is justified in a specific application, but it’s not always possible. In many cases it would be better to use OLS even though it’s known to be biased downward. The problem is that TLS typically has even bigger biases in the opposite direction and there is no sure way of knowing how bad they are. These biases carry over to the topic of “carbon budgets” which are now being cited by courts in climate litigation including here in Canada. TLS-derived signal coefficients yield systematically underestimated carbon budgets.

The IV estimation method has been known at least since the 1960s to be asymptotically unbiased in the errors-in-variables case, yet climatologists don’t use it. So the predictable next question is why haven’t I done a fingerprinting regression using IV methods? I have, but it will be a while before I get the results written up and in the meantime the technique is widely known so anyone who wants to can try it and see what happens.

Keep Your Head, Others are Losing Theirs Over Climate

From Science Matters

By Ron Clutz

John Stossel’s interview with Bjorn Lomborg is featured in his article at Reason, “The Media’s Misleading Fearmongering Over Climate Change.” Excerpts in italics with my bolds and added images.

“Over the last 20 years, because of temperature rises, we have seen about 116,000 more people die from heat. But 283,000 fewer people die from cold.”

United States Special Presidential Envoy for Climate John Kerry says it will take trillions of dollars to “solve” climate change. Then he says, “There is not enough money in any country in the world to actually solve this problem.”

Yes, they are projecting more than US$100 trillion.

Kerry has little understanding of money or how it’s created. He’s a multimillionaire because he married a rich woman. Now he wants to take more of your money to pretend to affect climate change.

Bjorn Lomborg points out that there are better things society should spend money on.

Lomborg acknowledges that a warmer climate brings problems. “As temperatures get higher, sea water, like everything else, expands. So we’re going to maybe see three feet of sea level rise. Then they say, ‘So everybody who lives within three feet of sea level, they’ll have to move!’ Well, no. If you actually look at what people do, they built dikes and so they don’t have to move.”

Rotterdam Adaptation Policy–Ninety years thriving behind dikes and dams.

People in Holland did that years ago. A third of the Netherlands is below sea level. In some areas, it’s 22 feet below. Yet the country thrives. That’s the way to deal with climate change: adjust to it.

“Fewer people are going to get flooded every year, despite the fact that you have much higher sea level rise. The total cost for Holland over the last half-century is about $10 billion,” says Lomborg. “Not nothing, but very little for an advanced economy over 50 years.”

For saying things like that, Lomborg is labeled “the devil.”

“The problem here is unmitigated scaremongering,” he replies. “A new survey shows that 60 percent of all people in rich countries now believe it’s likely or very likely that unmitigated climate change will lead to the end of mankind. This is what you get when you have constant fearmongering in the media.”

Some people now say they will not have children because they’re convinced that climate change will destroy the world. Lomborg points out how counterproductive that would be: “We need your kids to make sure the future is better.”

He acknowledges that climate warming will kill people.

“As temperatures go up, we’re likely to see more people die from heat. That’s absolutely true. You hear this all the time. But what is underreported is the fact that nine times as many people die from cold…. As temperatures go up, you’re going to see fewer people die from cold. Over the last 20 years, because of temperature rises, we have seen about 116,000 more people die from heat. But 283,000 fewer people die from cold.”

A 2015 study by 22 scientists from around the world found that cold kills over 17 times more people than heat. Source: The Lancet

That’s rarely reported in the news.

When the media doesn’t fret over deaths from heat,
they grab at other possible threats.

CNN claims, “Climate Change is Fueling Extremism.”

The BBC says, “A Shifting Climate is Catalysing Infectious Disease.”

U.S. News and World Report says, “Climate Change will Harm Children’s Mental Health.”

Lomborg replies, “It’s very, very easy to make this argument that everything is caused by climate change if you don’t have the full picture.”

He points out that we rarely hear about positive effects of climate change, like global greening.

Spatial pattern of trends in Gross Primary Production (1982–2015). Source: Sun et al. 2018.

“That’s good! We get more green stuff on the planet. My argument is not that climate change is great or overall positive. It’s simply that, just like every other thing, it has pluses and minuses…. Only reporting on the minuses, and only emphasizing worst-case outcomes, is not a good way to inform people.”

Synopsis of Lomborg’s Policy Recommendation (excerpted transcription)

If you’re a politician and you look at ten different problems, your natural inclination is to say, “Let’s give 1/10 to each one of them.” And economists would tend to say, “No, let’s give all of the money to the most efficient problem first and then to the second most efficient problem, and so on.” I’m simply suggesting there’s a way that we could do much better with much less.

Of course if you feel very strongly about your particular area, when I come and say, “Actually, this is not a very efficient use of resources,” I get why people get upset. But for our collective good, for all the stuff that we do on the planet, we actually need to consider carefully where we spend money well, compared to where we just spend money and feel virtuous about ourselves.

If we spend way too much money ineffectively on climate, not only
are we not fixing climate, but we’re also wasting an enormous amount
of money that could have been spent on all these other things.

I’m simply trying to make that simple point, and I think most people kind of get that. Remember, electricity is about a fifth of our total energy consumption. So all everybody’s talking about is the electricity, which is the easiest thing to switch over. But we know very, very little about how we’re going to deal with the other 4/5. This is energy that we use on things that are very, very hard to replace: making the fertilizer that keeps 4 billion people alive, steel, cement, industrial processes. Most of the heating we use comes from fossil fuels; most transportation, that’s fossil fuels.

Consider how incredibly extreme it would be if the U.S. went entirely net zero today and stayed that way for the rest of the century. First of all, you would not be able to feed everyone in the U.S. The whole economy would break down. You wouldn’t know how to get transportation. A lot of people would freeze. Some people would fry. There would be lots and lots of problems. But even if you managed to do it, the net impact, if you run it through the U.N. climate model, is that you would reduce temperatures by the end of the century by 0.3 degrees Fahrenheit. We would almost not be able to measure it by the end of the century. It would have virtually no impact.

Look, again, we’re rich, and so a lot of people feel like you can spend money on many different things. And that’s true. I’m making the argument that for fairly little money, we could do amazing good. If we spent $35 billion, not a trillion dollars, just $35 billion, which is not nothing. Neither you nor I have that amount of money. But, you know, in the big scheme of things, this is a rounding error. $35 billion could save 4.2 million lives in the poor part of the world, each and every year, and make the poor world $1.1 trillion richer.

I think we have a moral responsibility to remember that there are lots and lots of people, about 6 billion of them, who don’t have this luxury of being able to think 100 years ahead and worry about a fraction of a degree, who just want to make sure that their kids are safe.
And so, the next money we spend should probably be on these very simple and cheap policies.

Testing A Constructal Climate Model

The past climate is being rewritten so fast that we literally don’t know what will happen yesterday …

From Watts Up With That?

Guest Post by Willis Eschenbach

ABSTRACT

A simple constructal model of the operation of the climate system was created by Dr. Adrian Bejan and several others. It posits that the climate system can be modeled very accurately by considering the climate as a giant heat engine turning solar power into mechanical motion. Further, it says that following the constructal law, the heat engine constantly evolves to maximize the heat flow from the tropics to the poles. In this analysis, I examine the inner workings of the model, implement a couple of improvements, and test it against the CERES satellite dataset. Sorry, no spoilers.

CONSTRUCTAL LAW

The Constructal Law, formulated by Professor Adrian Bejan in 1996, is a fundamental principle in physics and engineering that describes the natural tendency of all flow systems, whether inanimate or animate, to evolve and organize in a way that maximizes the flow of matter, energy, or information. This law recognizes that patterns and structures in nature, such as river networks, tree branches, and biological organisms, emerge and evolve to enhance their efficiency in the movement of resources. The constructal law explains things like the endlessly meandering nature of rivers seen in the image above. My previous posts on the Constructal Law are here.

In essence, the Constructal Law states that the design and development of flow systems, whether the branching of blood vessels in the human body, the structure of transportation networks, or even the layout of technology and information networks, are governed by the imperative to reduce flow resistance and facilitate the transfer of resources.

The Constructal Law, as applied to climate, says that natural climate systems, such as atmospheric and oceanic circulation patterns, evolve and organize in a way that maximizes the efficiency of heat and energy flow on Earth. This principle emphasizes that climate systems, like other flow systems, tend to develop structures and patterns that reduce flow resistance and promote the transfer of heat and energy.

In a series of three papers, “Thermodynamic optimization of global circulation and climate”, “Constructal theory of global circulation and climate”, and “Climate change, in the framework of the constructal law”, Adrian Bejan and his co-authors show that the climate can be modeled as a heat engine. Following the Constructal Law, this climate heat engine evolves to maximize its mechanical power output. The authors say:

“In conclusion, the maximization of the mechanical power output is equivalent to the maximization of the heat current from the hot region to the cold region.”

I got to re-reading the final of those three papers the other day, and I realized that I could set up their model on my computer. Let me start with an overview of their model.

Figure 1. The conceptual model.

The top part shows the warm (tropical) and cold (poleward) areas of the global climate heat engine. These areas are marked AH and AL, for “Area High” and “Area Low” temperatures. They each have a corresponding temperature TH (temperature high) and TL (temperature low).

The lower part of the diagram shows the various heat currents. The far left downward pointing arrow is heat from the sun to the hot zone. The next arrow, pointing up, is heat radiated from the hot zone to space.

Then we have the horizontal arrow “q”, the heat current from the hot zone to the cold zone.

Finally, in the cold zone on the right, we have a downward-pointing solar arrow showing heat from the sun to the cold zone, and an upward-pointing radiation arrow showing heat radiated to space.

In short, the hot zone gets heat from the sun. Some is radiated back to space. The rest, the flow “q”, is transported to the cold zone. There, the flow “q” gets radiated back to space along with the heat that the cold zone gets from the sun.

And most important, the Constructal Law says that the system will constantly reorganize itself to maximize the heat flow “q”.

Next, here’s the math of the model, from the third of the papers linked above. Recall from Figure 1 above that “x” is the area fraction, the fraction of the globe occupied by the hot zone.

Daunting … so let me translate for those who like math. For those who don’t, no worries—just skip down to where it says “THEIR MODEL RESULTS“.

And for the three folks still reading this section, ignore equation (26) for now. Next, in the above set of equations, rho (ρ) is the albedo, and gamma (γ) is the “greenhouse factor”, the fraction of upwelling surface longwave radiation that is absorbed by the atmosphere. And at steady-state, the left-hand sides of equations (23) and (24) are zero—there is no change of temperature with time.

With those as prologue, the first equation (23) describes the hot zone. It says that the hot zone gets heat from the sun. Some is radiated back to space. The rest, the flow “q”, is transported to the cold zone. So “q” is equal to hot zone solar heat input minus hot zone radiation to space. In short, it’s just a mathematical description of the bottom left part of Figure 1 above. Simple.

The second equation (24) describes the cold zone. It says the cold zone gets heat from the sun, plus the flow “q” from the hot zone, and radiates it all to space. So “q” is equal to the cold zone output to space minus the cold zone solar input. This equation is a mathematical description of the bottom right-hand part of Figure 1 above.

The third equation (25) says that the flow “q” is equal to some constant “C” times the 3/2 power of the difference in temperature between the hot and cold zones.

The final equation (27) specifies that “q” is maximized.
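
The equation image from the original post does not carry over here, so the following is a reconstruction of the nondimensional model, pieced together from the verbal description above and the R code in the Appendix (it is not a verbatim transcription of the paper’s equations). Here ρ is albedo, γ the greenhouse factor, the subscripts H and L mark the hot and cold zones, and all temperatures are in the paper’s nondimensional form:

$$q \;=\; x\,\bar{s}_H(x)\,(1-\rho_H) \;-\; x\,(1-\gamma_H)\,T_H^{4} \qquad (23)$$
$$q \;=\; (1-x)\,(1-\gamma_L)\,T_L^{4} \;-\; (1-x)\,\bar{s}_L(x)\,(1-\rho_L) \qquad (24)$$
$$q \;=\; C\,(T_H - T_L)^{3/2} \qquad (25)$$
$$\partial q / \partial x \;=\; 0 \quad \text{(q maximized over x)} \qquad (27)$$

where $\bar{s}_H(x)$ and $\bar{s}_L(x)$ are the geometric factors giving the average solar input to each zone (the arcsine terms in the Appendix code).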

There are four unknowns in the equations—temperatures of the hot and cold zones “TH” and “TL“, the heat flow “q”, and the area fraction “x”. Now, my math-fu is not strong enough to solve those four equations to determine the four unknowns. And unfortunately, the authors of the paper didn’t include the solution. Grrrr.

However, I’m a determined fellow. After some reflection, I realized that I could use a double optimization process to get the answers.

I wanted to determine the value of x (the size of the hot zone) which gives the largest value for “q”, the heat flow from the hot zone to the cold zone. But I only had three equations with four unknowns.

So I divided the problem up by assuming that I knew what “x” was. Using that, I could then use an optimization program to give me the values of TH, TL, and q for any given value of x.

And with that, I could use a second optimization program to give me the value of x that maximized q, the heat flow from the hot zone to the cold zone. See the Appendix below for the R code.

THEIR MODEL RESULTS

Here is their report of the first of their calculations. Using their same numbers, I get the same results that they show below.

Using their values, I was able to reproduce their results very accurately.

PROBLEMS WITH THEIR MODEL

However, there are a couple of issues with their values. First, as they note, their value for “x” puts the limits of the hot zone at about 57°N/S. But that’s not the case in the real world. Here’s the real-world data regarding the heat flow “q”.

Figure 2. How much heat is moved from the tropics to the poles (positive values), and how much heat is absorbed in the polar regions (negative values). The hot zone is the red to yellow part bordered by the black/white lines. The cold zone is shown in green to blue, outside the black/white lines.

You can see the similarity of this graphic with the model shown in Figure 1 above. However, in the real world, the hot zone fraction “x” is about 0.55 of the total surface. This corresponds with a hot zone extending to about 34°N/S. So that was the first problem—the hot zone extends to about 34°N/S, not 57°N/S.
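
For anyone wanting to check the conversion between area fraction and latitude: for a band symmetric about the equator, the fraction of the sphere’s surface between ±lat is sin(lat), so lat = asin(x). This is what the xtolat helper in the Appendix computes; the two values below are illustrative.

xtolat <- function(x) asin(x) * 180 / pi   # hot-zone area fraction -> bounding latitude in degrees
xtolat(0.556)   # about 34 degrees N/S, the real-world value
xtolat(0.84)    # about 57 degrees N/S, the area fraction implied by their result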

The second problem is that their equation gives far too cold a result for the cold zone. They say it averages 258.4K, which is -14.75°C. But in the real world, the cold zone poleward of 57°N/S actually has an average temperature of about –3°C, far from the minus 14°C they claim.

IMPROVING THEIR MODEL

So of course, being the eternal tinkerer, I had to see whether I could improve their model. The first thing I noticed was that they are using the same albedo and the same greenhouse factor for both the cold and hot zones. But in the real world, both the albedo and the greenhouse factor are very different for the two areas. As a result, their model is giving inaccurate results.

Using individual albedo and greenhouse factors for the two areas made the model far more accurate. But there was still a problem. The hot temperatures it calculated were too hot and the cold temperatures were too cold to match the real world. Looking at the equations, I realized that this inter-temperature distance is controlled by the constant “C” in Equation (25). This is the “conductance”, a measure of how much heat flow is generated by a given temperature difference between the hot and cold zones. The value they were using for “C” was far too small, which meant it required a much greater temperature difference to get the same flow, resulting in a hot zone that’s too hot and a cold zone that’s too cold.

Once the factor “C” was increased, the results looked very good.

GROUND-TRUTHING THE MODEL

With that model up and running on my computer, I figured that I could test whether in fact, the climate system actually does operate as a gigantic heat engine that is continually evolving to maximize the tropical-polar heat flow. Here was my plan.

The constructal model says that given the albedo and greenhouse factors, for each value of “x” (the area of the hot zone) there will be a preferred temperature for the hot and cold zones. Further, the model says that the average final temperatures will be the ones that maximize “q”, the heat flow from the hot zone to the cold zone. I realized we could test those claims using the CERES data.

For each year, the average top of atmosphere net radiation CERES data gives us the observed value of “x” in the constructal model. As mentioned above, x is the fraction of the globe that is exporting heat on average. The CERES data also gives us the information needed to calculate rho (ρ), which is the albedo, and gamma (γ) which is the “greenhouse factor”.

The model says that if we know the albedo ρ, the greenhouse factor γ, and the hot zone area x, given those physical constraints the resulting hot and cold temperatures will be the ones that maximize the heat flow “q” from the hot zone to the cold zone.

Here is the performance of the constructal model. Recall that it has only one tuned parameter, C, that regulates how easily the heat flows from the hot zone to the cold zone. I’ll get back in a bit to why I think their value for C (.181) is far too low. In the meantime, these are the actual (blue/cyan) and modeled (red/orange) temperatures for the hot and cold zones of the planet.

Figure 3. Modeled and actual temperatures of the world’s hot and cold zones

I found this result to be most encouraging. Those model temperatures are calculated based solely on maximizing the heat “q” flowing from the hot zone to the cold zone, subject to the physical constraints of the albedo and the greenhouse factor. And although the conductance C is tuned, all that tunes is the temperature difference between the hot and cold zones. It does not tune the temperatures themselves. There was no guarantee that tuning the conductance would match the absolute temperatures of the hot and cold zones … but in the event, the match is excellent. I would say that that is very convincing evidence that the constructal model accurately portrays how the climate flow system actually works.

A SECOND TEST

But wait, as they say on TV, there’s more. Here are closeups of the actual and modeled variations in the yearly average temperatures of the hot and cold zones.

Figure 4. Modeled and actual annual average temperatures of the world’s hot and cold zones.

Not perfect, but not bad either. So not only does the constructal model give good long-term average temperatures, it also does a decent job of replicating the year-by-year variations in temperature.

And it’s doing all that using nothing more than the hot zone area “x”, the albedo “rho”, and the greenhouse factor “gamma” to calculate the temperatures that maximize “q”.

That’s very clear evidence that in the real world, various physical processes constantly evolve and act to increase the flow of heat from the tropics to the polar regions.

A FINAL TEST

Further evidence that the model is an accurate representation of how the climate heat engine really works is visible in both the size and the stability of the area of the hot zone. The model calculates the average of x, the hot zone fraction of the surface, as being 0.564. The actual CERES 22-year average value for x is 0.556. That’s less than a hundredth difference. Once again, the model is accurate.

Regarding stability, remember that x, the hot zone area fraction, is calculated by the model as the hot zone area that maximizes the heat flow “q”. Bear in mind that the hot zone fraction could vary from ~0.1 to ~0.9. And there’s no reason to assume ex-ante that it would remain stable over time.

However, under the constructal model, since the underlying constraints (annual average albedo and greenhouse fraction) are relatively stable we’d expect the hot zone fraction “x” to be pretty stable as well. In any case, here’s the actual record of the CERES data for “x”, the hot area fraction, along with the constructal model output of the same variable.

Figure 5. The “x” fraction, the amount of the earth’s surface that makes up the hot zone.

Clearly, the model is doing an excellent job of representing the real world.

In Figure 5, as in Figs. 3 and 4 above, it’s important to remember that the output (e.g. the modeled x fraction in Fig. 5 above) is not calculated directly from the input. In Figure 5, for example, the x fraction shown in red is not directly calculated from the albedo and greenhouse fraction figures.

Instead, it is the result of a maximization procedure. The x fraction shown in red in Figure 5 is the value of x that, given the physical constraints of albedo and greenhouse fraction, gives the greatest flow “q” from the hot zone to the cold zone.

TEMPERATURES

For temperatures, I’ve used the CERES surface upwelling longwave data converted using the Stefan-Boltzmann constant and monthly gridded emissivity values. I’ve checked the results and they are extremely similar to both the Berkeley Earth and the HadCRUT datasets. I use it because it is energy-balanced with the rest of the CERES energy flows.
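
For reference, that conversion is just the Stefan-Boltzmann relation LW = εσT⁴ inverted for T. A minimal sketch with an illustrative flux and emissivity (not the actual CERES processing):

sigma <- 5.670374e-8                                        # Stefan-Boltzmann constant, W m^-2 K^-4
lw_to_temp <- function(lw_up, emissivity) (lw_up / (emissivity * sigma))^0.25
lw_to_temp(390, 0.98)                                       # roughly 289 K for a typical surface flux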

CONDUCTANCE

I mentioned above that I’d explain why I think their value for “C”, the “conductance”, is too low. This conductance is a measure of how much heat flows between the two zones for some given temperature difference between the zones. In their model, they’ve modeled the heat transport via the atmosphere. And they’ve modeled the atmospheric heat transport as being driven by the buoyancy of the warmer, lighter tropical air.

And that is good as far as it goes. But it leaves out a couple of things. One is a main power source driving the Hadley cell circulation—the perennial line of thunderstorms along the inter-tropical convergence zone (ITCZ). These drive air vertically from the surface up to the upper troposphere, and occasionally even into the stratosphere. These thunderstorms turbocharge the Hadley cell circulation, allowing it to move much more heat polewards than if it were driven solely by the general tropical-extratropical temperature differences as the authors’ analysis assumes. Here’s a map of where the thunderstorms live.

Figure 6. The altitude of the cloud tops, day/night. High altitude cloud tops are the sign of the tropical thunderstorms driving deep tropical convection. The Inter-Tropical Convergence Zone (ITCZ), where the two atmospheric hemispheres converge, is marked by the band of thunderstorms around the world at 5°-10° north of the equator.

The second reason that I think their conductance value is too small is that a large amount of heat is physically moved polewards by the ocean currents. The Agulhas Current in the Indian Ocean and the Gulf Stream in the Atlantic Ocean are constantly transporting warm tropical waters polewards.

In the Pacific, the El Nino/La Nina pumping action periodically strips off the warm top layer of vast areas of the tropical Pacific Ocean and moves that warm water first eastwards and then towards both poles.

Because their model doesn’t include either thunderstorms or ocean currents, their estimate of the conductance is an order of magnitude too small.

CLIMATE SENSITIVITY

This constructal model points out some interesting things about climate sensitivity.

First, sensitivity is a function of changes in rho (albedo) and gamma (greenhouse fraction). But not a direct function. It is the result of physical processes that maximize “q” given the constraints of rho and gamma.

Next, the sensitivity is slightly different depending on whether the changes in albedo and greenhouse fraction are occurring in the hot zone, the cold zone, or both.

Finally, assuming that there is a uniform pole-to-pole increase of 3.7 W/m2 in downwelling radiation from changes in either albedo or greenhouse fraction, the constructal model shows a temperature increase of ~1.1°C. (3.7 W/m2 is the amount of radiation increase predicted to occur from a doubling of CO2.)

CONCLUSIONS

The CERES data shows that the constructal model of the climate system is very consistent with real-world observations. This model views the climate system as a heat engine that, following the constructal law, constantly acts and evolves to maximize the flow of heat from the warm zone of the planet to the cold zone.

This simple three-equation constructal climate model, given only information about the earth’s hot zone area and the albedo and greenhouse fractions in the earth’s hot and cold zones, is able to calculate the absolute temperatures of the earth’s hot and cold zones to within a degree or so … a result that I found quite surprising.

Anyhow, that’s what I did with my weekend. And meanwhile, back in the real world, the past climate is being rewritten so fast that we literally don’t know what will happen yesterday …

Best to all,

w.

APPENDIX

Here is the R code for the optimization programs. Read the linked paper for the full description of their method.

First, the inner optimization program that calculates TH, TL, and q when given x.

# Inner optimization: given a hot-zone area fraction x (passed in as par2), solve the
# three model equations for TH, TL, and q by minimizing the sum of squared residuals,
# then return q. The zone albedos (rhoh, rhoc) and greenhouse factors (gammah, gammac)
# are taken from the global environment; 1.8 is the tuned conductance C. Temperatures
# are nondimensional (scaled by 392.8 K, see the support functions below).
maxq = function(par2){
  theansmax = function(par){
    th = par[1]
    tl = par[2]
    q  = par[3]
    # hot-zone balance: solar input minus radiation to space minus export q
    v1 = x*((asin(x)+x*sqrt(1-x^2))/(2*pi*x))*(1-rhoh) -
         x*(1-gammah)*th^4 - q
    # cold-zone balance: solar input minus radiation to space plus import q
    v2 = (1-x)*((pi/2-asin(x)-x*sqrt(1-x^2))/(2*pi*(1-x)))*(1-rhoc) -
         (1-x)*(1-gammac)*tl^4 + q
    # heat-flow equation: q = C * (TH - TL)^(3/2), with C = 1.8
    v3 = 1.8*(th-tl)^(3/2) - q
    sum(v1^2 + v2^2 + v3^2)
  }
  par = c(.7, .6, .1)        # starting guesses for th, tl, q
  x = par2
  (par = optim(par, theansmax)$par)
  par[3]                     # return q for this value of x
}

Next, the outer optimization program that calls the inner program.

par2 = 0.5   # starting value added so the call runs as listed (not in the original; Brent uses only lower/upper)
(bestx = optim(par2, maxq,
  control = list(fnscale = -1, reltol = 1e-10),   # fnscale = -1 turns optim into a maximizer
  method = "Brent",
  lower = .001, upper = .999)$par)

Next, some support functions:

surfaream = 5.100656e+14                              # earth surface area in square meters
qtoq = function(q) q*((5.67e-8)*392.8^4*surfaream)    # nondimensional q -> watts, via the 392.8 K scaling temperature
tunscale = function(tscale) tscale*392.8              # nondimensional temperature -> kelvin
degrees = function(rad) rad*180/pi                    # radians -> degrees (helper assumed; not in the original listing)
xtolat = function(x) degrees(asin(x))                 # hot-zone area fraction -> bounding latitude in degrees

And to get the final output:

# Note: this block assumes theansmax and par are visible in the workspace
# (e.g. after re-running the body of maxq line by line with par2 = bestx).
ktoc = function(k) k - 273.15             # kelvin -> degrees C (helper assumed; not in the original listing)
(x = bestx)
(nupar = optim(par, theansmax)$par)       # re-solve for th, tl, q at the optimal x
q = nupar[3]
qtoq(nupar[3])                            # poleward heat flow in watts
xtolat(x)                                 # latitude of the hot-zone boundary
(th = ktoc(tunscale(nupar[1])))           # hot-zone average temperature, deg C
(tl = ktoc(tunscale(nupar[2])))           # cold-zone average temperature, deg C

Supercomputer climate model absurdity: ‘extreme global warming could eventually wipe out humans’

From Tallbloke’s Talkshop

September 27, 2023 by oldbrew 

The illogical conclusion of tail-wagging-dog climate theories fed into models based on them, with a side order of volcanoes. In any case a lot happened to Earth in the last 250 million years, including periods when CO2 was much higher than today – so whatever comes out of a supercomputer, natural evolution will continue.
– – –
Extreme global warming will likely wipe all mammals – including humans – off the face of the Earth in 250 million years, according to a new scientific study. Sky News reporting.

Temperatures could spiral to 70C (158F) and transform the planet into a “hostile environment devoid of food and water”, the research warns.

The planet would heat up to such an extent that many mammals would be unable to survive – and the Earth’s continents would merge to form one hot, dry, uninhabitable supercontinent.

The apocalyptic projections are from the first-ever supercomputer climate models.

They suggest the sun would become brighter, with tectonic movements unleashing huge amounts of carbon dioxide (CO2) into the air through volcanic eruptions.

The Earth would become so hot that only 8% to 16% of the projected supercontinent would be habitable.

Mammals, including humans, are better adapted to living in the cold, and are less able to deal with extreme heat.

‘Humans would expire’

The study’s lead author, Dr Alexander Farnsworth of the University of Bristol, said: “The newly emerged supercontinent would effectively create a triple whammy, comprising the continentality effect, hotter sun and more CO2 in the atmosphere, of increasing heat for much of the planet.

“The result is a mostly hostile environment devoid of food and water sources for mammals.

“Widespread temperatures of between 40C to 50C, and even greater daily extremes, compounded by high levels of humidity would ultimately seal our fate.

“Humans – along with many other species – would expire due to their inability to shed this heat through sweat, cooling their bodies.”

The authors of the research believe CO2 levels could rise from around 400 parts per million (ppm) today to more than 600 ppm by the time of the formation of the supercontinent – named Pangea Ultima.

This assumes, however, that humans stop burning fossil fuels – “otherwise we will see those numbers much, much sooner”, warned Professor Benjamin Mills, who calculated the future CO2 projections for the study.

Full report here.

Sun and Water Drive Climate, Not Us

From Science Matters

By Ron Clutz

One year time lapse of precipitable water (amount of water in the atmosphere) from Jan 1, 2016 to Dec 31, 2016, as modeled by the GFS. The Pacific ocean rotates into view just as the tropical cyclone season picks up steam.

Lately the media refer increasingly to how important the water cycle is in our climate system. Unfortunately, as usual, the headlines confuse cause and effect. For example, “Climate change has a dramatic impact on the global water cycle, say researchers,” from phys.org. How perverse to position climate change as an agent rather than the effect of water fluxes in the ocean and atmosphere. The headline misleads entirely (written by scientists or journos?), as the beginning text shows (in italics with my bolds).

For Christoph Schär, ETH Zurich’s Professor of Climate and Water Cycle, “global warming” is not quite accurate when it comes to describing the driver of climate change. “A better term would be ‘climate humidification,’” he explains. “Most of the solar energy that reaches the Earth serves to evaporate water and thereby drives the hydrological cycle.” Properly accounting for the implications of this is the most challenging task of all for climate modelers.

In order to build a global climate model, grid points spaced around 50 to 100 kilometers apart are used. This scale is too coarse to map small-scale, local thunderstorm cells. Yet it is precisely these thunderstorm cells—and where they occur—that drive atmospheric circulation, especially in the tropics, where solar radiation is highest.

The workaround, at present, is to add extra parameters to the model in order to map clouds. “But predicting future climate change is still pretty imprecise,” Schär says. “If we don’t know how many clouds are forming in the tropics, then we don’t know how much sunlight is hitting the earth’s surface—and hence we don’t know the actual size of the global energy balance.”

Even worse, from NewScientist: “How we broke the water cycle and can no longer rely on rain to fall.” What hubris, and how preposterous, to claim our puny CO2 emissions have upset hydrology. The lack of correlation is obvious to those who care to look:

The climatist paradigm is myopic and lopsided.  A previous post below provides a cure for those whose vision is impaired by the IPCC consensus view of climate reality.

Curing Radiation Myopia Regarding Climate

E.M. Smith provides a helpful critique of a recent incomplete theory of earth’s climate functioning in his Chiefio blog post So Close–Missing Convection and Homeostasis. Excerpts in italics with my bolds and added images.

It is Soooo easy to get things just a little bit off and miss reality. Especially in complex systems and even more so when folks raking in $Millions are interested in misleading for profit. Sigh.

Sabine Hossenfelder does a wonderful series of videos ‘explaining’ all sorts of interesting things in and about actual science and how the universe works. She is quite smart and generally “knows her stuff”. But… It looks like she has gotten trapped into the Radiative Model of Globull Warming.

The whole mythology of Global Warming depends on having you NOT think about anything but radiative processes and physics. To trap you into the Radiative Model. But the Earth is more complex than that. Much more complex. Then there’s the fact that you DO have some essential Radiative Physics to deal with, so the bait is there.   However…

It is absolutely essential to pay attention to convection in the lower atmosphere
and to the “feedback loops” or homeostasis in the system.

The system acts to restore its original state. There is NO “runaway greenhouse” or we would have never evolved into being since the early earth had astoundingly high levels of CO2 and we would have baked to death before getting out of our slime beds as microbes.

Figure 16. The geological history of CO2 level and temperature proxy for the past 400 million years. CO2 levels now are ~ 400ppm. Source: Davis, W. J. (2017).

OK, I’ll show you her video. It is quite good even with the “swing and a miss” at the end. She does 3 levels of The Greenhouse Gas Mythology so you can see the process evolving from grammar school to high school to college level of mythology. But then she doesn’t quite make it to Post-Doc Reality.

Where’s she wrong? (Well, not really wrong, but lacking…)

I see 2 major issues. First off, she talks about the “lower atmosphere warming”. Well, yes and no. It doesn’t “warm” in the sense of getting hotter, but it does speed up convection to move the added heat flow.

In English, “heating” has two different meanings: increasing temperature, and increasing heat flow at a given temperature.

We see this in “warm up the TV dinner in the microwave” meaning to heat it up from frozen to edible; and in the part where the frozen dinner is defrosting at a constant temperature as it absorbs heat but turns it into the heat of fusion of water. So you can “warm it up” by melting at a constant temperature of frozen water (but adding a LOT of thermal energy – “heat”) then later as increasing temperature once the ice is melted. It is very important to keep in mind that there are 2 kinds of “heating”. NOT just “increasing temperature”.

In the lower atmosphere, the CO2 window / Infrared Window is already firmly slammed shut. Sabine “gets that”. Yay! One BIG point for her! No amount of “greenhouse gas” is going to shut that IR window any more. As she points out, you get about 20 meters of transmission and then it is back to molecular vibrations (aka “heat”).

So what’s an atmosphere to do? It has heat to move! Well, it convects. It evaporates water.

Those 2 things dominate by orders of magnitude any sort of Radiative Model Physics. Yes, you have radiation of light bringing energy in, but then it goes into the ocean and into the dirt and the plants and even warms your skin on a sunny day. And it sits there. It does NOT re-radiate to any significant degree. Once “warmed” by absorption, heat trying to leave as IR hits a slammed shut window.

The hydrological cycle. Estimates of the observed main water reservoirs (black numbers, in 10^3 km^3) and the flow of moisture through the system (red numbers, in 10^3 km^3 yr^-1). Adjusted from Trenberth et al. [2007a] for the period 2002–2008 as in Trenberth et al. [2011].

So what does happen? Look around, what do you see? Clouds. Rain. Snow. (sleet hail fog etc. etc.)

Our planet is a Water Planet. It moves that energy (vibrations of atoms, NOT radiation) by having water evaporate into the atmosphere. (Yes, there are a few very dry deserts where you get some radiative effects and can get quite cold at night via radiation through very dry air, but our planet is 70% or so oceans, so those areas are minor side bars on the dominant processes). This water vapor makes the IR window even more closed (less distance to absorption). It isn’t CO2 that matters, it is the global water vapor.

What happens next?

Well, water holds a LOT of heat (vibration of atoms and NOT “temperature”) as the heat of vaporization. About 540 calories per gram (compared to 80 for melting “heat of fusion” and 1 for specific heat of a gram of water). Compare those numbers again. 1 for a gram of water. 80 for melting a gram of ice. 540 for evaporating a gram of water. It’s dramatically the case that evaporation of water matters a lot more than melting ice, and both of them make “warming water” look like an irrelevant thing.

Warming a gram of water by one degree moves 1/80th as much heat as melting a gram of ice, and 1/540th as much as evaporating a gram from the surface. Warming the air matters even less to the heat content of the system.
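A few lines of Python make the same comparison, using the rounded handbook values quoted above.

```python
# Comparing the three "heats" of water from the text, in calories per gram.
SPECIFIC_HEAT = 1.0           # warm 1 g of liquid water by 1 C
HEAT_OF_FUSION = 80.0         # melt 1 g of ice
HEAT_OF_VAPORIZATION = 540.0  # evaporate 1 g of liquid water

print(f"Melting vs. warming by 1 C:     {HEAT_OF_FUSION / SPECIFIC_HEAT:.0f} : 1")
print(f"Evaporating vs. warming by 1 C: {HEAT_OF_VAPORIZATION / SPECIFIC_HEAT:.0f} : 1")
print(f"Evaporating vs. melting:        {HEAT_OF_VAPORIZATION / HEAT_OF_FUSION:.2f} : 1")
# Per gram, the phase changes move far more energy than any plausible
# change in the water's own temperature.
```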

So to have a clue, one MUST look at the evaporation of water from the oceans; everything else is small change.

Look at any photo of the Earth from space. The Blue Marble covered in clouds. Water and clouds. The product of evaporation, convection, and condensation. Physical flows carrying all that heat (“vibration of atoms” and NOT temperature, remember). IF you add more heat energy, you can speed up the flows, but it will not cause a huge increase in temperature (and mostly none at all). It is the mass flow that changes: the number of vibrating molecules at a given temperature, not the temperature of each.
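As a back-of-the-envelope sketch of that claim, suppose (purely as an assumption) that one extra watt per square meter at the surface went entirely into evaporation rather than into warming. The constants below are standard handbook values; the 1 W/m^2 figure is hypothetical.

```python
# How much extra water would cycle IF (assumption) 1 W/m^2 of extra surface
# heating went entirely into evaporation instead of raising temperature?
LATENT_HEAT_VAPORIZATION = 2.26e6  # J per kg of water evaporated
WATER_DENSITY = 1000.0             # kg per m^3
SECONDS_PER_YEAR = 3.156e7

extra_flux = 1.0  # assumed extra surface heating, W/m^2 (J per s per m^2)

evap_rate = extra_flux / LATENT_HEAT_VAPORIZATION           # kg per m^2 per second
evap_mm_per_year = evap_rate / WATER_DENSITY * SECONDS_PER_YEAR * 1000.0

print(f"Extra evaporation: about {evap_mm_per_year:.0f} mm of water per year")
# Roughly 14 mm/yr of additional evaporation (and eventual rain) would carry
# away 1 W/m^2 -- a change in mass flow rather than in surface temperature.
```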

In the end, a lot of mass flow happens, lofting all that water vapor with all that heat of vaporization way up toward the Stratosphere. This is why we have a troposphere, a tropopause (where it runs out of steam… literally…) and a stratosphere.

What happens when it gets to the stratosphere boundary? Well, along the way that water vapor turns into very tiny drops of liquid water (clouds) and eventually condenses into big drops of water (rain), and some of it even freezes (hail, snow, etc.). Now think about that for a minute. That’s 540 calories per gram of heat (molecular vibration NOT temperature, remember) being “dumped” way up high in the top of the troposphere as it condenses, and another 80 per gram if it freezes: 620 total. That’s just huge.
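For scale, here is the same arithmetic in Python, converting the calorie figures quoted above into joules. The “1 mm of rain over 1 m^2” example is just an illustrative unit (it happens to be exactly 1 kg of water).

```python
# Heat released aloft per gram of water, using the figures from the text.
CAL_TO_J = 4.184

condense = 540.0  # cal/g released when vapor condenses into cloud droplets
freeze = 80.0     # cal/g more if the drop then freezes (snow, hail)

total_cal_per_g = condense + freeze
print(f"Condense + freeze: {total_cal_per_g:.0f} cal/g "
      f"= {total_cal_per_g * CAL_TO_J / 1000:.1f} kJ per gram")

# Scale it up: 1 mm of rain over 1 m^2 is 1 kg of water.
kJ_per_mm = condense * CAL_TO_J  # cal/g * (J/cal) = J/g = kJ/kg, condensation only
print(f"1 mm of rain over 1 m^2 releases about {kJ_per_mm:.0f} kJ high in the troposphere")
```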

This is WHY we have a globe covered with rain, snow, hail, etc. etc. THAT is all that heat moving. NOT any IR Radiation from the surface. Let that sink in a minute. Fix it in your mind. WATER and ICE and Water Vapor are what moves the heat, not radiation. We ski on it, swim in it, have it water our crops and flood the land. That’s huge and it is ALL evidence of heat flows via heat of vaporization and fusion of water.

It is all those giga-tons of water cycling to snow, ice and rain, then falling back to be lofted again as evaporation in the next cycle. That’s what moves the heat to the stratosphere, where CO2 then radiates it to space (after all, radiation toward the surface hits that closed IR window and stops). At most, more CO2 can let the Stratosphere radiate (and “cool”) better. It cannot make the Troposphere any less convective and non-radiative.
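How much heat does that cycling actually move? A rough sketch, assuming as a round number a global mean precipitation of about one meter per year (an assumed value for illustration; published estimates are of that order):

```python
# Rough global latent heat flux implied by the water cycle, assuming
# (round number) about 1 meter of precipitation per year, globally averaged.
LATENT_HEAT_VAPORIZATION = 2.26e6  # J per kg
WATER_DENSITY = 1000.0             # kg per m^3
SECONDS_PER_YEAR = 3.156e7

mean_precip_m_per_year = 1.0  # assumed global-average precipitation

mass_flux = mean_precip_m_per_year * WATER_DENSITY / SECONDS_PER_YEAR  # kg/m^2/s
latent_heat_flux = mass_flux * LATENT_HEAT_VAPORIZATION                # W/m^2

print(f"Latent heat carried aloft: about {latent_heat_flux:.0f} W/m^2, globally averaged")
# Around 70 W/m^2 -- the same order as published surface energy budgets
# attribute to evapotranspiration.
```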

Then any more energy “trapped” at the surface would just run the mass transport water cycle faster. It would not increase the temperature.

More molecules would move, but the temperature tops out. Homeostasis wins. We can see this already in the Sub-Tropics. As the seasons move to fall and winter, water flows slow dramatically; I have to water my Florida lawn and garden. As the seasons move to spring and summer, the mass flow picks up dramatically, eventually reaching hurricane size and dumping up to FEET of condensed water (that all started as warm water vapor evaporating from the ocean). Today it is headed for about 72 F here (and no rain). At the peak of hurricane season, we get to about 84 or 85 F ocean surface temperature, the water vapor cycle runs full blast, and we get “frog strangler” levels of rain. That’s the difference: slow water cycle or fast.

IF (and it is only an “if”, not a when) you could manage to increase the heat at the surface of the planet in, say, Alaska: at most you would get a bit more rain in summer, a bit more snow in winter, and MAYBE the slight possibility of one or two days of rain that would otherwise have been snow or sleet.

Then there’s the fact that natural cycles swamp all of that CO2 fantasy anyway. The Sun, as just one example, showed a large change in IR / UV levels with the Great Pacific Climate Shift (about 1975) and then back again in about 2000. Planetary tilt, wobble, eccentricity of the orbit and more put us into ice ages (as we ARE right now, though in an “interglacial” within this ice age… a nice period of warmth that WILL end) and pull us out of them. Glacials and interglacials come and go on various cycles (100,000 years, 40,000 years, and 12,000-year interglacials – ours ending now, but slowly). The simple fact is that Nature Dominates, and we are just not relevant. To think we are is hubris of the highest order.

See Also: Bill Gray: H2O is Climate Control Knob, not CO2

Figure 9: Two contrasting views of the effects of how the continuous intensification of deep cumulus convection would act to alter radiation flux to space. The top (bottom) diagram represents a net increase (decrease) in radiation to space
Footnote

There are two main reasons why investigators are skeptical of AGW (anthropogenic global warming) alarm. This post intends to be an antidote to myopic and lop-sided understandings of our climate system.

1. CO2 Alarm is Myopic: Claiming CO2 causes dangerous global warming is too simplistic. CO2 is but one factor among many other forces and processes interacting to make weather and climate.

Myopia is a failure of perception by focusing on one near thing to the exclusion of the other realities present, thus missing the big picture. For example: “Not seeing the forest for the trees.”  AKA “tunnel vision.”

2. CO2 Alarm is Lopsided: CO2 forcing is too small to have the overblown effect claimed for it. Other factors are orders of magnitude larger than the potential of CO2 to influence the climate system.

Lop-sided refers to a failure in judging values, whereby someone lacking in sense of proportion, places great weight on a factor which actually has a minor influence compared to other forces. For example: “Making a mountain out of a mole hill.”