Tag Archives: CMIP6 models

Climate Model Bias 6: WGII

From Watts Up With That?

By Andy May

The previous parts of this series investigated model bias in the CMIP6 models and in their interpretation in AR6 WGI. This part looks at model bias in AR6 WGII, Climate Change 2022: Impacts, Adaptation, and Vulnerability.[1] The IPCC WGII report uses the possible future climate projections from the WGI report to project the future impact of climate change on society. It uses socio-economic models to accomplish this. As we saw in the previous parts of this series, the WGI report is biased and ignores possible natural contributions to recent observed global warming from changes in the Sun, cloud cover, and the meridional transport of energy.

The WGI/CMIP6 models, rather arbitrarily, assign all warming since 1750 to human influences, particularly CO2 emissions.[2] WGII accepts this controversial conclusion. It uses projected CO2 emissions combined with the WGI/CMIP6 models to predict future temperature and projected knock-on effects to other climate components, like precipitation, to model the future impact on human civilization.

WGII states that:

“Human-induced climate change, including more frequent and intense extreme events, has caused widespread adverse impacts and related losses and damages to nature and people, beyond natural climate variability.”[3]
AR6 WGII, page 9

This is only true if we accept their assumption about the range of natural climate variability, but as we saw in the previous parts of this series, their assumptions about natural warming, especially the impact of solar variability, are very controversial. Further, whether climate change is natural or human-caused, someone, somewhere, is nearly always going to be adversely affected by a change in climate, while others will benefit from the same change. How widespread is “widespread?”

WGII liberally discusses the potential negative impacts of climate change,[4] and they discuss the potential benefits of their recommended adaptation and mitigation policies, but the report rarely mentions the well-documented potential benefits of global warming and additional atmospheric CO2.[5] The fact that WGII only considers the problems of climate change and not the benefits reveals their bias and invalidates their analysis. Even when mentioning a benefit, they find something negative in it. For example, they mention that elevated CO2 benefits woody plants, but that woody plants can cause an increase in atmospheric carbon.[6]

As Brian O’Neill writes, while many studies anticipate problems in the future, they also predict a future where humanity is better educated, better fed, longer lived, healthier, with less poverty, and less conflict. This is simply continuing a trend that has been underway for many decades.[7] O’Neill reports that currently there are 700-800 million people at risk of hunger globally. By 2050, even including the possible effects of 2°C of warming, that number will fall to 250 million.[8]

Currently the world’s economy is growing at between 2 and 3% per year,[9] and this is not expected to change much in the future. Looking ahead at a possible 2.5°C of warming over the next century or so, economists anticipate anything from a net positive climate change impact of about 2% to a net negative impact of about 2.5% of global GDP. It is significant that even the sign of the net economic impact of climate change is not known. The average impact for 2.5°C of warming is a negative 1.3% for the average person.[10] Over the next 80 years global GDP is expected to grow to between 487% and 1,000% of today’s level, so a negative 1.3% due to climate change is unlikely to be noticed. Richard Tol writes that the uncertainty in the estimates of the impact of climate change on total economic welfare is very large and that, if we take this uncertainty into account, the impact of climate change does not significantly deviate from zero until 3.5°C of warming.[11]
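
To see why a 1.3% reduction would be essentially invisible against that growth, here is a rough arithmetic sketch using only the figures quoted above (the 2-3% annual growth rates and Tol’s average -1.3% impact); it is purely illustrative, not a reproduction of any published calculation.

```python
# Rough arithmetic behind the figures quoted above (illustrative only; the
# 2-3% annual growth rates and the -1.3% impact are taken from the text).
YEARS = 80
CLIMATE_IMPACT = -0.013   # Tol's average impact of 2.5 C of warming on the average person

for annual_growth in (0.02, 0.03):
    gdp_factor = (1 + annual_growth) ** YEARS          # GDP in ~2100 relative to today
    gdp_with_damage = gdp_factor * (1 + CLIMATE_IMPACT)
    print(f"{annual_growth:.0%}/yr growth: {gdp_factor:.0%} of today's GDP without "
          f"climate damage, {gdp_with_damage:.0%} with it")
```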

Emissions and impact scenarios

The future cannot be predicted. So, the concept of “scenarios” was developed in the 1960s by Herman Kahn, a military strategist with the RAND Corporation.[12] The idea is to develop a “business as usual” forecast that assumes no unusual events occur over the planning period. Then you vary something and compute an alternative forecast that shows the difference between the baseline, business-as-usual, forecast and your model. It is just a learning tool and, like all models, is used to investigate the possible impact of policy changes, regulations, or tactical decisions in wars or battles. We are not supposed to believe any of the forecasts; it is just the relative values between the various assumptions that are important. Scenario analysis is widely used to do cost-benefit analysis. However, since WGII only incorporates the costs and leaves out the benefits, their cost-benefit analysis is invalid.

It is very important to remember that the projections used in WGII assume that there will be no natural warming or cooling between now and 2100. If there are natural forces acting on climate, then the greenhouse gas-based projections they rely upon will be wrong and their projected impacts on human civilization must be wrong as well. The AR6 scenarios of temperature change relative to 1850 to 1900 are shown in figure 1.

Figure 1. The temperature projected to 2100. Source: (IPCC, 2022, p. 16).

Hausfather and Peters[13] have called the higher scenarios, SSP3-7.0 and SSP5-8.5 (as well as their AR5 equivalent RCP8.5), unlikely, but since this view is contested,[14] AR6 WGII takes no position on which of the scenarios in figure 1 is most likely.[15] This is unfortunate since the difference between the scenarios in 2100, only 76 years from today, is over three degrees. The combined uncertainty in the projected warming and in the potential impact of that warming is therefore extremely large.

Roger Pielke Jr. and Justin Ritchie tell us that the ancestor of the SSP5-8.5 scenario in figure 1 originated in the first IPCC report in 1990. In 1990, with what was known then, it was a reasonable “business-as-usual” scenario. It predicted a large increase in coal consumption and a CO2 concentration of 1,200 ppm in 2100. Today that emissions scenario is reached in SSP5-8.5, but with what we know now it is not “business-as-usual”; in fact it is an implausible future, one that becomes less plausible with each passing year.[16] To be fair, the IPCC does not call SSP5-8.5 business-as-usual; that label is used by others, presumably because that is what it was called in the first report in 1990.[17]

Marcel Crok reports in the book that he and I edited, The Frozen Climate Views of the IPCC, that the unlikely, and now implausible, SSP5-8.5 and its predecessor RCP8.5 are mentioned in AR6 41.5% of the time according to Roger Pielke Jr., far more often than the more likely SSP2-4.5 and RCP4.5 scenarios (mentioned 17% of the time). The latter two scenarios more closely match recent observations.[18] Thus, WGII often uses the biased and too-hot WGI models, driven by these maximal and implausible emissions scenarios, as input to its modeled climate impact projections.

Ignoring the Good News

While using implausible scenarios and biased climate model results in assessing the impacts of climate change is unwise, ignoring the positive impacts of climate change and focusing only on the bad may well be worse. The whole idea of using scenarios is to investigate the full range of possible outcomes, not to cherry-pick the model input to manufacture a desired outcome, a problem often called reporting bias. It is this part of the WGII procedure that costs them credibility.

Marcel Crok shows us that U.S. major and all landfalling hurricanes have been declining since 1900.[19] Globally, there is no trend in cyclones and hurricanes.[20] There is also no trend in global accumulated cyclone energy.[21] AR6 WGI finds that since 1950 there has been an increase in the number of hot days and heatwaves,[22] but as figure 1 in part 2 shows, the world was cooling in 1950. At least in the United States, records show that peak hot days and heatwaves occurred in the 1930s.[23] AR6 WGI also finds that there is “low confidence in general statements to attribute changes in flood events to anthropogenic climate change.”[24] The idea that extreme weather is increasing globally is very controversial.

It is worth noting that AR6 WGII states that they have high confidence that some extreme weather is increasing as a result of climate change, including extreme rainfall events, more frequent and stronger cyclones/hurricanes, and that recent devastating floods were made more likely due to climate change.[25] This appears to be directly contradicted by what is stated in AR6 WGI, but WGII cleverly sidesteps the contradiction by specifying “Some extreme weather…” and “devastating floods in western Europe…” Thus, to make their point, they cherry pick locations and events and avoid discussing global impacts that have not changed or are decreasing.[26] In any given year, extreme weather events are increasing somewhere, that is the nature of weather. Their assertion is contradicted by the work of Zhongwei Yan, Philip Jones, and Anders Moberg already mentioned in part 5.[27]

Finally, both WGI and WGII completely ignore evidence that global warming and additional CO2 have many benefits. Bjorn Lomborg reports that human welfare will likely increase 450% in the 21st century and that damages due to climate change might reduce this to 434%,[28] a difference that will be hard for most people to detect. Lomborg also finds that non-climate-related deaths, due to earthquakes, tsunamis, volcanoes, etc., have fallen only slightly in the past 100 years, but climate-related deaths have fallen a staggering 99%. Part of the reason is that cold-related deaths are much more common than heat-related deaths, and as the world warms, cold-related deaths fall more than heat-related deaths increase.[29]

Cherry picking

The authors of AR6 WGII were particularly guilty of selecting papers to discuss that supported their assumptions and ignoring papers that refuted or disagreed with them. In a classic case, they discussed Grinsted et al.,[30] which claims to be able to attribute some U.S. hurricane losses to human-caused global warming. Grinsted et al. is the only paper, out of many,[31] that was able to attribute hurricane losses to human-caused or human-enhanced hurricane activity. However, Roger Pielke Jr. has found that the paper is flawed and has requested that it be retracted.[32]

Even though the paper is likely flawed and is contradicted by many other studies, it is used in AR6 WGII to support the idea that some U.S. hurricane losses can be “partly attributed to anthropogenic climate change.”[33] To be fair, they do mention one of the many studies that disagree with Grinsted et al. However, they also mention one other paper, Estrada et al.,[34] which they imply supports attribution to human-caused climate change, but the paper does not say that. Estrada et al. say that their results are ambiguous, and that in 2005, 2-12% of normalized losses “could be attributable to climate change.” So, they chose one year, only considered the United States, and found that maybe 2-12% of the damage was due to climate change. In Estrada et al.’s conclusions they note:

“Increases in wealth and population alone cannot account for the observed trend in hurricane losses. The remaining trend in itself does not prove the existence of a climate change signal, as it could be due to causes not considered here.”
Estrada, Botzen, and Tol, Nature Geoscience, 2015

In other words, they detect a trend in normalized hurricane damage that cannot be fully explained by increasing wealth and population, and it is possible that this excess is due to climate change. Estrada et al. explain that prominent ocean oscillations, such as the Atlantic Multidecadal Oscillation (AMO), can account for some of the excess hurricane damage observed. Also, data problems prior to 1940 could produce a spurious upward trend in damage. So, Estrada et al.’s analysis uncovered a small excess trend in damage that might be explainable by climate change but could also be caused by other factors. Not very convincing.

AR6 WGII leaves the reader with the idea that it is two studies against one, when actually one of the pro-attribution studies is inconclusive and they ignored a large number of studies that found no connection between hurricane damage and climate change. WGII does make the following statement, which partially absolves them:

“Climate change explains a portion of long-term increases in economic damages of hurricanes (limited evidence, low agreement).”[35]
IPCC AR6 WGII, page 1978

They are saved by the “limited evidence, low agreement” bit, but somehow that part is always left out of the press releases and news media.

WGII Model Bias, Summary

Just as WGI ignored the potential impact of solar variability and changes in meridional transport, WGII ignored the potential benefits of warming and additional atmospheric CO2. This invalidates the report. By ignoring the well-documented benefits of global warming and additional CO2, they clearly cannot assess the impact of climate change or our vulnerability to climate changes. It makes their report useless for policy making or cost-benefit analysis.

It is hard to decide exactly how to characterize this problem in AR6 WGII. It could be described as reporting bias, since they ignored so many studies that report warming and CO2 benefits. It could also be described as confirmation bias, given their stated assumption that warming and additional CO2 are a bad thing. Either way, they failed to honestly report the current state of the existing literature on the subject.

Next, we look at model bias in WGIII.

Download the bibliography here.


  1. (IPCC, 2022) 

  2. (IPCC, 2021, p. 67) 

  3. (IPCC, 2022, p. 9) 

  4. (IPCC, 2022, pp. 44-70) 

  5. (May, Are fossil-fuel CO2 emissions good or bad?, 2022g), (Idso, 2013), (Zhu, Piao, & Myneni, 2016), (Tol R. S., 2018), (Tol R., Correction and Update: The Economic Effects of Climate Change, 2014b), and (O’Neill, 2023) 

  6. (IPCC, 2022, p. 264) 

  7. (O’Neill, 2023) 

  8. (O’Neill, 2023) 

  9. (International Monetary Fund, 2022) 

  10. (Tol R. S., 2018) 

  11. (Tol R. S., 2018) 

  12. (Pielke & Ritchie, 2021) 

  13. (Hausfather & Peters, 2020) 

  14. (IPCC, 2022, p. 136) 

  15. (Pielke & Ritchie, 2021) 

  16. (Pielke & Ritchie, 2021) and (Hausfather & Peters, 2020) 

  17. (IPCC, 1990, pp. 55-56) 

  18. (Crok & May, 2023, pp. 122-126), (Hausfather & Peters, 2020), and (Pielke Jr, Burgess, & Ritchie, 2021) 

  19. (Crok & May, 2023, p. 142) 

  20. (Weinkle, Maue, & Pielke Jr., 2012) and see Dr. Maue’s site https://climatlas.com/tropical/ 

  21. (Crok & May, 2023, p. 147), also see Dr. Maue’s site https://climatlas.com/tropical/ 

  22. (IPCC, 2021, p. 82) 

  23. (Crok & May, 2023, p. 146) 

  24. (IPCC, 2021, p. 1569) 

  25. (IPCC, 2022, p. 588) 

  26. (Lomborg, Welfare in the 21st century: Increasing development, reducing inequality, the impact of climate change, and the cost of climate policies, 2020), (Lomborg, We’re Safer From Climate Disasters Than Ever Before, 2021), and (Pielke Jr., 2021) 

  27. (Yan, et al., 2001) 

  28. (Lomborg, Welfare in the 21st century: Increasing development, reducing inequality, the impact of climate change, and the cost of climate policies, 2020) 

  29. (Dixon, et al., 2005) 

  30. (Grinsted, Ditlevsen, & Christensen, 2019) 

  31. For a list see: (Crok & May, 2023, p. 153) 

  32. (Pielke Jr., Apples, Oranges, and Normalized Hurricane Damage, 2024) 

  33. (IPCC, 2022, p. 1978) 

  34. (Estrada, Botzen, & Tol, 2015) 

  35. (IPCC, 2022, p. 1978) 

What Period of Warming Best Correlates with Climate Sensitivity?

From Roy Spencer, PhD.

February 6th, 2024 by Roy W. Spencer, Ph. D.

When computing temperature trends in the context of “global warming” we must choose a region (U.S.? global? etc.) and a time period (the last 10 years? 50 years? 100 years?) and a season (summer? winter? annual?). Obviously, we will obtain different temperature trends depending upon our choices. But what significance do these choices have in the context of global warming?

Obviously, if we pick the most recent 10 years, such a short period can have a trend heavily influenced by an El Nino at the beginning and a La Nina at the end (thus depressing the trend) — or vice versa.

Alternatively, if we go too far back in time (say, before the mid-20th Century), increasing CO2 in the atmosphere cannot have much of an impact on the temperatures before that time. Inclusion of data too far back will just mute the signal we are looking for.

One way to investigate this problem is to look at climate model output across many models to see how their warming trends compare to those models’ diagnosed equilibrium climate sensitivities (ECS). I realize climate models have their own problems, but at least they generate internal variability somewhat like the real world, for instance with El Ninos and La Ninas scattered throughout their time simulations.

I’ve investigated this for 34 CMIP6 models that have data available at the KNMI Climate Explorer website and also have published ECS values. The following plot shows the correlation between the 34 models’ ECS and their temperature trends through 2023, computed for a range of different starting years.
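
As a rough sketch of the calculation described here (my own illustration, not Dr. Spencer’s actual code), one can compute each model’s trend through 2023 for a range of starting years and then correlate those trends with the published ECS values. The annual GMST arrays and ECS values are assumed inputs, obtained separately (e.g., from the KNMI Climate Explorer and published ECS tables).

```python
# Sketch of the ECS-vs-trend correlation described above (assumed inputs, not real data):
# for each candidate start year, compute every model's GMST trend through 2023,
# then correlate those trends with the models' published ECS values.
import numpy as np
from scipy import stats

def trend_to_2023(years, temps, start_year):
    """OLS warming trend (deg C per decade) from start_year through 2023.
    `years` and `temps` are aligned numpy arrays of annual values for one model."""
    mask = (years >= start_year) & (years <= 2023)
    return stats.linregress(years[mask], temps[mask]).slope * 10.0

def ecs_trend_correlation(years, gmst_by_model, ecs_by_model, start_years):
    """Pearson correlation between model ECS and model trend, for each start year."""
    models = sorted(ecs_by_model)
    ecs = np.array([ecs_by_model[m] for m in models])
    correlations = []
    for start in start_years:
        trends = np.array([trend_to_2023(years, gmst_by_model[m], start) for m in models])
        correlations.append(stats.pearsonr(ecs, trends)[0])
    return np.array(correlations)
```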

The peak correlation occurs around 1945, which is when CO2 emissions began to increase substantially, after World War II. But there is a reason why the correlations start to fall off after that date.

The CMIP6 Climate Models Have Widely Differing Aerosol Forcings

The following plot (annotated by me, source publication here) shows that after WWII the various CMIP6 models have increasingly different amounts of aerosol forcings causing various amounts of cooling.

If those models had not differed so much in their aerosol forcing, one could presumably have picked a later starting date than 1945 for meaningful temperature trend computation. Note that the differences remain large even by 2015, which is in any case too recent a starting point to be useful for trend computations through 2023.

So, what period would provide the “best” length of time to evaluate global warming claims? At this point, I honestly do not know.

Cyclone Jasper & BOM Forecasting – Getting to the Truth

From Jennifer Marohasy

By jennifer

A lie will travel halfway around the world while the truth is lacing her boots, so the adage goes. And so, the showmen will continue to misrepresent the natural disasters that befall northern Australia while finding something popular to say at the very moment everyone’s attention is focused on that event.

For sure, the rainfall associated with Cyclone Jasper was extraordinary, but not unprecedented for the Cairns catchment. For sure, it is difficult to forecast weather and climate, but the skill of new systems based on artificial intelligence (AI) shows great improvement, while the Australian Bureau of Meteorology remains wedded to its general circulation models.

Contrary to various popular claims, including by my colleague Peter Ridd*, the Bureau uses the same supercomputer and the same general circulation model to forecast rainfall whether considering the next three hours, the next three months, or the next three decades. It uses a simulation developed by the UK Met Office, known as ACCESS-S2, which is also one of the Intergovernmental Panel on Climate Change’s CMIP6 models.

All the general circulation models are underpinned by the assumption that carbon dioxide drives climate change, and all of them are focused on large-scale processes, making it difficult to accurately forecast what matters to real people – local climate, especially the extreme rainfall associated with cyclones and the seasonal rainfall deficits associated with drought.

The Bureau claims it can accurately forecast temperature to within 2 degrees Celsius on any day. This may be of intense political interest, but it is of little real value to the Australian community. Being able to accurately forecast rainfall would be much more meaningful.

There are four types of precipitation-forming processes, including cyclonic rotation (low pressure), though the IPCC’s models are all based on surface heating (convection). It is perhaps for this reason that these models, while accurately simulating general global patterns of rainfall, remain unable to capture high-intensity events over small areas, including rainfall associated with cyclones. Some of these problems can be overcome through downscaling, but even then, it is unclear why the elevation of mountains is mostly underestimated while their spatial extent is overestimated. Fundamental to the problem is that cloud formation occurs at a scale much smaller than the resolvable grid scale used within all the general circulation models.

The Bureau was able to accurately forecast the trajectory of Cyclone Jasper while it presented as a large and slow-moving weather system, but once the structure of the cyclone began to break down, which occurred from December 9, ACCESS-S2 struggled to accurately forecast both direction and intensity. Worse, when the heaviest rainfall began eight days later, on December 17 in Cairns, the Bureau was unable to capture the extent of the downpour because its automatic weather recording system at Cairns airport failed. The same problem was experienced during the Lismore flooding of April 2022, meaning that the true intensity of the rainfall from these events is not even properly documented.

I first became interested in rainfall forecasting using artificial intelligence in January 2011, following the flooding of Brisbane. My colleague John Abbot used artificial neural networks, a form of AI, for share market trading, so successfully that he bought a red Corvette with the winnings one day. That same sports car was drowned in the 2011 Brisbane flooding.

Over the next five years to 2017, John Abbot and I successfully published a dozen research papers on our new technique using AI for monthly rainfall forecasting. We published in the best international peer-reviewed journals and as book chapters following AI conferences.

Our very first paper about forecasting monthly rainfall – for 17 locations in Queensland 12 months in advance – was published in Atmospheric Research which is an Elsevier journal sponsored by the Chinese Academy of Sciences.

Back 12 years ago, when we were pioneering the technique, the Chinese were very interested, and prepared to publish us, but not our own Bureau of Meteorology who scoffed at the idea that AI could be used for weather or climate forecasting.

“But the climate is on a new and dangerous trajectory”, said Oscar Alves, who then headed up the long-range forecasting unit at the Bureau in Melbourne. At the time the Bureau was using a statistical technique for its medium-term forecasts while busy developing its own general circulation model known as POAMA. POAMA was subsequently used for operational forecasts from June 2013, before being replaced by ACCESS-S1 in August 2018. POAMA proved a disaster and was replaced without even a media release announcing the changeover. It took the Bureau 20 years to develop POAMA, and they pulled it after just five years and a string of disastrous forecasts that were never acknowledged.

John Abbot and I spent an afternoon with Alves back in August 2011. We wanted to collaborate, convinced even then that AI could significantly improve the skill of the Bureau’s rainfall forecasts. He had no interest in learning anything new. Alves now heads up the Bureau’s Earth System Modelling unit.

All these years later and the Bureau is still refusing to consider the value of AI for forecasting rainfall extremes whether the consequence of a cyclone or a drought.

Meanwhile Google is now using AI for weather forecasting with their GraphCast, run on a desktop, outperforming all the GCMs run on supercomputers.

Google’s GraphCast works from the same principles John Abbot and I used: recurrent cycles that can be found in weather and climate data – as long as the data hasn’t been remodelled to fit the human-caused global warming theory. And the Chinese are now working on AI systems that can forecast both the intensity of rainfall during cyclonic events, as well as trajectory.

I have no doubt that the rainfall forecasts for North Queensland following landfall by Cyclone Jasper would have been far superior if, ten years ago, the Bureau had begun to invest in AI technology and to develop some capacity in this very different technique.

There has been commentary suggesting Cyclone Jasper resulted in unprecedented rainfall in the headwaters of the Barron River, causing flooding of Cairns, particularly of the Northern Beaches. There is a long, continuous rainfall record for Kuranda indicating that while the December 2023 rainfall associated with Jasper was extraordinary, there are higher totals in previous years going back to 1911.

The Australian emergency management minister, Murray Watt, has ordered a review of the weather warning systems used by the Bureau of Meteorology, while claiming it will become increasingly difficult to predict the weather because of climate change. He should be calling for much more than this, and not using the excuse of ‘climate change’ for both the failed 3-day rainfall forecast and also the failed warning system after the heavy rains began to fall.

For the last twenty years various international working groups associated with the IPCC and the World Meteorological Organisation have been making submissions and predictions regarding the likely effects of global warming on tropical cyclones. These reports have indicated that the maximum intensity of cyclones is unlikely to significantly increase, certainly not beyond 10-20 percent. A ‘Statement on Tropical Cyclones and Climate Change’ in 2006 by Dr G. B. Love, the Permanent Representative for Australia, indicated that rainfall intensity could increase, due to the increasing water vapour content of the atmosphere. Twenty years later, the data show that both the intensity and the number of cyclones have been declining.

The extraordinary rainfall associated with the flooding of Lismore and surrounds in early 2022 may have been exacerbated by the increase in water vapour content from the eruption of the volcano Hunga Tonga in January of that year. It could be that the Hunga Tonga eruption has also caused a depletion of ozone in the stratosphere, after temporarily increasing the water vapour content. There has been no overall increase in the water vapour content of the lower troposphere associated with increasing atmospheric levels of carbon dioxide.

The general circulation models have difficulty simulating the local impact of volcanic ash on rainfall intensity and global temperatures, and this is a problem because aerosols can supercharge the atmosphere making rainfall more intense.

Importantly, and contrary to recent popular commentary, there are not two separate parts to the Bureau: one making operational weather forecasts and one concerning itself with climate change. Since June 2013, forecasts for the next three hours and the next three months have relied on a general circulation model, and since 2018 specifically on ACCESS, with some add-ons to provide more resolution. There is an urgent need for the skill of this general circulation model to be properly assessed. This could be done as a matter of urgency through a comparison of forecast versus observed rainfall for the Cairns catchment as it fell through December 2023, particularly after Cyclone Jasper made landfall.

There is also a need for the Bureau to quantify more generally the skill of ACCESS against that of the new AI weather and climate forecasting systems, including Google’s GraphCast and the Pangu-Weather AI model. Pangu AI can predict both the direction of cyclones and their likely impact, particularly their capacity to generate intense rainfall over a small area.

There has been much talk about the unprecedented.

What would be both unprecedented and welcome would be for the Bureau to start using KPIs and measuring predictions against actual rainfall totals. This needs to be done for the three-hour ‘rain burst’ events associated with low-pressure systems and also for the longer seasonal rainfall forecasts.

Using the same ACCESS general circulation model, the Bureau forecast that this summer would be an exceptionally dry one for the same regions that have now flooded. The drought forecast has also caused unnecessary hardship, with farmers selling livestock at discounted prices because so many anticipated being unable to feed their stock.

Ends.

John Abbot and Jennifer Marohasy outside the Administrative Appeals Tribunal in Brisbane, February 2023.

And just filing this here, by Peter Ridd and republished from The Australian, click here.

* Recent criticism of the Bureau of Meteorology for failing to predict the recent spate of extreme weather is unfair, is ultimately counter-productive and misses far more serious failings of the BOM.

Weather prediction is difficult. At best one can hope only to improve probabilities. And the weather hardest to predict is extreme events associated with storms. These systems are extremely “nonlinear”, to use the parlance of meteorology.

When there are large quantities of moisture in the lower levels of the atmosphere, the air need be lifted only slightly to trigger a violent updraft.

It is a huge slow-motion explosion where the fuel is the invisible water vapour turning into cloud. The amounts of energy involved can be huge – think Hiroshima atom bomb – and a tiny perturbation can set them off. It is often stated that a butterfly flapping its wings could trigger the storm, at least theoretically.

This is one of the least predictable phenomena on Earth. At best, weather prediction can indicate only that such storms are likely at a rough time and place. Perhaps the BOM can get the final warnings out a little faster, but a storm can morph into a supercell in a few minutes.

BOM’s performance in predicting the ultimate landfall of Tropical Cyclone Jasper was nothing short of brilliant. For days before it crossed the coast, the bureau predicted it would end up near Cairns. And that is where it went. The cyclone did minimal damage, but the rain cell associated with it sat stationary around Cairns for days, causing flooding. If the cell had moved, even slowly, Cairns would have been just extremely wet rather than breaking records. But that detail is beyond prediction.

Unjustified expectations of prediction accuracy will result in the bureau being forced to cover itself and issue warnings whenever there is a minute possibility of extreme weather. The predictions will become meaningless.

The BOM has a truly superb observation network of rain radars, rain gauges and flood levels. Millions of people use these, especially in country areas, for everything from bringing in the washing to gauging when it will be possible to drive across a flooded creek. This network gives us remarkable ability to see what is happening. Thirty years ago, we were almost blind compared with today.

So give the BOM a break, at least on this matter. But there are two BOMs. There is the operational weather BOM, which does the daily forecasts and measurements, and then there is the climate change part of the BOM. And that is where the criticism should be levelled.

The climate models used by the BOM and many other groups are regularly used to predict, with certainty, the end of the world because of “global boiling”. But those models are little better than a guess. We have no idea what caused historical climate change such as the Little Ice Age of a few centuries ago and the hot climate of the Egyptian period. Climate models fail on this. The bureau’s failure to acknowledge model weaknesses is unscientific. Uncertainties must be stated. If the BOM proclaims its predictions for the year 2100 are excellent, it can hardly complain when people get upset when its forecast for this afternoon turns into a dud.

Another major problem within the bureau is the section dealing with long-term temperature measurements. Most long-term measurements have been modified (homogenised), almost always making past temperatures cooler.

The BOM does not dispute it has done this, but there is a huge argument about whether it has done it in a justifiable way, and BOM has failed to release all its data about these temperature adjustments. This is inexcusable and breeds concerns about the bureau’s scientific integrity.

There is also the habit of the BOM to associate every extreme, or record-breaking, weather event with climate change. In fact, record events are inevitable every year because of the huge scale of the observation network.

But the climate section of the BOM uses record events for political purposes.

Should we have an inquiry into the BOM? Yes. But the good guys of the BOM short-term weather forecasting department need to stand up against the anti-science catastrophists in their climate department. Otherwise they deserve to be tarred with the same brush.

Peter Ridd is a physicist, adjunct fellow with the Institute of Public Affairs and chairman of the Australian Environment Foundation.

New Study Finds Most Of Antarctica Has Cooled By Over 1°C Since 1999…W. Antarctica Cooled 1.8°C

During the second half of the twentieth century, the West Antarctic Ice Sheet (WAIS) has undergone significant warming at more than twice the global mean and thus is regarded as one of the most rapidly warming regions on Earth. However, a reversal of this trend was observed in the 1990s, resulting in regional cooling. In particular, during 1999–2018, the observed annual average surface air temperature had decreased at a statistically significant rate, with the strongest cooling in austral spring.

From NoTricksZone

By Kenneth Richard on 6. November 2023

Significant 21st century cooling in the Central Pacific, Eastern Pacific, and nearly all of Antarctica “implies substantial uncertainties in future temperature projections of CMIP6 models.” – Zhang et al., 2023

New research indicates West Antarctica’s mean annual surface temperatures cooled by more than 1.8°C (a trend of -0.93°C per decade) from 1999-2018. In spring, the West Antarctic Ice Sheet (WAIS) cooling rate reached -1.84°C per decade.

Not only has the WAIS undergone significant cooling in the last two decades, but most of the continent has also cooled by more than 1°C. See, for example, the ~1°C per decade cooling trend for East Antarctica (2000 to 2018) shown in Fig. ES1.

Of 28 CMIP6 models, none captured a cooling trend – especially of this amplitude – for this region. This modeling failure “implies substantial uncertainties in future temperature projections of CMIP6 models.”

Image Source: Zhang et al., 2023

The post-1999 cooling trend has not just been confined to Antarctica. Sea surface temperatures (SSTs) in the Eastern and Central Pacific (south of 25°N) also cooled from 1999-2018 relative to 1979-1997. This cooling encompasses nearly half of the Southern Hemisphere’s SSTs.

Image Source: Zhang et al., 2023

The 1999-2018 mean annual surface temperature cooling of the Antarctic continent and nearly half of the Southern Hemisphere’s SSTs do not support the claims that surface warming is driven by human emissions of greenhouse gases (GHGs). After all, if the widespread cooling cannot be explained by the increase in GHG forcing, why would the same concentrations of GHGs explain the areas with warming temperatures?


Do CMIP5 models skillfully match actual warming?

From Climate Etc.

by Nic Lewis

Why matching of CMIP5 model-simulated to observed warming does not indicate model skill

A well-known Dutch journalist, Maarten Keulemans of De Volkskrant, recently tweeted an open letter to the Nobel-prizewinning physicist Professor Clauser in response to his signing of the Clintel World Climate Declaration that “There is no climate emergency”, asking for his response to various questions. One of these was:

The CLINTEL Declaration states that the world has warmed “significantly less than predicted by (the) IPCC”. Yet, a simple check of the models versus observed warming demonstrates that “climate models published since 1973 have generally been quite skillful predicting future warming”, as Zeke Hausfather’s team at Berkeley Earth recently analysed.

The most recent such analysis appears to be that shown for CMIP5 models in a tweet by Zeke Hausfather, reproduced in Figure 1. While the agreement between modeled and observed global mean surface temperature (GMST) warming over 1970–2020 shown in Figure 1 looks impressive, it is perhaps unsurprising given that modelers knew, when developing and tuning their models, what the observed warming had been over most of this period.

Figure 1. Zeke Hausfather’s comparison of global surface temperature warming in CMIP5 climate models with observational records. Simulations based on the intermediate mitigation RCP4.5 scenario of global human influence on ERF through emissions of greenhouse gases, etc. were used to extend the CMIP5 Historical simulations beyond 2005.

It is well known that climate models have a higher climate sensitivity than observations indicate. Figure 2 compares equilibrium climate sensitivity (ECS) diagnosed in CMIP5 models and in the latest generation, CMIP6, models with the corresponding observational estimate on the same basis in Lewis (2022) of 2.16°C (likely range 1.75–2.7°C). Only one model has an ECS below the estimate in Lewis (2022), and most models have ECS values exceeding the upper bound of its likely range. CMIP6 models are generally even more sensitive than CMIP5 models, with half of them having ECS values above the top of the 2.5–4°C likely range given in the IPCC’s 2021 Sixth Assessment Report: The Physical Science Basis (AR6 WG1).

Figure 2.  Red bars: equilibrium climate sensitivity in CMIP5 and CMIP6 models per Zelinka et al. (2020) Tables S1 & S2 estimated by the standard method (ordinary least squares regression over years 1–150 of abrupt4xCO2 simulations). Blue line and blue shaded band: best estimate and likely (17%-83% probability) range for ECS in Lewis (2022), derived from observational evidence over the ~150 year historical period but adjusted to correspond to that estimated using the aforementioned standard method  for models.
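
For readers unfamiliar with the “standard method” referred to in the caption, here is a minimal sketch of that calculation, commonly called the Gregory regression. The annual-mean warming and top-of-atmosphere imbalance arrays from an abrupt4xCO2 run are assumed inputs, and the variable names are mine, not Zelinka et al.’s.

```python
# Minimal sketch of the Gregory-regression ECS estimate mentioned in the caption.
# dT: annual-mean surface warming (K); N: annual-mean TOA imbalance (W m-2),
# both from an abrupt4xCO2 simulation (assumed inputs).
import numpy as np

def ecs_from_abrupt4xco2(dT, N):
    """Regress N on dT over years 1-150; the x-intercept is the equilibrium
    4xCO2 warming, half of which is taken as the ECS."""
    slope, intercept = np.polyfit(dT[:150], N[:150], 1)   # N ~ F_4x + slope*dT, slope = -lambda
    equilibrium_warming_4x = -intercept / slope            # where the fitted line crosses N = 0
    return equilibrium_warming_4x / 2.0                    # 4xCO2 -> 2xCO2
```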

So, how is it possible that Hausfather gets an apparently good match between models and observations in the period 1970-2020? Does it imply that the models correctly represent the effects of changes in “climate forcers”, such as the atmospheric concentration of greenhouse gases and aerosols, on GMST, and accordingly that their climate sensitivities are correct?

The key question is this: matching by CMIP5 climate models, in aggregate, of observed GMST changes would only be evidence that the models correctly represent the effects of changes in “climate forcers”, such as the atmospheric concentrations of greenhouse gases and aerosols, on GMST if the resulting changes in the combined strength of those forcers in the models matched best estimates of the actual changes. The standard measure of the strength of changes in climate forcers, in terms of their effect on GMST, is their “effective radiative forcing” (ERF), which measures the effect on global radiative flux at the top of the Earth’s atmosphere once it and the land surface have adjusted to the changes in climate forcers (see IPCC AR6 WG1 Chapter 7, section 7.3).

It is therefore important to compare changes in total ERF as diagnosed in CMIP5 models during their Historical and RCP4.5 scenario simulations over 1970–2020 with the current best estimates of their actual changes, which I will take to be those per IPCC AR6 WG1 Annex III, extended from 2019 to 2020 using the almost identical Climate Indicator Project ERF time series.

Historical and RCP4.5 ERF (referred to as “adjusted forcing”) in CMIP5 models was diagnosed in Forster et al. (2013) for the 20 models with the necessary data. I take the mean ERF for that ensemble of models[1] as representing the ERF in the CMIP5 models used in Figure 1.

Figure 3 compares the foregoing estimates of mean ERF in CMIP5 models with the best estimates given in IPCC AR6. Between the early 1980s and the late 2000s the CMIP5 and AR6 ERF estimates agreed quite closely, but they diverged both before and (particularly) after that period. The main reason for their divergence since 2007 appears to be that aerosol ERF, which is negative, is now estimated to have become much smaller over that period than was projected under the RCP4.5 scenario. Updated estimates of aerosol ERF also appear likely to account for about half of their lesser divergence prior to 1983, with the remainder mainly attributable to differences in ERF changes for land use and various other forcing agents.

Figure 3. Effective radiative forcing (ERF) over 1970–2020 as estimated in CMIP5 models (mean across 19 models) and the best estimate given in the IPCC Sixth Assessment Scientific Report (AR6 WG1). The ERF values are relative to their 1860–79 means.

The IPCC AR6 best estimate of the actual  ERF change between 1970 and 2020 is 2.53 Wm−2. The linear trend change over 1970–2020 given by ordinary least squares regression is 2.66 Wm−2, while the change between the means of the first and last decades in the period, scaled to the full 50 year period, is 2.59 Wm−2.

By comparison, the mean ERF change for CMIP5 models between 1970 and 2020 is 1.67 Wm−2. The linear trend change over 1970–2020 is 1.92 Wm−2, and the scaled change between the first to last decades’ means is 1.76 Wm−2.

It is evident that the AR6 estimate of the actual 1970–2020 ERF change is far greater than that in CMIP5 models. Based on the single years 1970 and 2020, the AR6-to-CMIP5 model ERF change ratio is 1.51. Based on linear trends that ratio is 1.39, while based on first and last decades’ means it is 1.46. The last of these measures is arguably the most reliable, since single year ERF estimates may be somewhat unrepresentative, and due to intermittent volcanism the ERF has large deviations from a linear relationship to time. As there is some uncertainty I will take the ratio as being in the range 1.4 to 1.5.
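
To make the three measures concrete, here is a minimal sketch of how they could be computed from an annual ERF series covering 1970–2020; the inputs are assumed, and the scaling convention for the decade-means measure is my reading of the description above, not code from the original analysis.

```python
# Three measures of the 1970-2020 ERF change discussed above, for an annual
# series `erf` with a matching `years` array (assumed inputs, 1970..2020 inclusive).
import numpy as np

def erf_change_measures(years, erf):
    span = years[-1] - years[0]                          # 50 years
    endpoint = erf[-1] - erf[0]                          # single-year difference, 2020 minus 1970
    trend_change = np.polyfit(years, erf, 1)[0] * span   # OLS linear-trend change over the period
    decade_diff = erf[-10:].mean() - erf[:10].mean()     # last-decade mean minus first-decade mean
    midpoint_gap = years[-10:].mean() - years[:10].mean()
    scaled_decades = decade_diff * span / midpoint_gap   # scale the decade-means change to the full span
    return endpoint, trend_change, scaled_decades

# The AR6-to-CMIP5 ratios quoted above are then just the element-wise quotients, e.g.
# np.array(erf_change_measures(years, erf_ar6)) / np.array(erf_change_measures(years, erf_cmip5))
```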

So, CMIP5 models matched the observed 1970–2020 warming trend, but the estimated actual change in ERF was 1.4 to 1.5 times greater than that in CMIP5 models. On the assumption that both the CMIP5 model ERF estimates and the IPCC AR6 best estimates of ERFs are accurate, it follows that:

  • CMIP5 models are on average 1.4 to 1.5 times as sensitive as the real climate system was to greenhouse gas and other forcings over 1970–2020[2]; and
  • CMIP5 models would have over-warmed by 40–50% if their ERF change over that period had  been in line with reality.

It seems clear that the ERF change in CMIP5 models over 1970–2020 was substantially less than the IPCC AR6 best estimate, and that CMIP5 models substantially overestimated the sensitivity of the climate system during that period to changes in ERF. Moreover, the divergence is increasing: the ratio of AR6 to CMIP5 model ERF changes is slightly higher if the comparison is extended to 2022.

In conclusion, Maarten Keulemans’ claim that “a simple check of the models versus observed warming demonstrates that “climate models published since 1973 have generally been quite skillful predicting future warming” is false.

Contrary to the impression given by Zeke Hausfather’s rather misleading graph, CMIP5 models have not been at all skillful in predicting future warming; they have matched the illustrated 1970–2020 observed warming (which was past rather than future warming until the late 2000s, when CMIP5 models were still being tuned) due to their over-sensitivity being cancelled out by their use of ERF that increased much less than the IPCC’s latest best estimates of the actual ERF increase.

Nic Lewis               5 September 2023


[1] ex FGOALS-s2, the Historical and RCP simulations of which were subsequently withdrawn from the CMIP5 archive.

[2] There are some caveats to the conclusion that CMIP5 models were oversensitive by a factor of 1.4 to 1.5 times:

  • the ensemble of CMIP5 models used in Forster et al. (2013) might not have been a representative subset of the entire set of CMIP5 models. However, there appears to be little or no evidence suggesting that is the case;
  • despite their careful compilation, the AR6 best estimates of the evolution of ERF might be inaccurate;
  • the CMIP5 model forcings derived by Forster et al. (2013) might be inaccurate. There are reasons to suspect that their method might produce ERF estimates that are up to about 10% lower than the methods used for IPCC AR6. However, Forster et al. present some evidence in favour of the accuracy of their method. Moreover, the agreement in Figure 3 between the CMIP5 and AR6 ERF time series between 1983 and 2007 (with divergences before and after then largely attributed to differences in particular forcing agents) is further evidence suggesting that the Forster et al. (2013) CMIP5 ERF estimates are fairly accurate; and
  • due to the heat capacity of the ocean mixed layer, GMST is more closely related to average ERF exponentially-decayed over a few years rather than to ERF in the same year. Using exponentially-decayed ERFs would somewhat reduce the 1.4 low end estimate given above for the ratio of AR6 to CMIP5 model ERF 1970–2020 increase estimates, perhaps by ~10%.

SITYS: Climate models do not conserve mass or energy

From Roy Spencer, PhD.

August 21st, 2023 by Roy W. Spencer, Ph. D.

See, I told you so.

One of the most fundamental requirements of any physics-based model of climate change is that it must conserve mass and energy. This is partly why I (along with Danny Braswell and John Christy) have been using simple 1-dimensional climate models that have simplified calculations and where conservation is not a problem.

Changes in the global energy budget associated with increasing atmospheric CO2 are small, roughly 1% of the average radiative energy fluxes in and out of the climate system. So, you would think that climate models are sufficiently carefully constructed that, without any global radiative energy imbalance imposed on them (no “external forcing”), they would not produce any temperature change.

It turns out, this isn’t true.

Back in 2014 our 1D model paper showed evidence that CMIP3 models don’t conserve energy, as evidenced by the wide range of deep-ocean warming (and even cooling) that occurred in those models despite the imposed positive energy imbalance the models were forced with to mimic the effects of increasing atmospheric CO2.

Now, I just stumbled upon a paper from 2021 (Irving et al., A Mass and Energy Conservation Analysis of Drift in the CMIP6 Ensemble) which describes significant problems in the latest (CMIP5 and CMIP6) models regarding not only energy conservation in the ocean but also at the top-of-atmosphere (TOA, thus affecting global warming rates) and even the water vapor budget of the atmosphere (which represents the largest component of the global greenhouse effect).

These represent potentially serious problems when it comes to our reliance on climate models to guide energy policy. It boggles my mind that conservation of mass and energy were not requirements of all models before their results were released decades ago.

One possible source of problems is the model “numerics”… the mathematical formulas (often “finite-difference” formulas) used to compute changes in all quantities between gridpoints in the horizontal, levels in the vertical, and from one time step to the next. Minuscule errors in these calculations can accumulate over time, especially if physically impossible negative mass values are set to zero, causing “leakage” of mass. We don’t worry about such things in weather forecast models that are run for only days or weeks. But climate models are run for decades or hundreds of years of model time, and tiny errors (if they don’t average out to zero) can accumulate over time.
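
As a toy illustration of the kind of leakage being described (my own example, not code from any GCM), consider a mass-conserving advection scheme whose dispersive undershoots are clipped to zero each step; the clipping quietly creates mass that accumulates over thousands of time steps.

```python
# Toy example of spurious mass creation from clipping negative values
# (illustrative only; real GCM numerics are far more complex).
import numpy as np

n_cells, courant, n_steps = 200, 0.5, 2000
q = np.zeros(n_cells)
q[40:80] = 1.0                       # top-hat tracer distribution on a periodic domain
initial_mass = q.sum()

for _ in range(n_steps):
    q_plus, q_minus = np.roll(q, -1), np.roll(q, 1)      # periodic neighbours q[i+1], q[i-1]
    # Lax-Wendroff advection: exactly mass-conserving on a periodic domain,
    # but dispersive, so it undershoots below zero near the sharp edges.
    q = q - 0.5 * courant * (q_plus - q_minus) + 0.5 * courant**2 * (q_plus - 2.0 * q + q_minus)
    q = np.maximum(q, 0.0)           # the "fix": zero out physically impossible negatives

drift = (q.sum() - initial_mass) / initial_mass
print(f"Relative mass drift after {n_steps} steps: {drift:.2%}")   # nonzero: mass has leaked in
```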

The 2021 paper describes one of the CMIP6 models where one of the surface energy flux calculations was found to have missing terms (essentially, a programming error). When that was found and corrected, the spurious ocean temperature drift was removed. The authors suggest that, given the number of models (over 30 now) and number of model processes being involved, it would take a huge effort to track down and correct these model deficiencies.

I will close with some quotes from the 2021 J. of Climate paper in question.

“Our analysis suggests that when it comes to globally integrated OHC (ocean heat content), there has been little improvement from CMIP5 to CMIP6 (fewer outliers, but a similar ensemble median magnitude). This indicates that model drift still represents a nonnegligible fraction of historical forced trends in global, depth-integrated quantities…”

“We find that drift in OHC is typically much smaller than in time-integrated netTOA, indicating a leakage of energy in the simulated climate system. Most of this energy leakage occurs somewhere between the TOA and ocean surface and has improved (i.e., it has a reduced ensemble median magnitude) from CMIP5 to CMIP6 due to reduced drift in time-integrated netTOA. To put these drifts and leaks into perspective, the time-integrated netTOA and systemwide energy leakage approaches or exceeds the estimated current planetary imbalance for a number of models.

“While drift in the global mass of atmospheric water vapor is negligible relative to estimated current trends, the drift in time-integrated moisture flux into the atmosphere (i.e., evaporation minus precipitation) and the consequent nonclosure of the atmospheric moisture budget is relatively large (and worse for CMIP6), approaching/exceeding the magnitude of current trends for many models.”

Arctic Ice: A History of Failed Predictions

From Watts Up With That?

The narrative surrounding Arctic sea ice has been one of consistent warning, punctuated by a series of prediction failures that have spanned decades. Scientists have long been forecasting the demise of Arctic summer ice, but their deadlines have continually passed, leaving us with a track record of failed predictions. The latest claim is no different, suggesting that it’s too late now to save Arctic summer ice. But as we’ve seen, the timeline for these forecasts can shift considerably and unpredictably.

In the recent study led by Prof Seung-Ki Min of Pohang University, South Korea, and Prof Dirk Notz, of the University of Hamburg, Germany, they assert that the Arctic will be ice-free in September in the coming decades. However, it’s worth noting that projections of this nature have been made before and subsequently revised. The once-dreaded ‘first ice-free summer’ was initially predicted to be in 2012, but then fluctuated back and forth for years. This kind of time-hopping has led to considerable skepticism and has undermined the credibility of such predictions.

https://www.nature.com/articles/s41467-023-38511-8

Abstract

The sixth assessment report of the IPCC assessed that the Arctic is projected to be on average practically ice-free in September near mid-century under intermediate and high greenhouse gas emissions scenarios, though not under low emissions scenarios, based on simulations from the latest generation Coupled Model Intercomparison Project Phase 6 (CMIP6) models. Here we show, using an attribution analysis approach, that a dominant influence of greenhouse gas increases on Arctic sea ice area is detectable in three observational datasets in all months of the year, but is on average underestimated by CMIP6 models. By scaling models’ sea ice response to greenhouse gases to best match the observed trend in an approach validated in an imperfect model test, we project an ice-free Arctic in September under all scenarios considered. These results emphasize the profound impacts of greenhouse gas emissions on the Arctic, and demonstrate the importance of planning for and adapting to a seasonally ice-free Arctic in the near future.
https://www.nature.com/articles/s41467-023-38511-8

The key takeaway here is that these are projections, models based on certain conditions and parameters. The crux of the matter is the unpredictability of natural phenomena and the myriad factors influencing them.

The new study claims that 90% of the melting is a result of human-caused global heating, and that the remaining 10% is due to natural factors such as variations in the sun’s intensity and emissions from volcanoes. The researchers can’t pinpoint a specific year for the first ice-free summer due to this natural variability in the climate system.

https://www.theguardian.com/environment/2023/jun/06/too-late-now-to-save-arctic-summer-ice-climate-scientists-find

The inherent complexity of the Earth’s climate system, combined with the inability to account for every single variable that influences the melting of Arctic ice, puts these predictions on shaky ground. As has been proven repeatedly over the years, alarmist deadlines for an ice-free Arctic have come and gone, leaving us to ponder the credibility of these predictions. In the realm of science, it’s crucial to distinguish between what we know and what we assume.

Decades of failed predictions about the end of Arctic sea ice should prompt us to view these new findings with a critical eye. As we continue to study and learn about the Earth’s complex climate system, it’s essential to strike a balance between caution, skepticism, and the willingness to reassess our models and predictions.

HT/Hans Erren and strativarius

Introducing the Realitometer

From Watts Up With That?

By Christopher Monckton of Brenchley

A third of a century has passed since 1990, when IPCC made its first predictions of global warming. Over the 400 months since January 1990, IPCC’s original predictions of 0.2-0.5 C warming per decade over the following century (below) have proven grossly excessive.

In 1990, IPCC also predicted that at midrange doubling the CO2 in the air would cause 3 C global warming – the same warming as the predicted warming from a century of anthropogenic emissions from all sources. In 2021, IPCC predicted that warming by doubled CO2 would be 2 to 5 C, with a best estimate of 3 C, ten times the decadal predictions in 1990.

The Realitometer, which will be published each month, shows the real-world global warming per century equivalent since January 1990 from the satellite monthly temperature dataset of the University of Alabama in Huntsville, compared with IPCC’s range of predictions and with the midrange 3.9 C centennial-equivalent warming predicted in the CMIP6 models.

The Realitometer shows a mere 1.33 C/century equivalent real-world warming over a third of a century. The CMIP6 models’ midrange 3.9 C prediction is thus proving to be a shocking 293% of real-world warming. IPCC’s 2-5 C predictions are 150% to 375% of real-world warming. Yet since then IPCC has not reduced its predictions, first made in 1990, to bring them somewhere within range of observed reality.
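
For readers who want to check the arithmetic, here is a minimal sketch (mine, not the author’s actual calculation) of how a centennial-equivalent warming rate can be computed from a monthly anomaly series such as UAH v6 from January 1990 onward; the anomaly series itself must be obtained separately.

```python
# Centennial-equivalent warming rate from a monthly anomaly series (assumed input).
import numpy as np

def centennial_equivalent_trend(monthly_anomalies):
    """OLS trend of a monthly anomaly series, expressed in deg C per century."""
    months = np.arange(len(monthly_anomalies))
    slope_per_month = np.polyfit(months, monthly_anomalies, 1)[0]
    return slope_per_month * 12 * 100          # per month -> per year -> per century

# e.g., ratio of the CMIP6 midrange prediction quoted above to the observed rate:
# overshoot = 3.9 / centennial_equivalent_trend(uah_anomalies_since_jan_1990)
```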

IPCC based its predictions in 1990 on four emissions scenarios A-D. Scenario A was the “business-as-usual” scenario. It assumed substantial growth in annual emissions compared with 1990. Scenario B predicted no growth in annual emissions by now compared with 1990. In reality, annual emissions have grown by more than half since 1990. Emissions to 2021, the last full year for which figures are available, track the business-as-usual scenario A exactly. Yet the warming predicted under Scenario A is simply not occurring.

The Realitometer relies on satellite-measured temperature anomalies because the data are not contaminated, as the terrestrial datasets are, by the urban heat-island effect, the direct warming by emission of heat from cities, which is insufficiently corrected for.

UAH v.6 rather than other satellite datasets (RSS v.4, NOAA v.4 and Washington U v.1) is used because, as Andy May has pointed out in a distinguished column here, UAH alone has corrected the spurious data in the older NOAA-11 to NOAA-14 satellite instruments and because, after that correction, the UAH data conform far more closely than any of the other satellite datasets to the radiosonde data, an independent yardstick.

Month by inexorable month, the Realitometer will show just how absurdly exaggerated were and are the official predictions of global warming on which easily-manipulated governments – in Western nations only – have predicated their economy-destroying net-zero policies. Those policies are based on the notion that at midrange there will be almost three times as much global warming as has been occurring. Yet not one mainstream news medium has reported just how startlingly large the ratio of prediction to reality is proving to be.

Dueling ITCZs

From Watts Up With That?

Guest Post by Willis Eschenbach

Inspired by a comment about modeled rainfall by Dr. Richard Betts over in the Twitterverse, I decided today to look at how well the climate models are able to hindcast historical rainfall amounts and patterns.

I already had the satellite rainfall data from the Tropical Rainfall Measuring Mission (TRMM). So I went over to KNMI and got the Coupled Model Intercomparison Project Phase 6 (CMIP6) climate model rainfall results for the 38 different models in their database.

Let me start with a look at the TRMM satellite data. It extends from 40°N to 40°S. The two graphs below are the same, but the top one is Pacific centered and the bottom one is Atlantic centered.

Figure 1. 18-year average, TRMM annual rainfall, Dec. 1997 – Mar 2015

Of interest in this is the line of rain just north of the Equator in both the Pacific and the Atlantic Oceans. This marks the average location of the Inter-Tropical Convergence Zone (ITCZ). It is a line of semi-permanent thunderstorms located where the northern and southern halves of the atmosphere come together. It forms the ascending part of the great Hadley cell circulation, which rises just north of the equator, moves polewards on both sides, descends over the 30° N/S desert belts, and returns to the ITCZ just north of the Equator. Here’s a cross-section of the Hadley cell circulation.

Figure 2. Cross-section of the ITCZ and the northern and southern Hadley cells.

With that as a prologue, consider the following Pacific-centered maps of some of the model results.

Figure 3. Rainfall model output, CMIP6 models

I’m sure you can see the problem. There are two ITCZs in the model output, one above and one below the Equator.

Now, this is not just a huge problem that’s only found in the modern models. It’s been a problem since there have been climate models. It even has its own name. Here is a comment from 2013 in PNAS:

The double-Intertropical Convergence Zone (ITCZ) problem, in which excessive precipitation is produced in the Southern Hemisphere tropics, which resembles a Southern Hemisphere counterpart to the strong Northern Hemisphere ITCZ, is perhaps the most significant and most persistent bias of global climate models.

That was ten years ago, the problem was old and well-recognized back then, and they still haven’t been able to fix it.

And we’re supposed to totally destroy our current energy source and power the world on unicorn methane based on these garbage Tinkertoy™ climate models? Really? They can’t even hindcast the past!

More to the point, they can’t replicate the Hadley cells, a most basic feature of the global circulation, but they are supposed to be able to predict the future a hundred years out?

Laughable, but also tragic in that governments are passing laws and shafting the poor based on this nonsense.

The problems continue. Here are the monthly rainfall observations from the TRMM, along with the modeled monthly rainfall, for the area 40°N to 40°S.

Figure 4. TRMM (red) and modeled (colored) monthly rainfall values, 40°N/S, Dec 1997 – Mar 2015

Again, you can see the problems. Not only is there no overlap between models and observations, but the models are far from agreeing with each other.

Well, how about the trends? There’s a slight upwards trend in the TRMM data, but what about the models? Here’s a “violin plot” of the model trends per decade, along with the TRMM trend over the period.

Figure 5. Violin plot of the model trends in millimeters per decade, along with a yellow/black line representing the TRMM trend. The width of the violet area at any point represents the proportion of models with trends of the value shown on the vertical (Y) axis. For those familiar with a “density plot”, a violin plot is just two of them back to back.

Again, problems. Not only are the various model trends quite different from each other, but they also don’t even agree as to sign. 17% of them are less than zero, the rest above. Also, the TRMM trend is larger than all but two of the model trends.
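
Below is a minimal sketch of how per-model trends and a violin plot like Figure 5 could be produced (my own sketch, not the author’s code), with synthetic placeholder series standing in for the real per-model 40°N–40°S monthly rainfall averages and the TRMM series.

```python
# Per-model decadal rainfall trends and a violin plot, using synthetic placeholder
# data in place of the real CMIP6 and TRMM monthly 40N-40S means.
import numpy as np
import matplotlib.pyplot as plt

def decadal_trend(monthly_series):
    """OLS trend of a monthly series, in the series' units per decade."""
    t = np.arange(len(monthly_series))
    return np.polyfit(t, monthly_series, 1)[0] * 120      # per month -> per decade

rng = np.random.default_rng(0)
n_months = 208                                            # roughly Dec 1997 - Mar 2015
model_rain = {f"model_{i:02d}": 1400 + rng.normal(0, 30, n_months) for i in range(38)}
trmm_rain = 1400 + 0.05 * np.arange(n_months) + rng.normal(0, 30, n_months)

model_trends = [decadal_trend(series) for series in model_rain.values()]
trmm_trend = decadal_trend(trmm_rain)

fig, ax = plt.subplots()
ax.violinplot(model_trends, showmedians=True)
ax.axhline(trmm_trend, color="black", linestyle="--", label="TRMM trend")
ax.set_ylabel("Rainfall trend (mm/decade)")
ax.set_xticks([])
ax.legend()
plt.show()
```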

Conclusion:

Anyone who seriously believes one word that the models say about rainfall is either a climate alarmist or a fool … but I repeat myself.

My best to all,

w.

As Always: I ask that when you comment, you quote the exact words you are referring to so we can all be clear about your subject and who said it.