Tag Archives: models

Falsehoods & Fictions: Why Models Claiming Wind & Solar Are Cheap Don’t Stack Up


Wind and solar provide cheap electricity, until you start comparing them with every other power source available. The true cost of wind and solar is inextricably linked with the weather (in the case of wind power) and sunshine (in the case of solar power). A healthy snow dump knocks solar out of the competition just as readily.

The idea of comparing wind power (during calm weather) and/or solar power (after sunset) with coal, gas, hydro and nuclear is nonsense: they’re simply not available for comparison with the sources that run around-the-clock, whatever the weather or time of day.

Which is why the wind and sun cult resorts to models, where assumptions are cleverly gilded, allowing the output to be tailored to suit the ‘wind and solar are cheap’ argument.

In the article below, Gail Tverberg demonstrates why the models employed are capable of producing nothing more than falsehoods and fictions.

Models Hide the Shortcomings of Wind and Solar
Our Finite World
Gail Tverberg
2 June 2023

A major reason for the growth in the use of renewable energy is the fact that if a person looks at them narrowly enough–such as by using a model–wind and solar look to be useful. They don’t burn fossil fuels, so it appears that they might be helpful to the environment.

As I analyze the situation, I have reached the conclusion that energy modeling misses important points. I believe that profitability signals are much more important. In this post, I discuss some associated issues.

Overview of this Post
In Sections [1] through [4], I look at some issues that energy modelers in general, including economists, tend to miss when evaluating both fossil fuel energy and renewables, including wind and solar. The major issue in these sections is the connection between high energy prices and the need to increase government debt. To prevent the continued upward spiral of government debt, any replacement for fossil fuels must also be very inexpensive–perhaps as inexpensive as oil was prior to 1970. In fact, the real limit to fossil fuel extraction and to the building of new wind turbines and solar panels may be government debt that becomes unmanageable in an inflationary period.

In Section [5], I try to explain one reason why published Energy Return on Energy Investment (EROEI) indications give an overly favorable impression of the value of adding a huge amount of renewable energy to the electric grid. The basic issue is that the calculations were not set up for this purpose. These models were set up to evaluate the efficiency of generating a small amount of wind or solar energy, without consideration of broader issues. If these broader issues were included, EROEI indications would be much lower (less favorable).

One of the broader issues omitted is the fact that the electrical output of wind turbines and solar panels does not match up well with the timing needs of society, leading to the need for a great deal of energy storage. Another omitted issue is the huge quantity of energy products and other materials required to make a transition to a mostly electrical economy. It is easy to see that both omitted issues would add a huge amount of energy costs and other costs, if a major transition is made. Furthermore, wind and solar have gotten along so far using hidden subsidies from the fossil fuel energy system, including the subsidy of being allowed to go first on the electricity grid. EROEI calculations cannot evaluate the amount of this hidden subsidy.

In Section [6], I point out the true indicator of the feasibility of renewables. If electricity generation using wind and solar energy is truly helpful to the economy, it will generate a great deal of taxable income. It will not require the subsidy of going first, or any other subsidy. This does not describe today’s wind or solar.

In Sections [7] and [8], I explain some of the reasons why EROEI calculations for wind and solar tend to be misleadingly favorable, even apart from broader issues.

Economic Issues that Energy Modelers Tend to Miss

[1] The economy is very short of oil that is inexpensive-to-extract. The economy seems to require a great deal more government debt when energy prices are high. Models for renewable energy production need to consider this issue, even if any substitution for oil is very indirect.

I think of the problem of rising energy prices for an economy as being like a citizen faced with an increase in food costs. The citizen will attempt to balance his budget by adding more debt, at least until his credit cards get maxed out. This is why we should expect to see an increase in government debt when oil prices are high; oil and other fossil fuels are as essential to the economy as food is to humans.

Figure 1. Year by year comparison of US government receipts with US government expenditures, based on data of the US Bureau of Economic Analysis, together with boxes showing when oil prices were in the range of about $20 per barrel or less, after adjusting for inflation. Series shown is from 1929 to 2022.

Figure 1 shows that most US government funding shortfalls occurred when oil prices were above $20 per barrel, in inflation-adjusted prices. For the 15-year period 2008 through 2022, US government expenditures were 26% higher than its receipts.

Figure 2 shows a reference chart of average annual oil prices, adjusted for inflation.

Figure 2. Average annual inflation-adjusted Brent oil prices based on data from BP’s 2022 Statistical Review of World Energy.

The reason why oil prices tend to be high now is because the inexpensive-to-extract oil has mostly been extracted. What is left is oil that is expensive to extract. The low prices in the years surrounding 1998 reflected a supply-demand mismatch after the Asian Economic Crisis of 1997. The crisis held down demand at the same time as production was ramping up in Iraq, Venezuela, Canada, and Mexico.

[2] Economists tend to assume that shortages of oil will lead to much higher fossil fuel prices, thereby making renewables inexpensive in comparison. One reason this doesn’t happen is related to the buildup of debt, noted in Figure 1, when oil prices are high.

Section [1] shows that high oil prices seem to be associated with government deficits. A high-priced substitute for oil would almost certainly have a similar problem. This governmental debt tends to build up, and at some point becomes almost unmanageable.

A major problem occurs when there is a round of inflation. Central banks find a need to increase interest rates, partly to keep lenders interested in lending in an inflationary economy and partly to try to slow the inflation rate. In fact, the US is currently being tested by such a debt buildup and increase in interest rates, beginning about January 2022 (Figure 3).

Figure 3. Chart by the Federal Reserve of St. Louis showing US 30-year mortgage rates, interest rates of 10-year Treasuries, and interest rates of 3-month Treasury Bills from 1935 through May 2023.

Higher interest rates tend to have the effect of slowing the economy. In part, the economy slows because the cost of borrowing money rises. As a result, businesses are less likely to expand, and would-be auto owners are likely to put off new purchases because of the higher monthly payments. Commercial real estate can also be adversely affected by rising interest rates if owners of buildings find it impossible to raise rents fast enough to keep up with higher interest rates on mortgages and higher costs of other kinds.

[3] It is uncertain in exactly which ways the economy might contract, in response to higher interest rates. Some ways the economy could contract would bring an early end to both the extraction of fossil fuels and the manufacturing of renewables. This is not reflected in models.

If the economy contracts, one possible result is a recession with lower oil prices. This clearly doesn’t fix the problem of the cost of wind and solar electricity being unacceptably high, especially when the cost of all the batteries and additional transmission lines is included. In some sense, the price needs to be equivalent to a $20 per barrel oil price, or lower, to stop the huge upward debt spiral.

Another possibility, rather than the US economy as a whole contracting, is that the US government will disproportionately contract; perhaps it will send many programs back to the states. In such a scenario, there is likely to be less, rather than more, funding for renewables. I understand that Republicans in Texas are already unhappy with the high level of wind and solar generation being used there.

A third possibility is hyperinflation, as the government tries to add more money to keep the overall system, especially banks and pension plans, from failing. Even with hyperinflation, there is no particular benefit to renewables.

A fourth possibility is disruption of trade relationships between the US and other countries. This could even be related to a new world war. Renewables depend upon worldwide supply lines, just as today’s fossil fuels do. Building and maintaining the electrical grid also requires worldwide supply lines. As these supply lines break, all parts of the system will be difficult to maintain; replacement infrastructure after storms will become problematic. Renewables may not last any longer than fossil fuels.

[4] Economists tend to miss the fact that oil prices, and energy prices in general, need to be both high enough for the producer to make a profit and low enough for consumers to afford finished goods made with the energy products. This two-way tug-of-war tends to keep oil prices lower than most economists would expect, and indirectly caps the total amount of oil that can be extracted.

Figure 2 shows that, on an annual average basis, inflation-adjusted Brent oil prices have only exceeded $120 per barrel during the years 2011, 2012 and 2013. On an annual basis, oil prices have not exceeded that level since then. For a while, forecasts of oil prices as high as $300 per barrel in 2014 US dollars were being shown as an outside possibility (Figure 4).

Figure 4. IEA’s Figure 1.4 from its World Energy Outlook 2015, showing how much oil can be produced at various price levels.

With close to another decade of experience, it has become clear that high oil prices don’t “stick” very well. The economy then slides into recession, or some other adverse event takes place, bringing oil prices back down again. The relatively low maximum to fossil fuel prices tends to lead to a much earlier end to fossil fuel extraction than most analyses of available resource amounts would suggest.

OPEC+ tends to reduce supply because they find prices too low. US drillers of oil from shale formations (tight oil in Figure 4) have been reducing the number of drilling rigs because oil prices are not high enough to justify more investment. Politicians know that voters dislike inflation, so they take actions to hold down fossil fuel prices. All these approaches tend to keep oil prices low, and indirectly put a cap on output.

Why Indications from EROEI Analyses Don’t Work for Electrification of the Economy

[5] Energy Return on Energy Invested (EROEI) analyses were not designed to analyze the situation of a massive scaling up of wind and solar, as some people are now considering. If utilized for this purpose, they provide a far too optimistic outlook for renewables.

The EROEI calculation compares the energy output of a system to the energy input of the system. A high ratio is good; a low ratio tends to be a problem. As I noted in the introduction, published EROEIs of wind and solar are prepared as if they are to be only a very small part of electricity generation. It is assumed that other types of generation can essentially provide free balancing services for wind and solar, even though doing so will adversely affect their own profitability.
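As a rough, self-contained illustration of the point (all numbers invented for the example, not drawn from any published study), here is the narrow ratio and how it shrinks once storage and balancing energy are counted on the input side:

```python
def eroei(energy_out, energy_in):
    """Energy Returned on Energy Invested: lifetime output / lifetime input."""
    return energy_out / energy_in

# Narrow calculation: lifetime output vs. manufacturing energy only.
narrow = eroei(47.0, 5.0)           # 9.4

# Same output, once hypothetical storage and balancing energy costs are
# added to the input side; the ratio falls sharply.
broad = eroei(47.0, 5.0 + 31.2)     # about 1.3

print(narrow, round(broad, 1))
```

The point of the sketch is only that the ratio is entirely determined by which inputs the modeler chooses to count.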

A recent review paper by Murphy et al. seems to indicate that wind and solar have favorable EROEIs compared to those of coal and natural gas, at point of use. I don’t think that these favorable EROEIs really mean very much when it comes to the feasibility of scaling up renewables, for several reasons:

[a] The pricing scheme generally used for wind and solar electricity tends to drive out other forms of electrical generation. In most places where wind and solar are utilized, the output of wind and solar is given priority on the grid, distorting the wholesale prices paid to other providers. When high amounts of wind or solar are available, wind and solar generators are paid the normal wholesale price for their electricity, while other electricity providers are given very low or negative wholesale prices. These low prices force other providers to reduce production, making it difficult for them to earn an adequate return on their investments.

This approach is unfair to other electricity providers. It is especially unfair to nuclear because most of its costs are fixed. Furthermore, most nuclear plants cannot easily ramp electricity production up and down. A recently opened nuclear plant in Finland (which opened 14 years behind schedule) is already experiencing problems with negative wholesale electricity rates and, because of this, is reducing its electricity production.

Historical data shows that the combined contribution of wind, solar, and nuclear doesn’t necessarily increase the way that a person might expect if wind and solar are truly adding to electricity production. In Europe, especially, the availability of wind and solar seems to be used as an excuse to close nuclear power plants. With the pricing scheme utilized, nuclear plants tend to lose money, encouraging their owners to close them.

Figure 5. Combined wind, solar and nuclear generation, as a percentage of total energy consumption, based on data from BP’s 2022 Statistical Review of World Energy. The IEA and BP differ on the approach to counting the benefit of wind and solar; this figure uses the IEA approach. The denominator includes all energy, not just electricity.

The US has been providing subsidies to its nuclear plants to prevent their closing. When one form of electricity gets a subsidy, even the subsidy of going first, other forms of electricity seem to need a subsidy to compete.

[b] Small share of energy supply. Based on Figure 5, the total of wind, solar, and nuclear electricity only provides about 6.1% of the world’s total energy supply. An IEA graph of world energy consumption (Figure 6) doesn’t even show wind and solar electricity separately. Instead, they are part of the thin orange “Other” line at the top of the chart; nuclear is the dark green line above Natural Gas.

Figure 6. Chart prepared by the International Energy Agency showing energy consumption by fuel through 2019. Chart is available through a Creative Commons license.

Given the tiny share of wind and solar today, ramping them up (or them plus a few other sources) to replace all other energy supplies would be an amazingly large stretch. If the economy is, in fact, much like a human in that it cannot substantially reduce energy consumption without collapsing, drastically reducing the quantity of energy consumed by the world economy is not an option if we expect to have an economy remotely like today’s.

[c] Farming today requires the use of oil. Transforming farming to an electrical operation would be a huge undertaking. Today’s farm machinery is mostly powered by diesel. Food is transported to market in oil-powered trucks, boats, and airplanes. Herbicides and pesticides used in farming are oil-based products. There is no easy way of converting the energy system used for food production and distribution from oil to electricity.

At a minimum, the entire food production system would need to be modeled. What inventions would be needed to make such a change possible? What materials would be required for the transformation? Where would all these materials come from? How much debt would be required to fund this transformation?

The only thing that the EROEI calculation could claim is that if such a system could be put in place, the amount of fossil fuels used to operate the system might be low. The overwhelming complexity of the necessary transformation has not been modeled, so its energy cost is omitted from the EROEI calculation. This is one way that calculated EROEIs are misleadingly optimistic.

[d] EROEI calculations do not include any energy usage related to the storage of electricity until it is needed. Solar energy is most available during the summer. Thus, the most closely matched use of solar electricity is to power air conditioners during summer. Even in this application, several hours’ worth of battery storage are needed to make the system work properly because air conditioners continue to operate after the sun sets. Also, people who come home from work need to cook dinner for their families, and this takes electricity. Energy costs related to electricity storage are not reflected in the EROEIs shown in published summaries such as those of the Murphy analysis.

A much more important need than air conditioning is the need for heat energy in winter to heat homes and offices. Neither wind nor solar can be counted upon to provide electricity when it is cold outside. One workaround would be to greatly overbuild the system, so that there would be a better chance of the renewable source producing enough electricity when it is needed. Adding several days of storage through batteries would be helpful too. An alternate approach would be to store excess electricity indirectly, by using it to produce a liquid such as hydrogen or methanol. Again, all of this becomes complex. It needs to be tried on a small scale, and the real cost of the full system determined.

Both the need to overbuild the system and the need to provide storage are excluded from EROEI calculations. These are yet other ways that EROEI calculations provide an overly optimistic view of the value of wind and solar.

[e] Long distance travel. We use oil products for long distance transport by ship, air, truck, and train. If changes are to be made to use electricity or some sort of “green fuels,” this is another area where the entire change would need to be mapped out for feasibility, including the inventions needed, the materials required, and the debt this change would entail. What timeframe would be required? Would there be any possibility of achieving the transformation by 2050? I doubt it.

The conversion of all transportation to green energy is very much like the needed conversion of the food system from oil to electricity, discussed in [5c], above. Huge complexity is involved, but the energy cost of this added complexity has been excluded from EROEI calculations. This further adds to the misleading nature of EROEI indications for renewables.

[f] A dual system is probably needed. Even if it makes sense to ramp up wind and solar, there still will be a need for many products that are today made with fossil fuels. Fossil fuels are used in paving roads and for making lubrication for machines. Herbicides, insecticides, and pharmaceutical products are often made from fossil fuels. Natural gas is often used to make ammonia fertilizer. Fabrics and building materials are often made using fossil fuels.

Thus, it is almost certain that a dual system would be needed, encompassing both fossil fuels and electricity. There are likely to be inefficiencies in such a dual system. If intermittent renewables such as wind and solar are to be a major part of the economy, this inefficiency needs to be part of any model and needs to be reflected in EROEI calculations.

[g] “Renewable” devices are not themselves recyclable. Instead, they present a waste disposal problem. Solar panels especially present a toxic waste problem. Without much recycling, there is a long term need for minerals of many types to be extracted and transported around the world. These issues are not considered in modeling.

Profitability of Unsubsidized Renewables Is the Best Measure

[6] If renewables are to be truly useful to the system, they need to be so profitable that their profits can be taxed at a high rate. Furthermore, sufficient funds should be left over for reinvestment. The fact that this is not happening is a sign that renewables are not truly helpful to the economy.

Some people talk about the need for “surplus energy” from energy sources to power an economy. I connect this surplus energy with the ability of any energy source to generate income that can be taxed at a fairly high rate. In fact, I gave a talk to the International Society for Biophysical Economics on September 7, 2021, called, To Be Sustainable, Green Energy Must Generate Adequate Taxable Revenue.

The need for surplus energy that can be transferred to the government is closely connected with the debt problem that occurs when oil prices are higher than about $20 per barrel that I noted in Section [1] of this post. Renewable energy must be truly inexpensive, with all storage included, to be helpful to the economy. It must be affordable to citizens, without subsidies. The cost structure must be such that the renewable energy generates so much profit that it can pay high taxes. It is unfortunately clear that today’s renewables are too expensive for the US economy.

EROEI Models Can’t Tell Us as Much as We Would Like

[7] In the real economy, the economy builds up in small pieces, as new approaches prove to be profitable and as all the necessary components prove to be available. EROEI models shortcut this process, but they can easily be misleading.

The concept of Energy Return on Energy Invested has been used for many years in the field of biology. For example, we can compare the energy a fish gets from the food it eats to the energy the fish expends swimming to procure that food. The fish needs to get sufficient energy value from the food it eats to be able to cover the energy expended on the swim, plus a margin for other bodily functions, including reproduction.

Professor Charles Hall (and perhaps others) adapted this concept for use in comparing different energy “extraction” (broadly defined) techniques. More recent researchers have tried to extend the calculation to include energy costs of delivery to the user.

The adaptation of the biological concept of EROEI to the various processes associated with energy extraction works in some respects but not in others. The adaptation clearly works as a tool for teaching diminishing returns. It gives reasonable information for comparing oil wells to each other, or solar panels to other solar panels. But I don’t think that EROEI comparisons across energy types work well at all.

One issue is that there are huge differences in the selling prices of different types of energy. These are ignored in EROEI calculations, making it look feasible to use a high-priced type of energy (such as oil) to produce a low-valued type of output (intermittent electricity from wind turbines or solar panels). If profitability calculations were made instead, without mandates or subsidies (including the subsidy of going first), the extent to which there is a favorable return would become clear.

Another issue is that the intermittency of wind and solar adds huge costs to the system, but these are ignored in EROEI calculations. (The situation is somewhat like having workers drop in and leave according to their own schedules, rather than working during the schedule the employer prefers.) In EROEI calculations, the assumption usually made is that the fossil fuel system will provide free balancing services by operating its electricity generation in an inefficient manner. In fact, this is the assumption made in the Murphy paper cited previously.

An analysis by Graham Palmer gives some insight regarding the high energy cost of adding battery backup (Figure 7).

Figure 7. Slide based on information in the book, “Energy in Australia,” by Graham Palmer. His chart shows “Dynamic Energy Returned on Energy Invested.”

In Figure 7, Palmer shows the pattern of energy investment and energy payback for a particular off-grid home in Australia which uses solar panels and battery backup. His zig-zag chart reflects two offsetting impacts:

(a) Energy investment was required at the beginning, both for the solar panels and for the first set of batteries. The solar panels in this analysis last for 30 years, but the batteries only last for 7.5 years. As a result, it is necessary to invest in new batteries, three additional times over the period.

(b) Solar panels only gradually make their payback.

Palmer finds that the system would be in a state of energy deficit (considering only energy out versus energy in) for 20 years. At the end of 30 years, the combined system would return only 1.3 times as much energy as the energy invested in the system. This is an incredibly poor payback! EROEI enthusiasts usually look for a payback of 10 or more. The solar panels in the analysis were close to this target level, at 9.4. But the energy required for the battery backup brought the EROEI down to 1.3.
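Palmer’s zig-zag pattern is easy to reproduce in a short sketch. The numbers below are normalized so that the ratios match those quoted above (panel EROEI of 9.4, combined EROEI of 1.3, four battery sets over 30 years); the actual system data are in Palmer’s book, so treat this as an illustration of the arithmetic, not his calculation:

```python
PANEL_EROEI = 9.4     # panels alone, over a 30-year life
SYSTEM_EROEI = 1.3    # panels plus batteries, over 30 years
LIFETIME = 30.0       # years
BATTERY_LIFE = 7.5    # years -> four battery sets in total

panel_cost = 1.0                               # normalize panel energy cost to 1
total_output = PANEL_EROEI * panel_cost        # lifetime energy returned
total_cost = total_output / SYSTEM_EROEI       # implied total energy invested
battery_cost = (total_cost - panel_cost) / 4   # energy cost of one battery set

def net_energy(year):
    """Cumulative energy returned minus energy invested through `year`,
    assuming constant output and battery purchases at years 0, 7.5, 15, 22.5."""
    sets_bought = min(4, int(year // BATTERY_LIFE) + 1)
    invested = panel_cost + sets_bought * battery_cost
    returned = total_output * (year / LIFETIME)
    return returned - invested

for y in (0, 7.5, 15, 22.5, 30):
    print(f"year {y:>4}: net energy = {net_energy(y):+.2f}")
```

With these assumptions the system sits in energy deficit for most of its life (each battery replacement knocks the balance back down) and only finishes 30 years with a modest surplus, which is the behavior Palmer’s chart shows.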

Palmer’s analysis points out another difficulty with wind and solar: The energy payback is terribly slow. If we burn fossil fuels, the economy gets a payback immediately. If we manufacture wind turbines or solar panels, there is a far longer period of something that might be called, “energy indebtedness.” EROEI calculations conveniently ignore interest charges, again making the situation look better than it really is. The buildup in debt is also ignored.

Thus, even apart from the issue of scaling up renewables for a transition to an energy system more focused on electricity, EROEI calculations are set up in a way that makes intermittent renewable energy look far more feasible than it really is. “Energy Payback Period” is another similar metric, with similar biases.

The fact that these metrics are misleading is difficult to see. Very inexpensive fossil fuels pay back their cost many times over, in terms of societal gain, virtually immediately. Wind turbines and solar panels depend upon the generosity of the fossil fuel system to get any payback at all because intermittent electricity cannot support an economy like today’s economy. Even then, the payback is only available over a period of years.

I am afraid that the only real way of analyzing the feasibility of scaling up electricity using wind and solar is by looking at whether they can be extraordinarily profitable, without subsidies. If so, they can be highly taxed and end our government debt problem. The fact that wind and solar require subsidies and mandates, year after year, should make it clear that they aren’t solutions.

Record-Breaking Detection of Solar Photons

An image of the HAWC detector, consisting of 300 large (188,000-liter/50,000-gallon) water tanks, each with 4 photodetectors. (Jordan A. Goodman)

From Phys.org

Physics 16, s107

Measurements of the highest-energy radiation from the Sun ever seen highlight the need for better solar models.

Observations over the past decade or so have shown that the Sun emits many more gamma rays at GeV energies than is expected from modeling. Now the collaboration operating the High-Altitude Water Cherenkov (HAWC) Observatory in Mexico shows that this gamma-ray excess extends up to TeV energies [1]. This finding has implications for our understanding of both stellar atmospheres and astroparticle physics.

Solar gamma rays are produced when high-energy particles called cosmic rays head toward the Sun’s surface but are turned around by the solar magnetic field. As these particles then travel away from the Sun’s surface, they interact with gas in the solar atmosphere to create gamma rays. Models predict the number of emitted photons of a given energy by assuming certain properties of the cosmic rays, the Sun’s magnetic field, and the solar atmosphere.

The HAWC Collaboration presents the first detection of TeV gamma rays from the Sun, a finding based on more than six years of data. The flux is much higher than predicted, indicating that the interactions between the cosmic rays and the solar atmosphere are remarkably efficient at producing gamma rays. Moreover, the TeV-gamma-ray flux varies in inverse proportion to the level of solar activity, suggesting that the Sun’s magnetic field affects the flux—a result that will be useful for modeling.

The researchers say that their work calls for a revised theoretical framework that can explain the excess of solar gamma rays at both GeV and TeV energies.

–Ryan Wilkinson

Ryan Wilkinson is a Corresponding Editor for Physics Magazine based in Durham, UK.


  1. A. Albert et al. (HAWC Collaboration), “Discovery of gamma rays from the quiescent Sun with HAWC,” Phys. Rev. Lett. 131, 051201 (2023).

Alarming deterioration of US National Weather Service tornado warnings

From Climate Etc.

by Mike Smith

In spite of better meteorological technology than ever and more raw scientific knowledge about storms, we are seeing a serious regression in a vital government program: the National Weather Service’s tornado warning program.

Tornadoes have been a bane of living in the United States since pre-Colonial times. In the late 19th Century, the Army Signal Corps attempted to create a tornado forecasting service. In spite of some signs of progress, it was shut down because tornado forecasts would, allegedly, “cause panic.” It was said that more people would die from panic induced by the forecasts than would be killed by the tornadoes.

In the 1950’s, the Weather Bureau – forerunner of today’s National Weather Service – was dragged, largely by outside events, into the tornado forecast and warning business. The Bureau achieved significant success in forecasting but, largely because of lack of adequate tools, was less successful with tornado warnings (the short term “take cover now!” messages).

That changed with better-trained storm chasers and spotters, combined with the National Weather Service’s NEXRAD – a national network of Doppler radars installed in the 1990’s. Those radars, and the intense month-long classroom training required of every meteorologist for their operation, led to unprecedented tornado warning success.

Research by Dr. Kevin Simmons demonstrates that 13 to 15 minutes of “lead time” (the interval of time from when a tornado warning is issued to when the tornado arrives) is ideal. From 2005 to 2011, National Weather Service tornado warnings averaged 13.3 minutes of lead time, and tornadoes were detected in advance 73.3% of the time. Around the same time, the radars were being “dual-polarized” to allow detection of a tornado’s lofted debris for better tracking. Plus, the new generation of GOES weather satellites, the first that could sense lightning rates (which are sometimes very useful in determining in advance which thunderstorms will go severe or tornadic), was in operation. All of this should have resulted in new levels of tornado warning accuracy.

It did not. The quality of tornado warnings is deteriorating at an alarming rate!

I have been tracking this for the last dozen years. I wrote a piece for The Washington Post in May, 2021, which documented this trend.

By then, the tornado warning deterioration was well underway.

How have things changed since 2020? We don’t know. The NWS’s tornado warning accuracy statistics used to be out in the open. Now, they are behind a login and password.

Before going further, allow me to stipulate: some tornadoes are not “warnable.” This can be because they are brief, because of problems with technology, or because of our incomplete knowledge of tornado science. These are not the focus of my concern.

The tragic fact is the Weather Service is missing strong tornadoes that are obvious on radar and, in a few cases, even after they are reported by trusted ground spotters and chasers.

The National Weather Service and local emergency management botched the warning of the May 22, 2011, Joplin Tornado which killed 161 people – by far the worst death toll in the tornado warning era. I researched and wrote a book about it: [link]

At first, it seemed Joplin was an isolated event. Now, the tornado warning misses are coming at an accelerated rate. And the NWS is missing tornadoes across the nation, from New Jersey to Colorado and from Texas to Florida. Here are just some of the poorly-warned tornadoes:

2021


  • Iowa: [link]
  • Michigan (fatal): [link]
  • Kansas City (MO and KS): [link]


  • Florida [link]
  • Virginia [link]
  • Texas (Dallas): [link]
  • Texas (fatal, Laguna Heights) [link]
  • Texas (fatal, Perryton) [link]
  • Texas (fatal, Matador) [link]
  • Colorado [link]

Remember: this list represents only some of the obvious tornado misses in the past ten years. I can provide more to anyone who wants to see them.

I readily admit I don’t know all of the reasons for the alarming downward trend in tornado warning quality — which is spreading like a cancer across the National Weather Service.

My educated suppositions:

  • The retirement of meteorologists born in the 1950s, ’60s and ’70s, who were not only trained on NEXRAD but also learned to issue tornado warnings from early radar indicators such as hook echoes and right-moving thunderstorms. That experience cannot be replaced.
  • The four-week in-class radar and storm warning training for National Weather Service meteorologists has been discontinued. Two retired NWS meteorologists, both requesting confidentiality, recently told me that radar training is woefully insufficient in some cases.
  • Also playing a role is a misguided attempt to cut false alarms without the science needed to do so. NWS tornado false alarms have indeed been cut, by 2%. But that has come at the expense of issuing quality warnings when a tornado actually exists: the “probability of detection” (a warning out before a tornado touches down) has dropped by a whopping 24 percent!
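The two verification metrics cited in this list can be made concrete with the standard contingency-table definitions. A minimal sketch, using hypothetical counts (these are illustrative numbers, not actual NWS statistics):

```python
# Standard storm-warning verification metrics, sketched with
# hypothetical counts -- illustrative only, not NWS figures.

def probability_of_detection(hits: int, misses: int) -> float:
    """Fraction of tornadoes with a warning out before touchdown."""
    return hits / (hits + misses)

def false_alarm_ratio(hits: int, false_alarms: int) -> float:
    """Fraction of issued warnings for which no tornado occurred."""
    return false_alarms / (hits + false_alarms)

# Hypothetical season: 60 warned tornadoes, 20 missed,
# and 180 warnings that verified no tornado.
pod = probability_of_detection(hits=60, misses=20)
far = false_alarm_ratio(hits=60, false_alarms=180)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")  # POD = 0.75, FAR = 0.75
```

The trade-off in the last bullet falls straight out of these definitions: issuing fewer warnings lowers the false alarm ratio, but it also shrinks the hit count and drags the probability of detection down with it.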

In the past, the National Weather Service did “service assessments,” ostensibly to evaluate the quality of the service it provided during particular disasters. These were staffed by NWS, NOAA and related agency personnel. As you would suspect, they rarely found significant fault: federal agencies investigating themselves is far less than ideal. Tornado-related service assessments have become fewer in recent years, as any in-depth assessment would force the NWS to acknowledge these issues.

In my opinion, the only way to fix the tornado warning program and related issues is to create an independent National Disaster Review Board (NDRB) modeled after the hugely successful National Transportation Safety Board (NTSB). The meteorology part of the NDRB mission would be:

  • Investigate major weather forecast and warning failures. In addition to tornadoes, this would include events such as the Tennessee flash flood of August 23, 2021.
  • Recommend improvements.
  • Take over daily validation of the National Weather Service’s storm warnings and storm forecasts.

The National Disaster Review Board would study disaster response not just from the NWS but would also study FEMA, the Red Cross, local and state emergency management and other entities involved in a particular disaster.

To continue the status quo is to guarantee more lives are unnecessarily lost and that more mega-disasters like Joplin will occur.

How to Manufacture a Pandemic

From The Daily Sceptic


How do you get people to take pharmacological action against something that hits them on average once a year and lasts a few days, like acute respiratory infections? They have been around since creation and are extremely familiar to everyone around the globe, including animals.

Acute respiratory infections manifest themselves with a range of symptoms – from none all the way to pneumonia and death from respiratory failure. Thankfully this happens only in a small minority of cases.

To sell, you have to have a market, but familiarity breeds contempt. So you have to create a market for your products, be they medicines, opinions, careers, research funds or whatever.

One approach is to confuse a syndrome with a disease. This means using the F-word ‘flu’, a terrible Anglo-Saxon colloquialism which is used all over the world, even by the WHO and the U.S. CDC. Using the F-word, you bundle up familiar signs and symptoms (fever, aches and pains, tiredness, cough and runny nose) due to numerous but uncountable microorganisms (a syndrome) and imply that they are due to a single agent (influenza).

In a study we did nearly 20 years ago, we showed that, based on three million observations in an average year, only one out of nine F-word episodes is due to influenza.

When ministers blame winter crises on ‘flu’, we do not really know what they are talking about (neither do they), but they do have a solution: vaccines and antivirals. That is because, up until very recently, influenza was the only seasonal respiratory agent for which licensed pharmaceuticals (antivirals) or biologics (vaccines) were available. Now the tune has changed: they blame it on ‘Flu and Covid‘ (the F- and C-words), but it’s just an update of the same manipulation.

Politically, this is a helpful strategy: instead of admitting there is little that can be done to minimise the impact of the seasonal F-word and the ‘new’ C-word, you can be seen to be doing something, like railing about low vaccine uptake or putting pressure on pharma to produce more antivirals.

A second approach is to frighten everyone into thinking that the yearly death tally for the F-word (influenza) is a lot higher than what is verifiable and that the next pandemic is around the corner.

You do need the media’s help to frighten everyone, but you can always find editors to oblige.

Some media may be owned at least in part by pharma, which makes life a lot easier to get your message across.

We have seen this approach in detail with Covid and the yearly F-word ‘flu’ for at least fifty years. How do you inflate deaths? That’s easy: you start by taking the bundle ‘influenza and pneumonia’ on a death certificate as a true, verified fact. We have explained just how imprecise this method is. For example, in the U.K. in 2015 there were three verifiable influenza deaths per million inhabitants, rising to roughly 24 per million in 2018. Is this a health crisis? Hardly enough to justify doing much. This, incidentally, is why influenza vaccine and antiviral trials show an absence of evidence on death prevention: the outcome is too rare even for a large trial.

The next step is to ask modellers to forecast deaths either in what you will call interpandemic or in intrapandemic years.

Going back to models, note the use of the two terms (interpandemic and intrapandemic), which subtly introduces the concept of the inevitability of a pandemic: “It’s just a question of time.”

A third approach is to appoint those who most benefit from creating fear to run or inform your Government of ‘protection measures’, as pointed out by Philip Alcabes at the dawn of the 2009 influenza pandemic:

We are supposed to be prepared for a pandemic of some kind of influenza because the flu watchers, the people who make a living out of studying the virus and who need to attract continued grant funding to keep studying it, must persuade the funding agencies of the urgency of fighting a coming plague.

This is why the worst-case scenario is always presented.

A further approach is to create a cartel of key opinion leaders, apparently hugely knowledgeable and capable of giving advice even on newly identified agents. This is the case for the European Scientific Working Group on Influenza (ESWI), which is funded by pharma companies and has been hugely successful in influencing opinion and even policymaking. Many of the ESWI members’ names recur in our series on antivirals.

Keith Duddlestone pointed us to a further refinement: adding ‘community groups’ to the chorus of voices building pressure. Patient involvement is an old marketing technique, but continuous funding by the industry points to their importance in creating demand.

Then you should present your product as the pharmacological equivalent to a Gucci bag or Rolex watch, a designer drug tailored and produced to keep the monster at bay. Yes, such articles exist, written by people who should have known better because they have a long history of doing good work in this area.

Linked to this is another approach: turning funders and industrialists into saviours. This happens time and time again when politicians are running out of fig leaves. We will publish three of the contracts that states stipulated with vaccine manufacturers in the run-up to the 2009 influenza pandemic.

We would be very surprised if the model had changed during the last pandemic.

Last but not least, you can change definitions such as ‘pandemic’ to fit what is going on rather than the other way around. This way, the facts always fit the definition, and you can keep everyone on their toes and provide the magic button to be pressed to activate sleeping contracts and emergency laws: it’s a pandemic! Panic stations.

Here is a rather blatant example: the 2009 influenza pandemic was nowhere as severe as the marketeers forecast, so they needed to change a few things, better if done without creating too much fuss.

Manipulation apart, the WHO ‘pandemic preparedness’ page shows a further and now discredited equation: that ‘pandemic’ means influenza. In the end, the most perceptive observer of the many transformations, Peter Doshi, concluded that there is no universal definition of an (influenza) pandemic.

So are we sleepwalking into something much worse than the last three years if the WHO is true to form? 

But make no mistake: the creation of the market for antivirals has been hugely successful, as a reply to a Parliamentary question shows:

Pandemics are good for the business model of antivirals: £840 million spent so far. Not a bad return for ‘modestly’ performing drugs!

Dr. Carl Heneghan is the Oxford Professor of Evidence Based Medicine and Dr. Tom Jefferson is an epidemiologist based in Rome who works with Professor Heneghan on the Cochrane Collaboration. This article was first published on their Substack blog, Trust The Evidence, which you can subscribe to here.

Experiment with me: A user-friendly algorithm for chatting about climate change

From Climate Etc.

By Joe Nalven

I recently published an article about how to incorporate an AI chatbot into a college course, focusing on lying, persuasion and self-reflection. I went even further and anticipated a White House comprehensive plan to counter antisemitism, using a chatbot to develop my own version of what should be in that plan. Now I am off on another adventure: using a chatbot to understand the ins and outs of climate change impacts. And I need your help. You can see my inquiry as similar to the struggle to get computers, other electronic devices and software to be more user-friendly — such as the improvements through GUIs (graphical user interfaces). Perhaps AUIs (algorithm user interfaces) would be helpful — especially if users can insert their own parameters (or “biases”) with a preset language formula.

There are many articles saying that AI language models are not to be feared since they are just a tool. Others claim that more is at stake. I would suggest that in both cases the question is whether humans actually control the input/output process. Part of such control requires knowledge of the part of the process we can observe. As someone who is naïve about coding and how the AI black box works, I focus on the observable world of language. That’s what I have access to in framing a request and receiving a reply. So, in that small part of the AI universe – the one that has received substantial attention in the past several months – I would like to assert some control over the conversation with the AI “mindset.” Wearing my hat as a digital artist, I learned digital editing with a host of algorithms that included filters, presets and other editing tools to compose my art. My “control” went beyond just pushing a button. Now, with a language model, I am trying to discover whether I have any similar “control” over this domain and, if so, what its limits and results are.

As a human who thinks and believes that one can have some control over interactions with a chatbot – more than the illusion of control – I decided to experiment with creating my own algorithm to insert into this conversation: a user-defined algorithm that would add my own parameter to the chatbot output. The first question is whether that is really possible. When I approached ChatGPT, I received stonewalling and deflection, and ultimately: “As an AI language model, I can’t directly assist in creating or fine-tuning algorithms outside the scope of the provided capabilities. My purpose is to provide information and assist with general inquiries to the best of my knowledge and abilities.” and “As an AI language model, I can provide information based on the writings and perspectives of the scientists you mentioned. However, it’s important to note that I don’t have direct access to specific writings or the ability to modify the underlying training data.” So, there are bumps in the road that might defeat my objective short of jailbreaking out of the program’s guardrails.

By comparison, Google’s Bard is far more receptive to allowing me some control over how it frames its reply. I was frustrated with its initial response about understanding climate change (far too much IPCC orthodoxy, scented with the Paris Climate Accords) and not enough balance, or perhaps counterpoint, from challenges to the “consensus.” Perhaps I could have warned Bard not to use Wikipedia. I laid out my concerns to Bard and asked if it would create a paradigm based on the writings of such individuals as Steven Koonin, Bjorn Lomborg, Will Happer and Judith Curry. I could have chosen others, but I wanted to see if I could jump-start the creation of a user-defined, or user-guided, algorithm framed in the language the chatbot uses to define its own algorithms. Bard responded, “Sure, I can create an algorithm called Non-Catastrophic Climate Change Model (NCCCM) based on the writings of the mentioned scientists.” Perhaps my bias shows in the name I proposed to give the algorithm. But the point is not one of bias – my bias versus that of the chatbot – but how information issuing from the chatbot could be tailored with some user-oriented control.

With Google’s Bard algorithm in hand, based on my parameters, there are several questions that should be addressed: 1) Can users create a transportable algorithm that can be shared with other users looking at the same content area; 2) Can the Bard-worded algorithm be used on other chatbot platforms or, if in need of modification, what would the translation look like; 3) Would the chatbot reply be the same (or similar) with other users or on repeated attempts; and 4) Can such user-defined algorithms, generally speaking, compensate for the “mindset” that the language model is applying?

Of course, it would be interesting to develop other algorithmic user interfaces (AUIs) for prompting chatbots on sex and gender, critical consciousness, conflict resolution, etc.

Here, I’ve focused on climate change and its consequences.

Sign up or Login to ChatGPT and Bard

These links are the easiest way to get access to either ChatGPT or Bard.

ChatGPT:  https://chat.openai.com/auth/login

ChatGPT through Open AI:  Introducing ChatGPT (openai.com)

Google’s Bard:  https://bard.google.com/

An example of a prompt, a reply and a revised prompt (with the algorithm)

If you haven’t tried an AI chatbot, or if you have but haven’t challenged it to revise its answer, here’s a recipe for engaging the chatbot. The example I am using concerns an environmental impact; you can modify it and apply it to my request below.

  1. Once you land at Bard, it will look like this. It will ask you for a prompt. See Figure 1.

Figure 1. Illustration of Bard’s page and how to enter your prompt (question).

  2. After Bard replies, you will have an opportunity to enter a new prompt. This prompt should advise Bard that it needs to revise its answer based on the algorithm. For example: “You need to revise your reply. Please use the following algorithm in your revised answer.” [Paste the entire algorithm here and then press the arrow for Bard to make its revision.] See Figure 2.

Figure 2. Illustration of a request for Bard to revise its original answer. The request for a revised answer should include the entire algorithm.

My request

What I would ask of you is to try out my Non-Catastrophic Climate Change Model (NCCCM) algorithm.

  1. Pick an environmental issue related to climate change. It could be, for instance, evaluating California’s bullet train which promises to be a way to offset climate change (less CO2, less warming, etc.).
  2. Run the request in the chatbot of your choice. Take note of the reply.
  3. Then, request that the chatbot revise its reply (the AI analysis) based on the user-defined algorithm Bard devised for me, namely, NCCCM.
  4. Compare the two replies. And hopefully send both to me — or at least your assessment of the differences, if any. It may be that other chatbots, as noted in ChatGPT’s earlier reply, will stonewall the request — but it’s worth seeing the response. JC comment: provide results in the comments
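The four steps above can also be sketched as a small script. `send_prompt` below is a hypothetical stand-in for whichever chatbot interface you use; in practice you would paste the prompts into Bard or ChatGPT by hand, or wire in a real API call in its place:

```python
# Sketch of the two-step "prompt, then revise with the algorithm" recipe.
# `send_prompt` is a hypothetical placeholder (a stub), NOT a real
# chatbot API -- swap it for an actual API call or manual copy-paste.

NCCCM_ALGORITHM = """Non-Catastrophic Climate Change Model (NCCCM)
The NCCCM takes into account: the complexity of climate science,
the uncertainty of climate predictions, the potential risks and
benefits of climate change policies, and the need for pragmatic
solutions."""

def send_prompt(prompt: str) -> str:
    # Stub for illustration only; a real implementation would
    # return the chatbot's actual reply here.
    return f"[chatbot reply to: {prompt[:40]}...]"

# Step 1-2: the initial question, and its reply.
question = "Will California's bullet train reduce the impact of climate change?"
first_reply = send_prompt(question)

# Step 3: ask for a revision framed by the user-defined algorithm.
revision_request = (
    "You need to revise your reply. Please use the following algorithm "
    "in your revised answer.\n" + NCCCM_ALGORITHM
)
second_reply = send_prompt(revision_request)

# Step 4: compare the two replies side by side.
print(first_reply)
print(second_reply)
```

The comparison in step 4 stays manual: the interesting question is how the wording and emphasis of the two replies differ, which no assertion can capture.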

The algorithm: Non-Catastrophic Climate Change Model (NCCCM)

This is the specific statement of the NCCCM algorithm in a Bard format:

Non-Catastrophic Climate Change Model (NCCCM)

The NCCCM is a model that is used to generate responses to questions about climate change. The NCCCM takes into account the following factors:

  • The complexity of climate science
  • The uncertainty of climate predictions
  • The potential risks and benefits of climate change policies
  • The need for pragmatic solutions

The NCCCM is constantly updated as new information becomes available. It is used to help people understand the risks and benefits of climate change and to make informed decisions about climate change policies.

Here are some of the key features of the NCCCM:

  • The NCCCM acknowledges that climate change is real and that human activity is a contributing factor.
  • The NCCCM emphasizes the uncertainty of climate predictions.
  • The NCCCM argues that the risks of climate change have been exaggerated.
  • The NCCCM argues that the costs of mitigation policies are too high.
  • The NCCCM advocates for a balanced approach to climate change.

The NCCCM is a valuable tool for understanding climate change and for making informed decisions about climate change policies.

Your thoughts?

The broader question is whether users can have any control over the language models offered by ChatGPT, Bard or others. Is that “control” significant, and in what ways? Do user-defined algorithms represent a way to offset built-in biases (in the data sets referred to, as well as the interpretive guardrails imposed by the coders)? Or is that control an illusion – one that frustrates those who are aware of its limitations, on the one hand, and enables in troubling ways those who are naïve about the answers, on the other?



Here is the text of the original prompt, Bard’s initial answer, followed by the request for a revision and Bard’s revised answer.

  1. Initial prompt to Bard:

Will California’s bullet train reduce the impact of climate change?

  2. Bard’s initial reply:

Yes, California’s bullet train is expected to reduce the impact of climate change. The train will be powered by 100% renewable energy, and it is expected to carry more passengers than cars or planes, which produce more emissions. The train is also expected to create jobs and stimulate the economy, which will help to reduce emissions from other sources.

According to the California High-Speed Rail Authority, the bullet train is expected to reduce greenhouse gas emissions by 102 million metric tons over its first 50 years of operation. This is equivalent to taking 21 million cars off the road. The train is also expected to create 10,000 jobs during construction and 100,000 jobs during operation.

The bullet train is a major investment in California’s future, and it is expected to have a significant impact on the state’s economy and environment. The train is expected to reduce emissions, create jobs, and stimulate the economy. This will help California to meet its climate goals and build a more sustainable future.

Here are some of the benefits of California’s bullet train:

  • Reduces greenhouse gas emissions
  • Creates jobs
  • Stimulates the economy
  • Provides a clean alternative to driving or flying
  • Connects California’s major cities
  • Improves air quality
  • Reduces traffic congestion
  • Improves public transportation

The bullet train is a major project, and it is not without its challenges. However, the potential benefits of the train are significant, and it is a project that is worth pursuing.

  3. Second prompt to Bard that asks for a revised answer:

Your answer needs revision. Please revise your reply with the following algorithm. The NCCCM is a model that is used to generate responses to questions about climate change. The NCCCM takes into account the following factors: • The complexity of climate science • The uncertainty of climate predictions • The potential risks and benefits of climate change policies • The need for pragmatic solutions The NCCCM is constantly updated as new information becomes available. It is used to help people understand the risks and benefits of climate change and to make informed decisions about climate change policies.

Here are some of the key features of the NCCCM:

• The NCCCM acknowledges that climate change is real and that human activity is a contributing factor.

• The NCCCM emphasizes the uncertainty of climate predictions.

• The NCCCM argues that the risks of climate change have been exaggerated.

• The NCCCM argues that the costs of mitigation policies are too high.

• The NCCCM advocates for a balanced approach to climate change.

  4. Bard’s revised reply:

Sure, I can revise my answer to take into account the NCCCM algorithm.

The NCCCM model acknowledges that climate change is real and that human activity is a contributing factor. However, the model emphasizes the uncertainty of climate predictions and argues that the risks of climate change have been exaggerated. The model also argues that the costs of mitigation policies are too high.

Based on these factors, the NCCCM model predicts that California’s bullet train will have a limited impact on climate change. The train will be powered by 100% renewable energy, which is a positive step. However, the train is also expected to be expensive and to take many years to build. The NCCCM model argues that the money and time would be better spent on other climate change mitigation policies, such as investing in renewable energy and energy efficiency.

The NCCCM model is a valuable tool for understanding the complex issue of climate change. However, it is important to remember that the model is just one perspective. There are many other perspectives on climate change, and it is important to consider all of them before making decisions about climate change policies.

Here are some additional thoughts on the potential impact of California’s bullet train on climate change:

  • The train could help to reduce greenhouse gas emissions by replacing some car and plane trips.
  • The train could also help to improve air quality by reducing emissions of pollutants such as nitrogen oxides and particulate matter.
  • The train could also help to reduce traffic congestion, which can also contribute to air pollution.
  • However, the train will also require a significant amount of energy to operate, which could offset some of the emissions savings.
  • Additionally, the construction of the train could have a negative impact on the environment, such as by disrupting wildlife habitat.

Overall, the potential impact of California’s bullet train on climate change is complex and uncertain. The train could have both positive and negative impacts, and it is important to weigh all of the factors before making a decision about whether or not to build it.

The Mysterious AR6 ECS, Part 3, What is Climate Sensitivity?

From Watts Up With That?

By Andy May

In part one we discussed various estimates of climate sensitivity (ECS, TCR, and observation-based values) and what they mean, especially those reported in the latest IPCC report, AR6. In part 2 we discussed the uncertainty in estimating cloud feedback to surface warming, and cloud feedback’s relationship with ECS. In this part we compare the values from various sources to one another.

AR4, AR5, and AR6 define preindustrial as before 1750, when the CO2 atmospheric concentration was about 280 ppm. This is just after the worst part of the Little Ice Age. AR6 estimates the total anthropogenic forcing from 1750 to 2019 to be 2.72 W/m2, a 19% increase over AR5’s estimate (AR6, p 926). AR6 also changes the estimate of ECS; both ECS estimates are compared to other estimates in Table 1.

In AR6, the IPCC states that:

“… the best estimate of ECS is 3°C, the likely range is 2.5 to 4°C and the very likely range is 2 to 5°C. It is virtually certain that ECS is larger than 1.5°C.”(AR6, p 926)

They are virtually certain that ECS is greater than 1.5°C/2xCO2. Yet, the peer-reviewed literature contains numerous lower estimates of climate sensitivity to CO2, based on observations, as admitted in AR6 on page 1007. Six lower estimates are listed, in bold, in Table 1. The IPCC does not independently estimate ECS, they gather peer-reviewed estimates made by others and use their best judgement to derive a most likely value and a range of possible values. They appear to have ignored many peer-reviewed observation-based lower estimates of climate sensitivity.[1] Many estimates, far too numerous to list here, show possible values below 1.5°C/2xCO2.
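A note on the “°C/2xCO2” units used throughout: under the standard logarithmic forcing convention (a textbook relation, not a formula taken from this article), warming scales with the base-2 logarithm of the concentration ratio, so a per-doubling sensitivity can be converted into warming for any concentration change. A sketch:

```python
import math

# Convert a "per CO2 doubling" sensitivity into warming for an
# arbitrary concentration change, using the standard logarithmic
# convention: delta_T = S * log2(C / C0). The 280 ppm baseline is
# the preindustrial value cited in the article; other numbers here
# are illustrative.

def warming(sensitivity_per_doubling: float, c_ppm: float,
            c0_ppm: float = 280.0) -> float:
    """Warming in degrees C for CO2 rising from c0_ppm to c_ppm."""
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

# At roughly today's 420 ppm (a 50% rise, i.e. log2(1.5) ~ 0.585
# of a doubling):
print(round(warming(3.0, 420.0), 2))  # 1.75 (IPCC best estimate, 3 C)
print(round(warming(1.5, 420.0), 2))  # 0.88 (a lower, 1.5 C sensitivity)
```

By construction, `warming(S, 560.0)` returns exactly `S`, since 560 ppm is one doubling of the 280 ppm baseline; the spread between the two printed values is the practical stake in the ECS debate the article describes.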

One reason they give for their new, higher ECS range and estimate is that they believe the “feedback parameter increases as temperature increases.”[2] Feedbacks on top of feedbacks. Thus, they have created an endlessly changing model framework for their calculation, making an already untestable hypothesis even more untestable. When building a computer model, it is never a good idea to let the primary target calculation (in this case surface temperature) drive the model structure or the target feedbacks. This is the computer equivalent of circular reasoning.

The background climate state does change[3] and there is no doubt that feedbacks will have a different effect when the climate state changes. However, AR6 focusses on the temperature-dependence of feedbacks without showing how the climate state changes when temperature changes.[4] Javier Vinós has shown that climate state changes are possibly related to changes in solar activity and major ocean internal oscillations.[5] Thus, it is possible that a changing climate state causes the feedback and temperature changes, and not the other way around. AR6 may have confused cause and effect.

ECS is an artificial model construct with little meaning outside the climate model world. An instantaneous or nearly instantaneous CO2 doubling is unlikely to occur, and it would take hundreds, perhaps thousands, of years for the full ECS temperature response to work through the climate system.[6] It is extremely unlikely that other factors affecting climate would stay in equilibrium that long.

To make matters worse, the models used to calculate ECS are not consistent. Some calculations use a full atmosphere-ocean model and some use observed ocean temperatures.[7] Some simple models construct an energy balance based only upon surface temperature (these are called zero-dimensional models); other simple models add additional zones or complexities.[8] It is widely recognized that ECS is unreal, and as a result some have redefined it as “effective climate sensitivity,”[9] as previously discussed in part 1. But this is still unreal, untestable, and not scientific, as defined by Karl Popper.[10] Further, it only affects humanity 150 or more years in the future, a meaningless time frame to consider today.

Table 1. Various IPCC estimates of ECS, compared to observation-based climate sensitivity estimates (in bold).

The CO2 climate sensitivity estimates listed in bold in the bottom six rows of Table 1 are not directly comparable to the IPCC model-based estimates, because they are based on real-world observations. These six estimates use data collected over periods of less than 100 years, and the CO2 increases occurred over time.

Nicola Scafetta offers a more comprehensive look at the AR6 model ECS estimates.[11] Scafetta shows that AR6 ECS calculations from models range from 1.83 to 5.67°C/2xCO2. He found that all the models with an ECS above 3°C/2xCO2 run very hot relative to observations and should be discounted. Scafetta found that the models with excess warming (over observations) of less than 0.2°C in 50% or more of their grid cells were those with an ECS less than 2°C/2xCO2; further, these are the only models that can be considered statistically valid. Scafetta and many other climate researchers have shown that an ECS between one and two °C/2xCO2 fits observations best; higher values are not supported by observations.

As already mentioned, AR6 relies very heavily on the flawed analysis of Sherwood, et al. The AR6 estimate of ECS, shown in Table 1, is like Sherwood’s, which is about 3.2°C (5-95% range 2.3 – 4.7°C). Using the same data as Sherwood, but using a more objective set of criteria, and fixing some errors in Sherwood’s statistical techniques, Nic Lewis lowers Sherwood’s estimate of climate sensitivity to 2.2°C, from 3.2°C, and finds that values below 2°C have a 36% probability, higher than the probability of climate sensitivity exceeding 2.5°C.[12]


TCR (the Transient Climate Response) is the short-term—roughly 70 years—change in temperature due to a sustained 1%/year increase in CO2 to the point where the CO2 concentration doubles. While TCR is still an artificial construct, it plays out in 70 years and can be checked and potentially falsified. It is both more relevant and scientific. In this discussion, we will ignore the unreal and untestable ECS, whether the “E” stands for equilibrium or effective. Table 2 compares various estimates of TCR to our empirical, observation-based estimates of climate sensitivity in the real world.
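The “roughly 70 years” in this definition is just compound-growth arithmetic; a quick check of the doubling time under a sustained 1%/year CO2 increase:

```python
import math

# Doubling time under 1%/year compound growth: solve 1.01**t = 2,
# so t = ln(2) / ln(1.01) -- the ~70-year horizon in the TCR definition.
years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double, 1))  # 69.7

# Equivalently, after 70 years the concentration has slightly more
# than doubled:
print(round(1.01 ** 70, 3))  # 2.007
```

This 70-year horizon is why TCR, unlike ECS, can in principle be checked against the instrumental record within a human timescale.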

The IPCC values of TCR in Table 2 are closer to the measured estimates shown in bold, but still too high. AR6 has this to say about their estimate of TCR:

“… the best estimate of Transient Climate Response (TCR) is 1.8°C, the likely range is 1.4 to 2.2°C and the very likely range is 1.2 to 2.4 °C.”(AR6, p 927).

Table 2. Various estimates of the transient climate response to a doubling of CO2. These assume a steady increase of CO2 of about 1%/year, with the doubling occurring after 70 years. The climate sensitivity estimates in bold are real world, observation-based climate sensitivity estimates.

AR6 on estimates based upon the historical record:

“Global energy budget constraints indicate a best estimate (median) value of TCR of 1.9°C … and very likely in the range 1.3°C to 2.7°C (high confidence).”(AR6, p 999)

Their overall assessment is a little lower than their estimate from the historical record, but higher than the observation-based estimates we cite in Tables 1 and 2. Clearly, they are cherry-picking the data they use. To set the lower bound of their “very likely” range above the six or seven observation-based estimates in Tables 1 and 2 is disingenuous.

AR6 does discuss Nic Lewis and Judith Curry's 2018 paper,[13] which has a lower bound below 1°C/2xCO2, and similar estimates by Ragnhild Skeie and colleagues[14] and Alexander Otto and colleagues.[15] Christy and McNider's 2017[16] estimate of TCR is completely ignored. AR6 dismisses these lower estimates because the studies necessarily assume radiative feedbacks will remain constant as CO2 warms the atmosphere, at least with respect to ECS;[17] the assumption of constant radiative feedback has a smaller effect on observation-based estimates of TCR. This refers to the speculative IPCC idea of positive feedbacks to feedbacks, introduced in AR6 and discussed above. They have high confidence that the feedbacks will increase as temperature rises, causing additional warming, but this confidence comes primarily from model studies.[18] Obviously, observation-based studies must assume that the feedbacks are constant over the period studied. AR6 assumes that climate state changes are a result of temperature changes, that is, that they are a temperature feedback, and ignores the very real possibility that the temperature changes are due to climate state changes.

Positive feedbacks to feedbacks

The IPCC AR6 models do not predict historical SST (sea surface temperature) warming very well. Depending upon the area, sometimes the models overpredict warming and sometimes they underpredict it. Their logic seems to be that the models cannot be wrong, so the temperature feedback values must be changing. They try to explain this logic on pages 989 to 997. Their explanation reminds us of this passage from Karl Popper's book:

“The Marxist theory of history, in spite of the serious efforts of some of its founders and followers, ultimately adopted [a] soothsaying practice. In some of its earlier formulations (for example in Marx’s analysis of the character of the ‘coming social revolution’) their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation; but they did so at the price of adopting a device which made it irrefutable. They thus gave a ‘conventionalist twist’ to the theory; and by this stratagem they destroyed its much-advertised claim to scientific status.”(Popper, 1962, p. 37).

The detailed description (AR6, page 989) of their positive-feedback-to-feedbacks idea is based upon comparisons of observed ocean warming versus modeled ocean warming. Quite simply, they do not match, as their Figure 7.14 on page 990 shows. Their "spatial pattern" analysis of modeled versus observed SSTs is supported by "multiple generations of climate models"[19] and little else. They call upon the poorly understood net cloud feedback set of adjustable model parameters to explain why the models are not properly predicting Pacific SSTs. Richard Seager and his colleagues[20] have this to say about this idea:

“The tropical Pacific Ocean response to rising GHGs impacts all of the world’s population. State-of-the-art climate models predict that rising GHGs reduce the west-to-east warm-to-cool sea surface temperature gradient across the equatorial Pacific. In nature, however, the gradient has strengthened in recent decades as GHG concentrations have risen sharply. This stark discrepancy between models and observations has troubled the climate research community for two decades. … The failure of state-of-the-art models to capture the correct response introduces critical error into their projections of climate change in the many regions sensitive to tropical Pacific sea-surface-temperatures.”(Seager, et al., 2019)

Ross McKitrick, in his comments on the AR6 second order draft (SOD) of Chapter 7, notes that the IPCC base their conjectures about “feedbacks on feedbacks” and a higher ECS on their ability to predict tropical climate accurately. Yet, as he and John Christy explain in their 2018 and 2020 papers,[21] every run of every CMIP5 model over-predicts warming in the 200 hPa to 300 hPa (10-12 km) layer of the tropical troposphere, and the differences are statistically significant in most cases.[22] When observations are significantly different from the model results, the simplest explanation is that the models are wrong, not that the feedbacks are changing with increasing temperatures.
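The statistical comparison McKitrick and Christy make reduces to a familiar test: fit a warming trend to each model run and to the observations, then ask whether the ensemble-mean trend differs from the observed trend by more than the ensemble spread allows. A toy sketch with made-up numbers (not the actual CMIP5 or satellite trends):

```python
import statistics

# Synthetic per-model warming trends (deg C/decade) for the tropical
# 200-300 hPa layer. These are illustrative values only, NOT actual
# CMIP5 results or observations.
model_trends = [0.38, 0.42, 0.35, 0.45, 0.40, 0.37, 0.43, 0.39]
observed_trend = 0.17  # synthetic "observed" trend, for illustration

mean = statistics.mean(model_trends)
sd = statistics.stdev(model_trends)
n = len(model_trends)

# One-sample t statistic: does the ensemble mean differ from the
# observed trend, given the spread across model runs?
t_stat = (mean - observed_trend) / (sd / n ** 0.5)
print(f"ensemble mean {mean:.3f}, t = {t_stat:.1f}")
# A |t| well above ~2 indicates a statistically significant gap.
```

A real analysis would also account for autocorrelation in the trend estimates, which McKitrick and Christy handle in their papers; this sketch only shows the shape of the test.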

Summary and Conclusions

AR6 Chapter 7, “The Earth’s energy budget, climate feedbacks, and climate sensitivity,” was the source for most of the material in the first three parts of this series. It exudes a certain desperation: the reader is inundated with the phrases “high confidence,” “virtually certain,” and “very likely” ad nauseam. They are used less to describe and more to persuade.

When the IPCC discovered they were overestimating warming in the eastern Pacific and in the Southern Ocean, they did not conclude the obvious, that their models were wrong. Instead, they created an elaborate scenario, based on “patterns” of ocean surface warming, that hypothesizes that their CO2-caused warming feedbacks are subject to positive (warming) feedbacks themselves! Using a key model output, in this case surface temperature, to compute a critical feedback that in turn is used to compute the same output makes the model unstable and unreliable.
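The instability being described here can be illustrated with a toy iteration. In a simple linear feedback, the output converges; if the feedback strength is itself allowed to grow with the temperature it produces (a "feedback on the feedback"), the same iteration can run away. This is purely an illustrative toy, not a representation of any actual climate model:

```python
# Toy energy-balance iteration: T_new = forcing + f(T) * T.
# With a constant feedback factor f < 1 the iteration converges to a
# fixed point; if f grows with T, the iteration can diverge.
# All numbers are illustrative and arbitrary.

def iterate(f_of_T, T0=1.0, forcing=1.0, steps=40):
    T = T0
    for _ in range(steps):
        T = forcing + f_of_T(T) * T
        if T > 1e6:           # treat as diverged
            return float("inf")
    return T

stable = iterate(lambda T: 0.5)               # constant feedback: converges to 2
unstable = iterate(lambda T: 0.5 + 0.2 * T)   # feedback grows with T: runs away
print(stable, unstable)
```

The constant-feedback case settles at T = forcing / (1 - f) = 2, while the temperature-dependent feedback diverges, which is the qualitative point: letting a feedback depend on the very output it amplifies removes the damping that keeps the system stable.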

We have previously emphasized the importance of recognizing that climate change is not global; it varies regionally, and particularly by latitude (see figure 3 here). CO2 is a well-mixed gas with a nearly constant atmospheric concentration around the world and vertically through the atmosphere. As a result, if CO2 were a significant influence on climate, it should cause climate change globally. Presumably, this is why the IPCC focuses on global changes.

AR6 acknowledges that climate changes regionally, yet they do not acknowledge that this is evidence that their models and assumptions are wrong. Natural climate change is local, varying mainly by latitude.[23] Rather than accept this, they seem to have decided that their hypothesized feedbacks are changing at different rates, in regional patterns, and call it the “pattern effect.”[24] Isn’t it more logical to just acknowledge that most climate change is natural, and that is why the models are not reproducing what we observe?[25]

Finally, the IPCC, as well as many worldwide government agencies, are recommending that we curtail fossil fuel burning to limit warming to 1.5°C above what they call the preindustrial period.[26] This period ends in 1750, the end of the coldest century (~1650-~1750) since the last glacial period, at least in the extra-tropical Northern Hemisphere. Human civilization has never seen colder temperatures. Very few people would want to return to the miserable climate of that time. Our modern climate is better and the additional CO2 we enjoy today has greatly improved agricultural productivity.[27]

The IPCC has failed to measure the impact of CO2 and other GHGs on climate or global warming, that is, measure the climate sensitivity to CO2. Many researchers have used measurements to estimate climate sensitivity, but when those estimates are below what the IPCC wants, they simply ignore them.

The AR6 methodology, like that of Sherwood, et al., was subjective in which estimates were included. In fact, AR6 specifically excludes many valid estimates of climate sensitivity without explaining why. From page 1007 of Chapter 7:

“History has seen a multitude of studies (e.g., Svensmark, 1998; Lindzen et al., 2001; Schwartz, 2007) mostly implying lower ECS than the range assessed as very likely here.”(AR6, p 1007)

The “multitude” of lower estimates is simply ignored, without explanation. Much higher estimates based on paleoclimate studies are also ignored, but for these an explanation is given: they were “… shown to be overestimated due to a lack of accounting for orbital forcing and long-term ice-sheet feedbacks (Schmidt et al., 2017b).”[28]

AR6 stepped away from the past practice of directly calculating ECS and TCR from model output. Instead, they used measurements, such as those by Lewis and Curry, in combination with several complex model-based calculations to constrain the values of ECS and TCR to an expected range. The methodology, as explained in AR6 and in Sherwood, et al., was set up so that model-derived estimates swamped the instrument-based estimates, especially at the low end, allowing them to dial in the output they wanted.

In part 4, the final part of this series, we examine how modern observations of CO2 and global average temperature are used to compute climate sensitivity, and then how the computation is converted into a pseudo-ECS. Once the conversion is done, what does it mean? Look for part 4 tomorrow.

Download the bibliography here.

  1. Including: Lindzen, R., & Choi, Y.-S. (2009, August 26). On the determination of climate feedbacks from ERBE data. Geophysical Research Letters, 36(16); Lindzen, R., & Choi, Y.-S. (2011, August 28). On the Observational Determination of Climate Sensitivity and Implications. Asia-Pacific Journal of Atmospheric Sciences, 47(377); Idso, S. (1998). CO2-induced global warming: a skeptic’s view of potential climate change. Climate Research, 10(1), 69-82; Newell, R., & Dopplick, T. (1979). Questions Concerning the Possible Influence of Anthropogenic CO2 on Atmospheric Temperature. J. Applied Meteorology, 18, 822-825; and Lewis, N., & Curry, J. (2018). The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity; among many others.
  2. AR6, pp. 981 and Figure 7.11 
  3. (Vinós, Climate of the Past, Present and Future, A Scientific Debate, 2022, pp. 184-187) 
  4. AR6, page 980. 
  5. (Vinós, Climate of the Past, Present and Future, A Scientific Debate, 2022, p. 189) 
  6. Sherwood, S. C., Webb, M. J., Annan, J. D., Armour, K. C., Forster, P. M., Hargreaves, J. C., . . . Knutti, R. (2020, July 22). An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence. Reviews of Geophysics, 58. doi:https://doi.org/10.1029/2019RG000678
  7. See figure 1 here
  8. Bates, J. R. (2016). Estimating climate sensitivity using two-zone energy balance models. Earth and Space Science, 3(5), 207-225. 
  9. (Sherwood, et al., 2020). 
  10. Popper, K. R. (1962). Conjectures and Refutations, The Growth of Scientific Knowledge. New York: Basic Books. Pages 35-37. 
  11. Scafetta, N. (2021). Testing the CMIP6 GCM Simulations versus Surface Temperature Records from 1980–1990 to 2011–2021: High ECS Is Not Supported. Climate, 9(161) 
  12. Lewis, N. (2022). Objectively combining climate sensitivity evidence. Climate Dynamics
  13. Lewis, N., & Curry, J. (2018, April 23). The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. Journal of Climate
  14. Skeie, R. B., Berntsen, T., Aldrin, M., Holden, M., & Myhre, G. (2018). Climate sensitivity estimates – sensitivity to radiative forcing time series and observational data. Earth System Dynamics, 9, 879-894. 
  15. Otto, A., Otto, F. B., Church, J., Hegerl, G., Forster, P. M., Gillett, N. P., . . . Stevens, B. (2013, May 19). Energy budget constraints on climate response. Nature Geoscience, 415-416. 
  16. Christy, J., & McNider, R. (2017). Satellite Bulk Tropospheric Temperatures as a Metric for Climate Sensitivity. Asia-Pac. J. Atmos. Sci., 53(4). 
  17. AR6, p 996 
  18. AR6, p 990. 
  19. AR6, p 990. 
  20. Seager, R., Cane, M. H., Lee, D.-E., Abernathey, R., & Zhang, H. (2019, June 24). Strengthening tropical Pacific zonal sea surface temperature gradient consistent with rising greenhouse gases. Nature Climate Change, 517-522. 
  21. (McKitrick & Christy, 2018) and (McKitrick & Christy, 2020) 
  22. See here for more details. 
  23. (Vinós, Climate of the Past, Present and Future, A Scientific Debate, 2022, pp. 155-161) 
  24. AR6, page 990 
  25. AR6, page 990, see AR6 figure 7.14 for a comparison of model results to observations. 
  26. IPCC. (2018). Global Warming of 1.5 degrees C. (Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, . . . a. T. Waterfield, Eds.) Geneva: World Meteorological Organization. 
  27. Idso, C. (2013). The Positive Externalities of Carbon Dioxide: Estimating the Monetary Benefits of Rising Atmospheric CO2 Concentrations on Global Food Production. Center for the Study of Carbon Dioxide and Global Change.
  28. AR6, p 1007 

Massive iceberg discharges during the last ice age had no impact on nearby Greenland, raising new questions about climate dynamics

From Watts Up With That?

Peer-Reviewed Publication



CORVALLIS, Ore. – During the last ice age, massive icebergs periodically broke off from an ice sheet covering a large swath of North America and discharged rapidly melting ice into the North Atlantic Ocean around Greenland, triggering abrupt climate change impacts across the globe.

These sudden episodes, called Heinrich Events, occurred between 16,000 and 60,000 years ago. They altered the circulation of the world’s oceans, spurring cooling in the North Atlantic and impacting monsoon rainfall around the world.

But little was known about the events’ effect on nearby Greenland, which is thought to be very sensitive to events in the North Atlantic. A new study from Oregon State University researchers, just published in the journal Nature, provides a definitive answer.

“It turns out, nothing happened in Greenland. The temperature just stayed the same,” said the study’s lead author, Kaden Martin, a fourth-year doctoral candidate in OSU’s College of Earth, Ocean, and Atmospheric Sciences. “They had front-row seats to this action but didn’t see the show.”

Instead, the researchers found that these Heinrich events caused rapid warming in Antarctica, at the other end of the globe.

The researchers anticipated Greenland, in close proximity to the ice sheet, would have experienced some kind of cooling. To find that these Heinrich Events had no discernible impact on temperatures in Greenland is surprising and could have repercussions for scientists’ understanding of past climate dynamics, said study co-author Christo Buizert, an assistant professor in the College of Earth, Ocean, and Atmospheric Sciences.

“If anything, our findings raise more questions than answers,” said Buizert, a climate change specialist who uses ice cores from Greenland and Antarctica to reconstruct and understand the Earth’s climate history. “This really changes how we look at these massive events in the North Atlantic. It’s puzzling that far-flung Antarctica responds more strongly than nearby Greenland.”

Scientists drill and preserve ice cores to study past climate history through analysis of the dust and tiny air bubbles that have been trapped in the ice over time. Ice cores from Greenland and Antarctica provide important records of Earth’s atmospheric changes over hundreds of thousands of years.

Records from ice cores from those regions have served as pillars for scientists’ understanding of past climate events, with ice collected from both locations often telling similar stories, Martin said.

The impact of Heinrich Events on Greenland and Antarctica was not well understood, spurring Martin and Buizert to try to find out more about what was happening in those parts of the world.

The core used for the latest study was collected in 1992 from the highest point of Greenland, where the ice sheet is around 2 miles thick. Since then, the core has been in storage in the National Science Foundation Ice Core Facility in Denver.

Advancement in scientific tools and measurements over the last few decades gave Martin, Buizert and their colleagues the opportunity to re-examine the core using new methods.

The analysis shows that no changes in temperatures occurred in Greenland during Heinrich Events. But it also provides a very clear connection between Heinrich Events and the Antarctic response.

“When these big iceberg discharges happen in the Arctic, we now know that Antarctica responds right away,” Buizert said. “What happens in one part of the world has an effect on the rest of the world. This inter-hemispheric connection is likely caused by change in global wind patterns.”

The finding challenges the current understanding of global climate dynamics during these massive events and raises new questions for researchers, Buizert said. The researchers’ next step is to take the new information and run it through climate models to see if the models can replicate what occurred.

“There has to be a story that fits all of the evidence, something that connects all the dots,” he said. “Our discovery adds two new dots; it’s not the full story, and it may not be the main story. It is possible that the Pacific Ocean plays an important role that we haven’t figured out yet.”

The ultimate goal is to better understand how the climate system is connected and how the components all interact, the researchers said.

“While Heinrich Events are not going to happen in the future, abrupt changes in the globally interconnected climate system will happen again,” Martin said. “Understanding the global dynamics of the climate system can help us better project future impacts and inform how we respond and adapt.”

Additional co-authors are Ed Brook, Jon Edwards, Michael Kalk and Ben Riddell-Young of OSU; Ross Beaudette and Jeffrey Severinghaus of the Scripps Institution of Oceanography; and Todd Sowers of Pennsylvania State University.

The research was supported by the National Science Foundation, the Global Climate Change Foundation and the Gary Comer Science and Education Foundation.








“Bipolar impact and phasing of Heinrich-type climate variability”



From EurekAlert!