Tag Archives: Climate Modelling

The Scientists Modelling Climate Change on Made-Up Planets

From The Daily Sceptic

BY STEVEN TUCKER

The famous ‘Fermi Paradox’ asks a simple question: if life really is every bit as prevalent in the cosmos as some astrobiologists claim (their equally famous ‘Drake Equation’ purports to show extraterrestrial life should be teeming just about everywhere in the universe), then where is it all?

One possible answer might be that, once all intelligent civilisations reach a certain point of advancement, they stumble across the so-called ‘Great Filter’, a developmental obstacle which simply can never be overcome, no matter what planet you live on, and which ultimately destroys the whole species in an irreversible Mass Extinction Event. This Great Filter was once often imagined to be nuclear war – now, it is increasingly deemed to be climate change, a phenomenon no cutting-edge industrial civilisation can supposedly ever escape from unscathed, on Earth or off it.

One leading advocate of this kind of doomsday thinking today is Adam Frank, a U.S. astrophysicist whose 2018 book Light of the Stars: Alien Worlds and the Fate of the Earth and many co-authored academic papers have attempted to delineate a so-called “Astrobiology of the Anthropocene”. The ‘Anthropocene’ is the proposed (and recently rejected) term many scientists want to give to the current geological era on Earth, which they say has been irrevocably impacted and influenced by mankind and his technology, namely nuclear bombs and fossil fuels.

The Misanthropocene Era

As the distinctly Malthusian Frank said in a promotional 2018 interview with Scientific American: “My argument is that Anthropocenes may be generic from an astrobiological perspective: what we’re experiencing now may be the sort of transition that everybody goes through, throughout the Universe.”

If Anthropocenes (or Alienthropocenes) are indeed “generic”, then doesn’t that mean they can potentially be modelled? Possibly so. According to Professor Frank (not to be confused with The Simpsons’ Professor Frink), “a civilisation, to some degree, is just a mechanism for transforming energy on a planetary surface”, a statement so utterly reductive in its nature it really ought to be the governing motto of the UN or EU these days.

Being something of a UFO buff myself, I have long been of the personal opinion that any actual aliens mankind should ever encounter will most likely turn out to be totally, well, alien in their nature, so much so we might not even be able to recognise them as being actual animate life-forms at all, a bit like most normal people feel when looking at Rachel Reeves. Professor Frank, though, disagrees, being apparently so in thrall to the currently dominant technocratic myth of Homo Statisticus (have you ever met anyone with 2.4 actual children?) that he feels it plausible to extend its basic pattern out across the entire Universe:

Well, just as we understand planetary climates pretty well, we can use the basic, fundamental tenets of life to guide us, too. Organisms are born, some of them reproduce, and they die. Living things consume energy and they excrete waste. That should be true even if they’re made of silicon or whatever. The next step is to incorporate principles of population biology, in which the idea of ‘carrying capacity’ — the number of organisms that can be sustainably supported by the local environment — is very important. This approach can also be mathematically applied to the state of a planet. So in our modelling work we’ve got an equation for how the planet is changing and an equation for how the population is changing. What ties them together is the predictable result that as environmental conditions on a planet get worse, the total carrying capacity goes down. A civilisation with a population of n will use the resources of its planet to increase n, but at the same time, by using those resources, it tends to degrade the planet’s environment.
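Frank’s verbal description above amounts to a pair of coupled equations, and its general shape is easy to reproduce. Below is a minimal sketch in Python – not Frank’s actual published equations – of logistic population growth toward a carrying capacity that erodes as the planetary environment degrades, with every parameter value invented purely for illustration.

```python
# Minimal sketch of a coupled population/planet model of the kind Frank
# describes. NOT his published equations; all parameter values are invented.

def simulate(steps=10000, dt=0.01,
             r=0.2,      # intrinsic population growth rate (assumed)
             k0=100.0,   # pristine-planet carrying capacity (assumed)
             gamma=5.0,  # how strongly a degraded environment erodes capacity
             a=0.002,    # environmental forcing per unit of population
             b=0.05):    # natural relaxation rate of the environment
    n, t = 1.0, 0.0                      # initial population and environment
    trajectory = []
    for _ in range(steps):               # simple Euler integration
        k = max(k0 - gamma * t, 1e-6)    # carrying capacity falls as T rises
        dn = r * n * (1.0 - n / k)       # logistic growth toward current K
        dt_env = a * n - b * t           # population use degrades environment
        n += dn * dt
        t += dt_env * dt
        trajectory.append((n, t))
    return trajectory

traj = simulate()
peak_n = max(n for n, _ in traj)
final_n, final_t = traj[-1]
```

With these made-up numbers the population overshoots, the environmental state worsens, the carrying capacity falls, and the population settles back at a lower level – exactly the ‘die-off’ shape discussed below.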

But what if some aliens are incorporeal in nature, being made of gases, for instance? What if they therefore don’t actually need to eat or excrete at all? What if some of them are made from – or perhaps breathe – CO2? Or what if they are extremophiles (i.e., lovers of extreme climates) and therefore very high temperatures are actually good for some ETs’ health, not bad for it? Wouldn’t climate change akin to the kind Frank currently warns is taking place here on Earth make them thrive? Plus, what atmospheric gases will there even be to be boosted or dissipated by hypothetical industrial activity on other planets in the first place? Global warming may not even be chemically possible on Planet Fictional at all. These objections are all pretty obvious, and I do hope Professor Frank addresses them in his actual book (which I haven’t read), because if he hasn’t, it may be in danger of being interpreted by the ungenerous-minded as a work of mere sci-fi with numbers in it.

From Drake Equation to Fake Equation?

Speaking of numbers, as Frank and his co-researchers claim to have produced climate-models for generic other planets which do not even actually exist, where have they got the necessary data to fill them up with? It must be pretty detailed data because, look, Frank has somehow managed to create modelling graphs for the four presumed most likely scenarios for any planet’s long-term sustainability or civilisational collapse path, once intelligent life eventually appears on it:

Black line: made-up trajectory of made-up planet’s made-up population
Red line: co-evolving made-up trajectory of made-up planet’s made-up environmental state (a proxy for its made-up temperature, says Professor Frank)

The first model-graph, labelled ‘Die-Off’, is the one which currently appears to apply to doomed old Planet Earth, at least in the view of Frank. According to Commander Data, talking in another 2018 promotional interview with LiveScience:

In this scenario, the civilisation’s population skyrockets over a short period of time, and as the aliens guzzle energy and belch out greenhouse gases, the planet’s temperature spikes, too. (In this study, temperature was used to represent human-made impacts on the planet’s habitability via greenhouse gas pollution.) The population peaks, then suddenly plummets as rising temperatures make survival harder and harder. The population eventually levels off, but with a fraction of the people who were around before. Imagine if seven out of 10 people you knew died quickly. It’s not clear a complex technological civilisation could survive that kind of change.

It probably could if they were all just civil servants.

Chatting SHIT?

Elsewhere, I have developed an innovative but speculative data-based concept of my own that I sincerely hope one day takes off. It is known by the acronym SHIT – Statistics Having Imaginary Truth – and I coined it to denote the way most economists appear to just make up their forecasts and predictions out of thin air, producing ‘authoritative’ figures from nowhere like fiscal rabbits from a monetary hat. I write this present piece in the days immediately following Jeremy Hunt’s latest budget, by the way.

Although I tend to accept Earth is getting somewhat warmer these days (albeit nowhere near enough to destroy all human life upon it, or whatever Al Gore is currently standing naked in the street and yelling out loud to gullible passersby), I am nonetheless increasingly inclined to consider long-term climate forecasts for our planet a complete and utter shower of SHIT likewise. If so, then how much SHITtier must data-based climate models for other planets – planets which, I here repeat once again, and in italics for extra emphasis this time, do not even exist – be?

A further question must also be asked: where precisely did Frank get the data for the models and graphs reproduced in his 2018 book and articles from, exactly? According to what he said in his interviews, it was from prior studies made of Easter Island/Rapa Nui, the now-lifeless barren ocean island where, once upon a time, or so the standard promulgated narrative goes, foolish humans lived sustainably and well until, in a short-sighted fit of greed, they stupidly chopped down all the place’s trees, causing a drought and then being left with neither wood for fuel nor tasty arboreal produce to eat any more, thus dying off en masse. But was this really true?

Easter Egg-stinction Event

According to Professor Frank and his colleagues:

Easter Island presents a particularly useful example for our own purposes since it is often taken as a lesson for global sustainability. Many studies indicate that Easter Island’s inhabitants depleted their resources, leading to starvation and termination of the island’s civilisation.

Except that, “many studies” of another kind say this isn’t what actually happened there at all; in the opinion of some revisionist modern scholars, the standard story of Easter Island’s downfall and depopulation is actually a modern-day green myth designed to promulgate a cautionary warning about what will happen to modern-day industrialised Western civilisations if, like the imprudent old Easter Islanders, we too are rash enough to use up all our resources, destroy our local environment or help bring about acts of needless climate change.

The key 2021 paper purporting to demonstrate that this now famous old popular narrative is indeed just a myth contains much data and many graphs of a technical nature which, as far as this mere layman knows, may one day turn out to be just so much SHIT too. However, if the Easter Island Extinction narrative really was just an overblown environmentalist fable created to groom Greta Thunberg into existence, it would appear that the primary data upon which Professor Frank says he and his colleagues based their models of shared off-Earth exoplanetary collapse in his 2018 book were pure SHIT themselves, were they not?

There Is No Planet B

If Frank’s book were marketed merely as a fun hypothetical thought-exercise, I would find it wholly unobjectionable – it’s good for scientists to be free-thinkers. But he appears to have been promoting it as a spur to persuade politicians to implement actual real-world ‘climate-friendly’ policy objectives, and he has been treated seriously in this aim by the mainstream scientific press. Therefore, you do have to ask: are such pop-science studies as Frank’s 2018 book, aimed primarily at a lay audience who probably know no better, really intended to function more as green-friendly political agitprop than as genuine science as such?

To end his LiveScience interview, Professor Frank asks his readers a question, the answer to which is clearly supposed to be obvious:

Across cosmic space and time, you’re going to have winners — who managed to see what was going on [i.e., a self-inflicted climate crisis] and figure out a path through it — and losers, who just couldn’t get their act together and their civilisation fell by the wayside. The question is, which category do we want to be in?

The winners! The winners! I want to be on the side of the winners! Except that, if the West does indeed compliantly tear up its whole current dirty – yet conveniently cheap, reliable and efficient – energy infrastructure to save going the way of the supposed Easter Island idiots, who will the actual winners on our planet be? The Chinese, Russians and others, who will quite happily go on burning coal and oil while we shiver in abject poverty, and who will quite happily extend their autocratic tentacles across the globe. They will be enabled to dominate us militarily, economically and industrially by piles and piles of dubious pseudo-scientific SHIT promulgated by a caste of weird, West-ruling cultists who honestly expect to be taken seriously when they go around shouting mad things like “Climate change killed off all the imaginary aliens, so junk your gas-fires now or DIE IMMEDIATELY!!” to ostensibly reliable, august and disinfo-slaying outlets like Scientific American and LiveScience.

Perhaps a sinister race of non-human alien beings do indeed walk among us after all – they’re called the governing class.

Steven Tucker is a journalist and the author of over 10 books, the latest being Hitler’s & Stalin’s Misuse of Science: When Science Fiction Was Turned Into Science Fact by the Nazis and the Soviets (Pen & Sword/Frontline), which is out now.

Wrong, The Hill, Climate Change Isn’t Making It Unsafe for Kids to Play Outside


From ClimateRealism

By Linnea Lueken and H. Sterling Burnett, The Heartland Institute

A recent article at The Hill claims that climate change is reversing the multi-decade trend of improving air quality in the United States, due to increased PM2.5 from wildfires and ozone from heatwaves, making it unsafe for children to play outside. This is false: air quality is improving, and the suggestion that kids should avoid outdoor play is itself hazardous to their health.

The article, “Climate change is making it more dangerous for kids to play outside, report finds,” covers a study published by a “climate analytics” firm called First Street Foundation.

To be clear, First Street is a strictly climate alarmist nonprofit group that often publishes studies meant to frighten the public. Its predictions are based on climate modelling, and tend to ignore publicly available weather data that make its projections seem unlikely to occur in real life. This kind of work should be taken with a grain of salt. Climate Realism has refuted the First Street Foundation’s false research before, in the posts “No, WaPo, Climate Change is NOT Fueling More Devastating Rains and Flooding,” and “No, Axios, Future U.S. Hurricane Damage Losses Will Not be Driven by Climate Change,” for example.

According to The Hill, the study’s authors project that “by midcentury, the increased levels of microscopic soot particles (PM10 and PM2.5) and ozone molecules entering Americans’ lungs will be back to the levels they were at in 2004–before a decades-long federal campaign to clean up the air.”

This is an odd claim, lacking any basis in data. In fact, since the Clean Air Act was enacted in 1963, and the U.S. Environmental Protection Agency outlined its first official standards for particulate matter and ozone in 1971, measured pollutants, including PM10, PM2.5, and ground level ozone have all declined dramatically.

First Street researchers and The Hill claim PM2.5 is being driven up by increased wildfires, but the fact is, wildfires aren’t getting worse due to climate change.

Currently available data from the National Interagency Fire Center show that while there was an increase in total acres burned between 1983 and the early 2000s, it has since leveled off, and the last two years have seen some of the lowest wildfire incidence in the United States in recent years. Older data, much of which was expunged from the NIFC website in 2020, show that wildfires were much worse in the early 20th century than they are today. This trend does not track with the theory that wildfires are driven by fossil fuel use and global warming. (See figure below)

The increase post-1983 is due almost entirely to a change in forestry practices and management, rather than the modest warming of the past few decades, as described in many Climate Realism posts.

Additionally, satellites have been keeping track of wildfires for decades now, and that data clearly show that not only are wildfires not becoming more common or intense, they have declined since the early 2000s.

Since there are fewer wildfires, there is less smoke from wildfires, which means wildfires can’t be causing an increase in bad air days necessitating keeping kids indoors for their health. Environmental Protection Agency (EPA) data confirm this fact. The EPA reports that since 1990, during the recent period of modest warming, PM10 has declined by 32 percent. In 2000, the EPA shifted its attention to also monitoring and regulating emissions of smaller particulate matter, PM2.5. Since then, PM2.5 has fallen by 37 percent.

With regard to ozone pollution, First Street argues that rising temperatures will increase ground-level ozone, especially in major cities. The reality is that the best available temperature data do not show any increase in high temperature anomalies. (See figure below)

Also, the EPA’s air quality index data are available to the public, so we can look at the air quality trends for almost any major city. Taking New York City, since it is the largest city in America, we can see that air quality with regard to particulate matter has been getting much better in recent years, with only a few bad days last year (2023) due to that summer’s Canadian wildfires, many of which were started by arsonists. (See figure below)

The EPA’s air quality data show that overall, as the Earth has modestly warmed, ground-level ozone has declined 21 percent since 1990. Once again, if ozone is declining, climate change can’t be making the air less healthy for children.

It is widely believed that many of today’s youths spend too much time indoors, wedded to their computers, video gaming systems, cell phones, and other devices. On this point an MIT study found:

Compared to the 1970s, children now spend 50% less time in unstructured outdoor activities. Children ages 10 to 16 now spend, on average, only 12.6 minutes per day in vigorous physical activity. Yet they spend an average of 10.4 waking hours each day relatively motionless.

Simultaneous with the decline in outdoor play has been an increase in childhood obesity and associated diseases like diabetes. As a result, advocating keeping kids indoors while air quality is improving endangers their health rather than protecting them from harm. This is especially true when one realizes that the U.S. EPA has found that indoor air contains, on average, two to five times more pollutants than outdoor air.

Outdoor air has not become more dangerous for children, but avoiding outdoor play and activities is a proven health hazard. The Hill should examine studies from organizations like the First Street Foundation with a skeptical eye, checking the hard data and context before unnecessarily alarming its readers and encouraging them to take actions that could result in poorer health for themselves and their kids. Climate change attribution is a hammer wielded by climate alarmists, with each and every “non-optimal” weather condition or human health threat becoming a nail with which to drive their narrative and agenda home, regardless of what the facts say.

Arctic Sea Ice Continues its Stonking Recovery

From The Daily Sceptic

By CHRIS MORRISON

Arctic sea ice continued its stonking recovery last month, recording its 24th highest level in the 45-year modern satellite record. As reported previously in the Daily Sceptic, the ice climbed to a 21-year high on January 8th. Good news, of course, for ice fans and polar bears, but frankly a bit of a disaster if you are forecasting future summer swimming galas at the North Pole to promote a collectivist Net Zero agenda. Live by the sword, die by the sword – if you cherry-pick the scientific record to state the climate is collapsing, it might be thought you have some explaining to do when the trend reverts to the norm. Just ask coral alarmists about two years of record growth on the Great Barrier Reef. Sadly, explanations there are none, just a deafening, stunned silence.

Arctic sea ice has long been a poster scare for climate Armageddon. But science tells us that it is cyclical and is heavily influenced by ocean currents and atmospheric heat exchanges. It would appear that these chaotic changes are beyond the ability of any computer to process, although a large, well-funded model industry begs to differ. The recovery in Arctic sea ice has been steady if slow and this has enabled the alarums to hang on in the mainstream headlines. Of course it could go into reverse, nobody really knows, least of all Sir David Attenborough who told BBC viewers in 2022 that the summer ice could all be gone by 2035. He relied, needless to say, on a computer model.

Most mainstream climate journalists just print what they are told without looking too closely at the source of the information. The U.S.-based National Snow and Ice Data Center (NSIDC) is a source of interpretation for trends in polar ice, but care needs to be taken when reading its often gloomy monthly summaries. According to the NSIDC, January sea ice growth was “lower than average” throughout most of the month. It headlined its report: ‘Nothing swift about January sea ice.’ Other interpretations are available. Consider the graph below tracking the ice extent for January over the satellite record.

Statisticians can argue over when the sea ice started to recover but there has not been much decline going back to around 2007. In this case January shows a similar trend to that seen in September, the month with the lowest sea ice extent. A moving average line from around the middle of the last decade would show an obvious increase. But the NSIDC reproduces this graph for every individual month and year with a downward linear trend from 1979, a noted high point for recent sea ice. The graph is widely used on social media to counter any suggestion that the ice is recovering.

Note also that the NSIDC claims the January growth extent was “below average”. Well, it depends on what average you are using. The NSIDC uses a comparative average from 1981-2010, despite a more recent decade of data being available. It is not hard to see why it prefers 1981-2010, since it includes the higher levels of the 1980s and excludes the lower levels of the 2010s. Taking a 1991-2020 average would likely lead to many more ‘above average’ observations. Data before 1979 are not as accurate, but levels going back to the 1950s suggest much lower sea ice extents. Perish the thought that comparisons should be made with these data or observations made about an obvious cyclical trend seen here and in the historical record going back to the early 1800s.
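The effect of the baseline choice is simple arithmetic, and can be illustrated with a toy calculation. The extents below are synthetic numbers shaped like the pattern described above (decline to around 2007, roughly flat with a slight uptick since); they are not real NSIDC data.

```python
# Illustrative only: synthetic January sea ice extents (million km^2),
# invented to mimic the shape claimed in the text. NOT real NSIDC numbers.
years = range(1979, 2025)

def extent(y):
    declining = 15.5 - 0.045 * (min(y, 2007) - 1979)   # fall until ~2007
    recovery = 0.01 * (y - 2007) if y > 2007 else 0.0  # slight uptick since
    return declining + recovery

extents = {y: extent(y) for y in years}

def baseline(start, end):
    """Mean extent over an inclusive baseline period."""
    return sum(extents[y] for y in range(start, end + 1)) / (end - start + 1)

old_base = baseline(1981, 2010)   # the NSIDC's preferred baseline
new_base = baseline(1991, 2020)   # the more recent 30-year baseline
obs = extents[2024]
anomaly_old = obs - old_base      # how 'below average' 2024 looks, old base
anomaly_new = obs - new_base      # same observation, recent base
```

Because the 1981-2010 window captures the high 1980s and excludes the low 2010s, its mean is higher, so the very same observation reads as further ‘below average’ against it than against a 1991-2020 baseline.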

The NSIDC can spin its figures as much as it likes knowing that in the era of ‘settled’ climate science it is unlikely to be widely challenged. On a more serious note, this unwillingness to question perceived authority and engage in the scientific process gave us Michael E. Mann’s infamous 1998 ‘hockey stick’ graph. This purported to show declining temperatures for 1,000 years followed by a sharp recent uptick caused by the human burning of hydrocarbons. Its unquestioning acceptance in mainstream media, science and politics can be said to have removed the concept of natural climate variability for a generation and put many Western countries on the road to Net Zero insanity. Now the hockey stick is centre stage in a Washington D.C. libel trial brought by Mann, who complains that the journalist Mark Steyn branded his work a fraud. By some accounts, the hockey stick does not seem to be having a great time in the dock.

Professor Abraham Wyner is a distinguished statistician at Mann’s own University of Pennsylvania. Asked on the court stand whether Mann’s hockey stick used manipulative techniques, he replied “yes”. He suggested that if you knew where you wanted to get to, you could lead yourself to a conclusion different from that of someone who walked down a different set of paths.

In earlier court documents, Mann claimed wrongly that he was a Nobel laureate, an error noted during the trial. His hockey stick abolished the Medieval Warm Period, while subsequently leaked Climategate emails referred to “Mike’s Nature trick”. This was a practice of using whichever proxy or temperature measurements were most convenient to fit the desired narrative.

In the course of his testimony, Dr. Wyner made comments that strike at the heart of so much that is wrong with the ‘settled’ science pronouncements that seemingly cannot be disputed or even discussed.

And so what happens is, and what is happening today in statistical analysis… we’re in a crisis. A crisis of trust and replication, because so many results that were thought to be true and correct have now been gone back to and looked at, or attempted to be replicated, and they didn’t work. Lots of things we thought were true turned out not to be true. It’s a crisis. A problem [my colleague] has identified is due to really bad statistical sets of methods that allow you to get away with choices that would produce a very different result if you did it differently.

What the last two decades or so have shown us is that activists will use any weather outlier or natural disaster to claim the climate is collapsing, or the Earth is “boiling” in the odd universe occupied by UN Secretary-General Antonio Guterres. Statistics are bent to fit the desired narrative, whether it be the natural waxing and waning of ice levels or Typhoon jets landing near a measuring device producing a 60-second 40.3°C temperature blip ‘record’ at RAF Coningsby. Net Zero is starting to unravel thread by thread, and it is time the spotlight was turned up to maximum to shine on all the dodgy science used to promote this horrendous reset of human society.

Chris Morrison is the Daily Sceptic’s Environment Editor.

Junk Science Alert: Met Office Set to Ditch Actual Temperature Data in Favour of Model Predictions

From The Daily Sceptic

BY CHRIS MORRISON

The alternative climate reality that the U.K. Met Office seeks to occupy has moved a step nearer with news that a group of its top scientists has proposed adopting a radical new method of calculating climate change. The scientific method of calculating temperature trends over at least 30 years should be ditched, and replaced with 10 years of actual data merged with model projections for the next decade. The Met Office undoubtedly hopes that it can point to the passing of the 1.5°C ‘guard-rail’ in short order. This is junk science-on-stilts, and is undoubtedly driven by the desire to push the Net Zero collectivist agenda.

In a paper led by Professor Richard Betts, the Head of Climate Impacts at the Met Office, it is noted that the target of 1.5°C warming from pre-industrial levels is written into the 2016 Paris climate agreement and breaching it “will trigger questions on what needs to be done to meet the agreement’s goal”. Under current science-based understandings, the breaching of 1.5°C during anomalous warm spells of a month or two, as happened in 2016, 2017, 2019, 2020 and 2023, does not count. Even going above 1.5°C for a year in the next five years would not count. A new trend indicator is obviously needed. The Met Office proposes adding just 10 years’ past data to forecasts from a climate model programmed to produce temperature rises of up to 3.2°C during the next 80 years. By declaring an average 20-year temperature based around the current year, this ‘blend’ will provide “an instantaneous indicator of current warming”.

It will do no such thing. In the supplementary notes to the paper, the authors disclose that they have used a computer model ‘pathway’, RCP4.5, that allows for a possible rise in temperatures of up to 3.2°C within 80 years. Given that global temperature has risen by barely more than 0.2°C over the last 25 years, this is a ludicrous stretch of the imagination. Declaring that the 1.5°C threshold, a political target set for politicians, has been passed on the basis of these figures, and using this highly politicised method, would indicate that reality is rapidly departing from the Met Office station.
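For what it is worth, the mechanics of the proposed indicator are easy to sketch. The figures below are invented for illustration (they are not the Met Office’s numbers): ten years of observed warming anomalies are pooled with ten years of model projection and averaged to give a 20-year value centred on the current year.

```python
# Sketch of the 'blended' current-warming indicator described in the paper.
# All anomaly values (in degrees C above pre-industrial) are made up.

current_year = 2024
# assumed observed decade: warming at ~0.02 C/yr
observed = {y: 1.20 + 0.02 * (y - 2015) for y in range(2015, 2025)}
# assumed model decade: programmed to warm faster, at 0.05 C/yr
projected = {y: 1.40 + 0.05 * (y - 2025) for y in range(2025, 2035)}

window = list(observed.values()) + list(projected.values())
blended = sum(window) / len(window)          # the proposed 20-year 'blend'
obs_only = sum(observed.values()) / len(observed)  # purely observational mean
```

Because the model decade is programmed to warm faster than the observed decade, the blended figure necessarily sits above the purely observational average, which is rather the point.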

Using anomalous spikes in global temperature, invariably caused in the short-term by natural variations such as El Niño, is endemic throughout mainstream climate activism. ‘Joining the dots’ of individual bad weather events is now the go-to method to provoke alarm. So easily promoted and popular is the scare that an entire pseudoscience field has grown up using computer models to claim that individual weather events can be attributed to the actions of humans. ‘Weather’ and ‘climate’ have been deliberately confused. Climate trends have been shortened, and the weather somehow extended to suggest a group of individual events indicates a much longer term pattern. Meanwhile, the use of a 30-year trend dates back to the start of reliable temperature records from 1900, and was set almost 100 years ago by the International Meteorological Organisation. It is an arbitrary set period, but gives an accurate temperature trend record, smoothing out the inevitable, but distorting, anomalies.

By its latest actions, the Met Office demonstrates that the old-fashioned scientific way lacks suitability when Net Zero political work needs to be done. Trends can only be detected over time, leading to unwelcome delays in being able to point to an exact period when any threshold has been passed. Whilst accepting that an individual year of 1.5°C will not breach the Paris agreement so-called guard-rail, the Met Office claims that its instant indicator will “provide clarity” and will “reduce delays that would result from waiting until the end of the 20-year period”. The Met Office looks forward to the day when its new climate trend indicator comes with an IPCC ‘confidence’ or ‘high likelihood’ statement such as, “it is likely that the current global warming level has now reached (or exceeded) 1.5°C”. In subsequent years, this might become, “it is very likely that the current global warming level exceeded 1.5°C in year X”.

Why is this latest proposal from the state-funded Met Office junk science-on-stilts? For a variety of reasons, including that climate models have barely an accurate temperature forecast between them, despite 40 years of trying. Inputting opinions that the temperature of the Earth might rise by over 3°C in less than 80 years is hardly likely to improve their accuracy. There are also legitimate questions to be asked about the global temperature datasets that record past temperatures. Well-documented poor placing of measuring devices, unadjusted urban heat effects and frequent retrospective warming uplifts to the overall records do not inspire the greatest of confidence. At its HadCRUT5 global database, the Met Office has added around 30% extra warming over the last few years.

Chris Morrison is the Daily Sceptic’s Environment Editor.

Dr. Jim Advises Panic

Fear, Gloom, and Panic, Oh My! Panic Sells, Calm Saves.

From Watts Up With That?

Guest Post by Willis Eschenbach

I see that my favorite serially failed climate doomcaster, Dr. James Hansen, is at it again. Accompanied by his usual Greek chorus of co-sycophants, he’s written a new paper entitled Global warming in the pipeline, by James E Hansen, Makiko Sato, Leon Simons, Larissa S Nazarenko, Isabelle Sangha, Pushker Kharecha, James C Zachos, Karina von Schuckmann, Norman G Loeb, Matthew B Osman, Qinjian Jin, George Tselioudis, Eunbi Jeong, Andrew Lacis, Reto Ruedy, Gary Russell, Junji Cao, and Jing Li.

The press release quotes Hansen as follows:

“We would be damned fools and bad scientists if we didn’t expect an acceleration of global warming,” Hansen said. “We are beginning to suffer the effect of our Faustian bargain. That is why the rate of global warming is accelerating.”

And of course, the press release contains the requisite global warming scare photo complete with a bleached skull in the lower right …

In the underlying paper, Hansen et al. ad infinitum warn us very seriously of a “predicted post-2010 accelerated warming rate”. And how do they know these things?

Models. Yeah, big surprise, I know.

Hmmm, sez I … so I figured I should take a look at the changes in the rate of temperature increase over the last 170 years. To do that, I started by looking at the Berkeley Earth temperature dataset. Then I thought “Well, somebody’s sure to claim I should have used the HadCRUT dataset”, so I threw that in for good measure. Here are the 50-year trailing accelerations for those two surface air temperature datasets. By “50-year trailing accelerations”, I mean the calculated acceleration (or deceleration) for the 50 years preceding a given date.
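For the curious, a trailing acceleration of this kind can be computed by fitting a quadratic over the window and doubling the leading coefficient. The sketch below uses a toy series with a known constant acceleration rather than the actual Berkeley Earth or HadCRUT data:

```python
# Sketch of a "50-year trailing acceleration": fit a quadratic to the 50
# years ending at a given date; twice the quadratic coefficient is the
# acceleration (second derivative). The series here is a toy with a known,
# constant acceleration of 0.0002 C/yr^2, not a real temperature dataset.
import numpy as np

def trailing_acceleration(years, temps, end_year, window=50):
    """Acceleration of a quadratic fitted to the `window` years ending
    at `end_year`."""
    mask = (years > end_year - window) & (years <= end_year)
    a2, a1, a0 = np.polyfit(years[mask], temps[mask], 2)  # t ~ a2*y^2 + ...
    return 2.0 * a2

years = np.arange(1850, 2024)
temps = 0.0001 * (years - 1850) ** 2   # toy series, acceleration = 0.0002
acc = trailing_acceleration(years, temps, 2023)
```

Run over a real dataset, the same function evaluated at each year traces out curves like those in Figure 1.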

Figure 1. 50-year trailing acceleration, Berkeley Earth and HadCRUT global mean temperature datasets.

As you can see, at different times, there has been both acceleration and deceleration in the rate of temperature change over the last 170 years.

Of particular interest, and in total contradiction to James Hansen’s claim of a “predicted post-2010 accelerated warming rate”, since about 1990 or so the 50-year trailing acceleration has been decreasing. Ironically, the rate of warming actually began decelerating around 2010. And at present, acceleration over the final half-century of the record is approximately zero.

Go figure.

Mentions in their paper:

  • OBSERVATIONS: 11
  • MODELS: 148

In the midst of all of this, what’s actually going on with the temperature? Here are six different datasets.

Figure 2. CEEMD smooths, global mean temperature anomalies from six datasets. Anomalies are taken around the starting point in each smoothed dataset. Note that JMA only goes to Dec 2022, so it misses the final uptick.

There are some curiosities in Figure 2.

  • All six show that temperatures have been decreasing since the peak of the 2018 El Nino event. While such decreases are not uncommon in the record (see post-1998 El Nino in Fig. 2 above), it sure ain’t acceleration.
  • GISS, HadCRUT5, and Berkeley Earth are almost indistinguishable. HadCRUT5, plotted first, hardly peeks out from behind the other two.
  • Over the first twenty years up to the low temperatures around the year 2000, all six datasets are in agreement. After that, RSS goes high and JMA and UAH go low. What changed?
  • Finally, saying that there is some kind of “acceleration” post-2010 as Hansen et all of them claim is a scientific joke. The time period is far too short and the temperature variations are far too complex to say anything about possible acceleration.

Rain predicted tonight, wonderful rain in our dry part of the planet. My very best wishes to all, life is short, enjoy. It does no good to complain about the coming storm, so we might as well learn to dance in the rain …

w.

PS—When you comment, quote the exact words you are discussing. I can defend my words. I can’t defend your interpretation of my words. Thanks.


Models Vs. Reality: Sea Turtle Edition

Climate computer simulations have run hotter than reality since their inception.

From Watts Up With That?

A new paper published at the end of October waxes dramatic over the dreaded consequences that model projections of the dreaded climate change hold for sea turtle reproduction.

https://onlinelibrary.wiley.com/doi/10.1111/gcb.16991

Abstract

Sea turtles are vulnerable to climate change since their reproductive output is influenced by incubating temperatures, with warmer temperatures causing lower hatching success and increased feminization of embryos. Their ability to cope with projected increases in ambient temperatures will depend on their capacity to adapt to shifts in climatic regimes. Here, we assessed the extent to which phenological shifts could mitigate impacts from increases in ambient temperatures (from 1.5 to 3°C in air temperatures and from 1.4 to 2.3°C in sea surface temperatures by 2100 at our sites) on four species of sea turtles, under a “middle of the road” scenario (SSP2-4.5). Sand temperatures at sea turtle nesting sites are projected to increase from 0.58 to 4.17°C by 2100 and expected shifts in nesting of 26–43 days earlier will not be sufficient to maintain current incubation temperatures at 7 (29%) of our sites, hatching success rates at 10 (42%) of our sites, with current trends in hatchling sex ratio being able to be maintained at half of the sites. We also calculated the phenological shifts that would be required (both backward for an earlier shift in nesting and forward for a later shift) to keep up with present-day incubation temperatures, hatching success rates, and sex ratios. The required shifts backward in nesting for incubation temperatures ranged from −20 to −191 days, whereas the required shifts forward ranged from +54 to +180 days. However, for half of the sites, no matter the shift the median incubation temperature will always be warmer than the 75th percentile of current ranges. Given that phenological shifts will not be able to ameliorate predicted changes in temperature, hatching success and sex ratio at most sites, turtles may need to use other adaptive responses and/or there is the need to enhance sea turtle resilience to climate warming.

1 INTRODUCTION

The world’s climate is changing at an unprecedented rate (Loarie et al., 2009). As a response, species, from polar terrestrial to tropical marine environments, have started to alter their phenology (e.g., timings of cyclical or seasonal biological events), shift their geographic distribution, and modify their trophic interactions (Dalleau et al., 2012; Parmesan & Yohe, 2003; Walther et al., 2002). Species’ responses to climate change can occur through at least three contrasting but non-exclusive mechanisms: (1) range shifts, (2) phenotypic plasticity, and (3) microevolution via natural selection (Fuentes et al., 2020; Hulin et al., 2009; Waldvogel et al., 2020).

Range shifts might be observed by sea turtles responding to changes in climate by shifting their range to more climatically suitable areas (Abella Perez et al., 2016; Mainwaring et al., 2017). It is crucial that these areas provide the environment necessary for colonization and are conducive to egg incubation (Fuentes et al., 2020; Pike, 2013). However, it has been indicated that areas with climatically suitable environments might be impacted by other stressors (e.g., sea level rise, coastal development), which might hinder the potential adaptive capacity of sea turtles (Fuentes et al., 2020). Phenotypic plasticity allows individuals to cope with environmental changes and relates to the ability of individuals to respond by modifying their behavior, morphology, or physiology in response to an altered environment (Hughes, 2000; Hulin et al., 2009; Waldvogel et al., 2020). Microevolution refers to adaptation occurring because of genetic change in response to natural selection (Lane et al., 2018). Phenotypic plasticity provides the potential for organisms to respond rapidly and effectively to environmental changes and thereby cope with short-term environmental change (Charmantier et al., 2008; Przybylo et al., 2000; Réale et al., 2003). However, phenotypic plasticity alone may not be sufficient to offset against projected impacts from climate change (Gienapp et al., 2008; Schwanz & Janzen, 2008). Microevolution, on the other hand, is thought essential for the persistence of populations faced with long-term directional changes in the environment. However, the ability of microevolutionary responses to counteract the impacts of climate change is unknown, because rates of climate change could outpace potential responses (Hulin et al., 2009; Morgan et al., 2020; Visser, 2008) although see Tedeschi et al. (2015).

It is unclear whether potential adaptive responses by turtles will be sufficient to counteract projected impacts from climate change (Monsinjon, Lopez-Mendilaharsu, et al., 2019; Moran & Alexander, 2014; Morjan, 2003). For example, sea turtles have persisted through large changes in climate during the millions of years that they have existed, demonstrating a biological capacity to adapt (Maurer et al., 2021; Mitchell & Janzen, 2010; Rage, 1998). Nevertheless, there is growing concern over the potential impacts that projected temperature increases might have on sea turtles (Patrício et al., 2021). Temperature plays a central role in sea turtle embryonic development, hatching success, hatchling sex ratios (Hays et al., 2017; Standora & Spotila, 1985), hatchling morphology, energy stores, and locomotor performance (Booth, 2017). Sea turtle eggs only successfully incubate within a narrow thermal range (25 and ~35°C), with incubation above the thermal threshold resulting in hatchlings with higher morphological abnormalities and lower hatching success (Howard et al., 2014; Miller, 1985). Furthermore, sea turtles have temperature-dependent sex determination, a process by which the incubation temperature determines the sex of hatchlings (Mrosovsky, 1980). The pivotal temperature (PT ~28.9–30.2°C for the species studied here, Figure S1), where a 1:1 sex ratio is produced, is centered within a transitional range of temperatures (~1.6–5°C, Figure S1), that generally produces mixed sex ratios. Values above the PT will produce mainly female hatchlings while values below produce mainly males (Mrosovsky, 1980).

Thus, projected increases in temperature may cause feminization of sea turtle populations and decrease reproductive success (Patrício et al., 2021). Many studies have suggested that sea turtles may adapt to increases in temperature by altering their nesting behavior, through changes in their nesting distribution, and nest-site choice (Kamel & Mrosovsky, 2006; Morjan, 2003), and by shifting nesting to cooler months (Almpanidou et al., 2018; Dalleau et al., 2012; Pike et al., 2006; Weishampel et al., 2004). Earlier nesting has already occurred in some turtle populations as a response to climatic warming (e.g., Pike et al., 2006; Weishampel et al., 2004). However, it is unclear whether phenological and behavioral shifts can sufficiently buffer the effects of rising temperatures (Almpanidou et al., 2018; Laloë & Hays, 2023; Monsinjon, Lopez-Mendilaharsu, et al., 2019). Although two other studies (Almpanidou et al., 2018; Laloë & Hays, 2023) have explored whether earlier shifts in phenology can preserve the present-day thermal niche of sea turtle nesting environment in a changing climate, only one other study (Monsinjon, Lopez-Mendilaharsu, et al., 2019) explores the implications of phenological responses to sea turtle reproductive output (hatching success and primary sex ratio), of which they focused on loggerhead turtles (Caretta caretta). Given that different sea turtle species have different spatial–temporal nesting patterns, we expand from this study focused on loggerhead turtles to assess the extent to which phenological shifts by four different species of sea turtles could mitigate increases in temperature at different sea turtle nesting sites globally to maintain the reproductive output of affected populations. Furthermore, to build on previous work, we explore whether nesting populations could benefit from both an earlier and a later phenological shift. 
To do so, we calculated the shift (backward and forward, respectively) that would be required for incubation temperature, hatching success, and sex ratio to stay similar to current ranges. In doing so we are the first study to date to investigate the implications of a later nesting by sea turtles.

https://onlinelibrary.wiley.com/doi/10.1111/gcb.16991

I know that the study is making predictions or projections decades out, but other activist scientists, a caterwauling media, and compliant politicians all tell us, screw the models, we are currently in the throes of a CLIMATE CRISIS already!!!! Everything is suffering!

Hottest July ever signals ‘era of global boiling has arrived’ says UN chief

How the climate crisis is changing hurricanes

Warming Oceans Are Making the Climate Crisis Significantly Worse

And now reality. Let’s check in on how those sea turtles are handling the boiling oceans and super rapidly intensifying super hurricanes.

South Florida beaches in Broward County saw a record number of leatherback sea turtle nests this year.

Leatherback sea turtles laid a record 79 nests along the beaches of Broward County in 2021, almost double the number of nests from the previous record, according to the South Florida SunSentinel. The previous record was 46 nests in 2012.

Florida county beaches see record number of sea turtle nests this year

LMC has currently documented 21,020 nests: 215 from leatherbacks, 14,469 from loggerheads, and 6,336 from green turtles. All of these sea turtle species are threatened or endangered.

The center’s researchers attribute the boost in numbers to successful conservation efforts over the past few decades.

Loggerhead Marinelife Center director of research speaks about record-breaking number of sea turtle nests

Last year, the total number of sea turtle nests was an impressive 18,132, but that number has just been blown out of the water with tons of nesting nights still to go in the 2024 season, according to the vice president of research at Loggerhead Marine Life Center, Dr. Justin Perrault.

“We have officially broken the nest record. As of today, we have 21,666 sea turtle nests,” he said.

….

Believe it or not, there are still three whole months left of nest season, so Perrault is predicting a grand total of 27,000 nests come November.

Record broken for most sea turtle nests ever in Palm Beach County

FORT LAUDERDALE, Fla. – Biologists were taken by surprise by a record number of leatherback turtle nests found along some South Florida beaches this year.

The 79 nests laid by endangered turtles along beaches in Broward County this year is nearly double the previous record, the South Florida Sun Sentinel reported. The previous record was 46 in 2012, and the record low for leatherback nests was 12 in 2017.

South Florida beaches see record year for sea turtle nests

Hawk’s Bill Turtle, Fort Lauderdale, Florida, by Charles Rotter
Green Sea Turtle, Lauderdale By The Sea, Florida, by Charles Rotter

Sea turtles in Florida are handling the climate crisis quite well.

New Paper Reveals Classic Logical Fallacy In IPCC Report

From Tallbloke’s Talkshop

September 16, 2023 by oldbrew

The alarmist media bought it at the time, which may well have been the idea.
– – –
A new paper from the Global Warming Policy Foundation reveals that the IPCC’s 2013 report contained a remarkable logical fallacy, says Climate Change Dispatch.

The author, Professor Norman Fenton, shows that the authors of the Summary for Policymakers claimed, with 95% certainty, that more than half of the warming observed since 1950 had been caused by man.

But as Professor Fenton explains, their logic in reaching this conclusion was fatally flawed.

“Given the observed temperature increase, and the output from their computer simulations of the climate system, the IPCC rejected the idea that less than half the warming was man-made. They said there was less than a 5% chance that this was true.”

“But they then turned this around and concluded that there was a 95% chance that more than half of observed warming was man-made.”

This is an example of what is known as the Prosecutor’s Fallacy, in which the probability of a hypothesis given certain evidence is mistakenly taken to be the same as the probability of the evidence given the hypothesis.
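To see why the two probabilities can differ wildly, here is a toy Bayes’ theorem calculation using the cat analogy. All the probabilities are invented purely for illustration, but the structure of the error is exactly as described.

```python
# Numeric illustration of the Prosecutor's Fallacy using the cat/four-legs
# analogy. All probabilities are made up for illustration.
p_cat = 0.05                       # prior: fraction of animals that are cats
p_four_legs_given_cat = 0.99       # cats almost always have four legs
p_four_legs_given_not_cat = 0.60   # but many non-cats have four legs too

# Total probability of observing four legs (law of total probability)
p_four_legs = (p_four_legs_given_cat * p_cat
               + p_four_legs_given_not_cat * (1 - p_cat))

# Bayes' theorem: P(cat | four legs) is NOT P(four legs | cat)
p_cat_given_four_legs = p_four_legs_given_cat * p_cat / p_four_legs

print(f"P(four legs | cat) = {p_four_legs_given_cat:.2f}")   # 0.99
print(f"P(cat | four legs) = {p_cat_given_four_legs:.2f}")   # 0.08
```

With these (invented) numbers, the probability of four legs given a cat is 99%, but the probability of a cat given four legs is only about 8% — swapping the two, as the fallacy does, inflates the conclusion by an order of magnitude.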

As Professor Fenton explains:

“If an animal is a cat, there is a very high probability that it has four legs. However, if an animal has four legs, we cannot conclude that it is a cat. It’s a classic error, and is precisely what the IPCC has done.”

Source here.

Controversy surrounding the Sun’s role in climate change

From Climate Etc.

by Dr. Willie Soon, Dr. Ronan Connolly & Dr. Michael Connolly

Gavin Schmidt at realclimate.org attempts to dismiss our recent papers with pseudo-scientific takedowns. This post takes a deep dive into the controversies.

In the last month, we have co-authored three papers in scientific peer-reviewed journals collectively dealing with the twin problems of (1) urbanization bias and (2) the ongoing debates over Total Solar Irradiance (TSI) datasets:

  1. Soon et al. (2023). Climate. https://doi.org/10.3390/cli11090179. (Open access)
  2. Connolly et al. (2023). Research in Astronomy and Astrophysics. https://doi.org/10.1088/1674-4527/acf18e. (Still in press, but pre-print available here)
  3. Katata, Connolly and O’Neill (2023). Journal of Applied Meteorology and Climatology. https://doi.org/10.1175/JAMC-D-22-0122.1. (Open access)

All three papers have implications for the scientifically challenging problem of the detection and attribution (D&A) of climate change. Many of our insights were overlooked by the UN’s Intergovernmental Panel on Climate Change (IPCC) in their last three Assessment Reports (AR), i.e., IPCC AR4 (2007), IPCC AR5 (2013) and IPCC AR6 (2021). This means that the IPCC’s highly influential claims in those reports that the long-term global warming since the 19th century was “mostly human-caused” and predominantly due to greenhouse gas emissions were scientifically premature and the scientific community will need to revisit them.

So far, the feedback on these papers has been very encouraging. In particular, Soon et al. (2023) seems to be generating considerable interest, with the article being viewed more than 20,000 times on the journal website in the first 10 days since it was published.

However, some scientists who have been actively promoting the IPCC’s attribution statements over the years appear to be quite upset by the interest in our new scientific papers.

This week (September 6th, 2023), a website called RealClimate.org published a blog post by one of their contributors, Dr. Gavin Schmidt, the director of the NASA Goddard Institute for Space Studies (NASA GISS). In this post, Dr. Schmidt is trying to discredit our analysis in Soon et al. (2023), one of our three new papers, using “straw-man” arguments and demonstrably false claims.

As we summarize in Connolly et al. (2023),

“A “straw man” argument is a logical fallacy where someone sets up and then disputes a position that was not actually made by the group being criticised. Instead, the group’s arguments or points are either exaggerated, misrepresented, or completely fabricated by the critics.”

In our opinion, while this rhetorical technique might be good for marketing, political campaigning, “hit pieces”, etc., it is not helpful for either science or developing informed opinions. Instead, we strive in our communications to take a “steel-manning” approach. As we point out in Connolly et al. (2023),

“Essentially, this involves addressing the best and most constructive form of someone’s argument – even if it is not the form they originally presented.”

With that in mind, we will first steel-man Dr. Schmidt’s apparent criticisms of Soon et al. (2023).

Steel-manning Dr. Schmidt’s criticisms of Soon et al. (2023)

In his latest RealClimate post, Dr. Schmidt claims the following:

  1. He asserts that one of the two Total Solar Irradiance (TSI) time series that we considered in Soon et al. (2023) is flawed, outdated and unreliable. (As an aside, this was also one of the 27 TSI series we considered in Connolly et al. (2023), but he does not discuss this paper here.)
  2. He claims that a 2005 paper by Dr. Willie Soon looking at the relationship between TSI and Arctic temperatures has been disproven by the passage of time.
  3. He argues that the “rural-only” Northern Hemisphere land surface temperature record that was one of the two temperature records we analysed in Soon et al. (2023) is not representative of rural global temperature trends or even rural Northern Hemisphere temperature trends.

Later in this post, we will respond to each of Dr. Schmidt’s claims and show how they are incorrect. But, first it may be useful to provide some background information on RealClimate.org.

How reliable is RealClimate.org?

RealClimate.org was created in 2004 as a blog to promote the scientific opinions of the website owners. It is currently run by five scientists: Dr. Gavin Schmidt, Prof. Michael Mann, Dr. Rasmus Benestad, Prof. Stefan Rahmstorf and Prof. Eric Steig.

Anybody with scientific training (or even just a careful reader) who actually reads our paper will be able to see that each of the claims in Dr. Schmidt’s recent blog post is either false, misleading or already clearly addressed by our paper. Therefore, scientifically speaking, his post doesn’t contribute in any productive or meaningful way.

Instead, unfortunately, the goal of his post seems to be to try and stop inquiring minds from reading our paper.

If people were only to read his blog-post then they might be discouraged from even looking at our paper – and therefore wouldn’t find out that Dr. Schmidt’s alleged “criticisms” are without merit.

This type of pseudoscientific “take-down” of any studies that disagree with the RealClimate team’s scientific opinions seems to be a common pattern. For example, in November, they posted a similar “take-down” of our 2021 study that they disdainfully titled “Serious mistakes found in recent paper by Connolly et al.” That post summarized their attempted “rebuttal” of our Connolly et al. (2021) paper by Richardson & Benestad (2022).

Anybody who reads both Connolly et al. (2021) and Richardson & Benestad (2022) will quickly realize that their attempted “rebuttal” was also easily disproven. Indeed, two of our three recent papers explicitly demonstrate that Richardson & Benestad (2022)’s claims were flawed and erroneous.

Again, it seems that the goal of Richardson & Benestad (2022) and RealClimate’s accompanying post in November was NOT to further the science, but rather to discourage people from actually reading Connolly et al. (2021)!

Connolly et al. (2023) is our formal reply to Richardson & Benestad (2022)’s attempted rebuttal of our earlier Connolly et al. (2021) paper.

For anybody who is wondering what our response to Richardson & Benestad’s November 2022 RealClimate post is, we recommend reading the full papers themselves.

All three articles were published in the peer-reviewed journal, Research in Astronomy and Astrophysics (RAA for short).

  • Connolly et al. (2021): here
  • Richardson & Benestad (2022): here
  • Our reply, Connolly et al. (2023): abstract and preprint here (final version is still being typeset)

Addressing claim 1: What is the most reliable TSI time series available?

A key challenge that is the subject of considerable ongoing scientific debate and controversy is the question of how Total Solar Irradiance (TSI) has changed since the 19th century and earlier.

It was only in late 1978, during the satellite era, that it became possible to directly measure the TSI from above the Earth’s atmosphere using TSI-measuring instruments onboard satellites.

Even during the satellite era, it is still unclear exactly how TSI has changed because:

  1. Each satellite mission typically only remains active for 10-15 years.
  2. Each satellite instrument gives a different average TSI value.
  3. Each satellite instrument implies subtly different, but significant, trends between sunspot cycles.

These problems can be seen below:

We can see from the above that, even though the data from each satellite mission are different, all of the instruments record the increases and decreases in solar activity over the roughly 11 year “solar cycle” which is observed in many solar activity indicators.

However, because the individual instruments typically only cover 10-15 years and they show different underlying trends relative to each other, it is unclear what other trends in TSI have occurred over the satellite era.

Several different research teams have developed their own satellite composites by combining the above satellite data in different ways and using different assumptions and methodologies. See this CERES-Science post for a summary.
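As a cartoon of what “combining the satellite data” can involve, the sketch below splices overlapping series by shifting each new segment so that it matches the growing composite over their period of overlap. This is only the simplest imaginable approach, assumed here for illustration; the actual ACRIM, PMOD and ROB composites involve instrument degradation corrections and many other methodological choices.

```python
import numpy as np

def splice_overlapping(segments):
    """Combine a time-ordered list of (times, values) satellite segments into
    one composite. Each new segment must overlap the composite; it is shifted
    by the mean level difference over the overlapping dates, then its
    non-overlapping points are appended. A toy sketch only."""
    times = np.asarray(segments[0][0], dtype=float)
    values = np.asarray(segments[0][1], dtype=float)
    for t, v in segments[1:]:
        t = np.asarray(t, dtype=float)
        v = np.asarray(v, dtype=float)
        both = np.intersect1d(times, t)  # overlapping dates
        # offset = mean composite level minus mean segment level over overlap
        offset = (values[np.isin(times, both)].mean()
                  - v[np.isin(t, both)].mean())
        new = ~np.isin(t, times)
        times = np.concatenate([times, t[new]])
        values = np.concatenate([values, v[new] + offset])
    order = np.argsort(times)
    return times[order], values[order]
```

Even in this toy version, the final composite depends on which segment is taken as the reference and how the overlap offsets are computed — a hint of why ACRIM, PMOD and ROB can reach different trends from the same raw data.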

Some of the main recent composites are:

  1. “ACRIM” – The ACRIM group finds that in addition to the 11 year solar cycle, there are important trends between each cycle. They find there was a significant increase in TSI between each solar minimum and maximum in the 1980s and 1990s, followed by a slight decrease in the early 2000s. See e.g., Scafetta et al. (2019).
  2. “PMOD” – The PMOD group applies several adjustments to the data of some of the early satellite missions and uses different methodological choices and assumptions. They find there has been a slight decrease in TSI between each of the cycles, but that it has been quite modest. See e.g., Montillet et al. (2022).
  3. “RMIB/ROB” – The ROB group (previously called RMIB) argues that there has been almost no change in TSI over the satellite era other than the 11 year solar cycle. See e.g., Dewitte & Nevens (2016).
  4. “The Community Composites” – Dudok de Wit et al. (2017) offer two different TSI composites. One, using the original satellite data, implies a reconstruction intermediate between the RMIB and ACRIM composites. The other, using the PMOD-adjusted satellite data, implies a reconstruction similar to the PMOD composite.

For the pre-satellite era, we don’t have direct measurements. Instead, researchers have to rely on “solar proxies” that they hope are accurately capturing important aspects of changing solar activity.

Some of the proxies include: sunspot numbers, group sunspot numbers, solar cycle lengths, the ratios of penumbra and umbra features of sunspots, bright spots in the sun’s photosphere (called solar faculae), cosmogenic isotope measurements, etc.

Typically, the solar proxies are calibrated against the satellite measurements during the satellite era. The calibrated solar proxies are then used to estimate the changes in TSI during the pre-satellite era.
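In its simplest conceivable form, that calibration step can be pictured as an ordinary least-squares fit over the satellite era, then extrapolated back in time. This is a deliberate oversimplification, assumed for illustration only; published reconstructions differ from it in many important details.

```python
import numpy as np

def calibrate_proxy(proxy_overlap, tsi_overlap, proxy_full):
    """Fit TSI = a*proxy + b over the satellite-era overlap, then apply the
    fitted relation to the full proxy record to hindcast pre-satellite TSI.
    A toy sketch of 'calibrating a solar proxy against satellite
    measurements'; not the method of any particular reconstruction."""
    a, b = np.polyfit(np.asarray(proxy_overlap, dtype=float),
                      np.asarray(tsi_overlap, dtype=float), 1)
    return a * np.asarray(proxy_full, dtype=float) + b
```

The choice of proxy, overlap period and regression form all feed directly into the hindcast — which is one reason different research teams, starting from the same satellite era, arrive at such different long-term TSI reconstructions.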

Already you are probably thinking this is a complex and challenging problem. You are correct! Although some scientists act as if these problems have all been fully resolved, those of us who have been actively researching them for many years can tell you that it is a thorny and contentious subject.

Depending on (a) which satellite composite is used; (b) which solar proxies are used; and (c) what methodologies are used, different research teams can develop very different long-term TSI reconstructions.

For example,

  1. Matthes et al. (2017) is the one IPCC AR6 used – based on the average of two other TSI reconstructions – Krivova et al. (2007)’s “SATIRE” and Coddington et al. (2016)’s “NRLTSI2”. All three match well to the PMOD composite.
  2. Dewitte et al. (2022) is a simple reconstruction based on simply rescaling the sunspot number record to match the RMIB composite.
  3. Egorova et al. (2018) developed 4 different estimates. Each shows far more variability over the last few centuries than the IPCC estimates.
  4. In a 2019 NASA study, Scafetta et al. (2019) updated the original Hoyt and Schatten (1993) TSI reconstruction using the ACRIM composite. This is a somewhat unique reconstruction because unlike most of the other reconstructions that only use one or two solar proxies, Hoyt and Schatten used five solar proxies in order to capture multiple aspects of solar variability.
  5. Penza et al. (2022) implies that TSI has changed significantly over the past century or so, but not as much as the Egorova et al. (2018) or the updated Hoyt and Schatten (1993) reconstruction.

In Connolly et al. (2023), we analysed a total of 27 TSI reconstructions, including all of the above. However, in Soon et al. (2023), for simplicity, we focused on just two of the above reconstructions – Matthes et al. (2017) and the ACRIM-updated Hoyt & Schatten reconstruction.

Dr. Schmidt clearly does not like either the ACRIM composite or Hoyt & Schatten’s original TSI composite. We can understand why! It implies a much larger role for the Sun in the climate changes since the 19th century than the RealClimate team claims exist.

However, it is worth noting that Hoyt and Schatten (1993) was used as one of the 6 TSI series considered by the CMIP3 modelling project that the IPCC used for their detection and attribution analysis in their 4th Assessment Report (2007). This can be confirmed by checking the list of “SOL” (i.e., TSI) time series on pages 11-12 of the Supplementary Material for Chapter 9 of IPCC AR4 Working Group 1.

The CMIP5 and CMIP6 modelling projects that contributed to the 2013 and 2021 AR5 and AR6 reports did not consider Hoyt and Schatten (1993) as a potential TSI series. However, this seems to be partly due to the influence of Dr. Schmidt. He was the lead author of Schmidt et al. (2011), i.e., the paper recommending the climate modelling forcings to be used for the PMIP3 and CMIP5 projects.

At any rate, as we mentioned above, in Connolly et al. (2023) we considered a total of 27 different TSI reconstructions and we still reached similar conclusions to Soon et al. (2023). So, the specific choice of Hoyt and Schatten is just one way to demonstrate that the IPCC was premature in their AR6 attribution.

Addressing claim 2: Was Soon (2005) wrong?

In Soon (2005), Dr. Soon noticed a remarkable correlation between the Hoyt and Schatten (1993) TSI series and Arctic temperatures from 1875 to 2000.

In a follow-up paper, Soon (2009) repeated his analysis using (a) a newer version of the Hoyt and Schatten TSI series that had been updated to 2007 and (b) NASA GISS’ Arctic temperature record from 1880-2007.

The comparison is shown below adapted from Figure A.1 of Soon (2009):

In this week’s blog post, Dr. Schmidt conceded that the fit looked ok in 2005, but he claims it no longer holds:

“But time marches on, and what might have looked ok in 2005 (using data that only went to 2000) wasn’t looking so great in 2015.”

Dr. Schmidt then showed a plot he did in 2015 using a different TSI record from that used by Soon. His reanalysis failed to identify a compelling correlation when applied to the updated NASA GISS Arctic temperature record.

On the other hand, also in 2015, as part of our analysis of Northern Hemisphere rural temperature trends, in Soon et al. (2015) we included our own update to the original Soon (2005; 2009) analysis.

Below is Figure 27(d) of Soon et al. (2015). The blue line represents Arctic temperatures, while the dashed red line represents TSI:

Dr. Schmidt’s attempt to update Soon (2005)’s analysis to 2015, by using a different TSI record, failed.

In contrast, Soon et al. (2015)’s update, which used the updated version of the TSI series employed by Soon (2005) and Soon (2009), confirmed the original findings of the earlier studies.

Addressing claim 3: Is our rural-only Northern Hemisphere temperature record representative of genuine climate change?

Dr. Schmidt appears to be confused about the rural-only Northern Hemisphere temperature record that we used as one of two comparative temperature records in Soon et al. (2023) and also as one of five comparative temperature records in Connolly et al. (2023).

We are surprised that he does not seem to have understood how these temperature records were constructed, since the construction of all five temperature records was described in detail in Connolly et al. (2021), along with a detailed discussion of the rationale for each temperature record. Details were also provided in the Soon et al. (2023) paper he was criticizing.

However, perhaps he hasn’t actually taken the time to read Connolly et al. (2021) yet. When Connolly et al. (2021) was published, Dr. Schmidt was asked to comment on the paper by a journalist for The Epoch Times. According to The Epoch Times:

“When contacted about the new paper, Gavin Schmidt, who serves as acting senior advisor on climate at NASA and the director of the Goddard Institute for Space Studies, was also blunt.

“This is total nonsense that no one sensible should waste any time on,” he told The Epoch Times.

He did not respond to a follow-up request for specific errors of fact or reasoning in the new RAA paper.”

– The Epoch Times, August 16th, 2021

One of the key problems we were highlighting in Soon et al. (2023) was the so-called “urbanization bias problem”. It is well-known that urban areas are warmer than the surrounding countryside. This is known as the “Urban Heat Island (UHI)” effect.

Because urban areas still only represent 3-4% of the global land surface, this should not substantially influence global temperatures.

However, most of the weather stations used for calculating the land component of global temperatures are located in urban or semi-urban areas. This is especially so for the stations with the longest temperature records. One reason is that it is harder to staff and maintain a weather station in an isolated, rural location for a century or longer.

As a result, many of the longest station records used for calculating global temperature changes have probably also experienced localized urban warming over the course of their records. This urban warming is not representative of the climate changes experienced by the non-urban world.

Urban warming that gets mistakenly incorporated into the “global temperature” data is referred to as “urbanization bias”.
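The arithmetic behind this bias is worth making concrete. Below is a back-of-envelope sketch of why a station network dominated by urban sites can overstate area-average warming. All the numbers are invented for the example; only the ~3-4% urban land fraction comes from the text above:

```python
# Toy illustration of urbanization bias (all numbers invented):
# suppose the true regional warming over a century is 0.8 degC, urban
# stations record an extra 0.4 degC of local UHI-driven warming on top,
# and urban stations make up 70% of the network even though urban areas
# are only ~3-4% of the land surface.
true_warming = 0.8                   # degC per century, experienced everywhere
uhi_extra = 0.4                      # extra local urban warming, degC
urban_fraction_of_stations = 0.7     # share of the station network
urban_fraction_of_land = 0.035       # share of the actual land surface

# What a naive average over the station network reports:
network_estimate = (urban_fraction_of_stations * (true_warming + uhi_extra)
                    + (1 - urban_fraction_of_stations) * true_warming)

# What a properly area-weighted average would report:
area_true_average = (urban_fraction_of_land * (true_warming + uhi_extra)
                     + (1 - urban_fraction_of_land) * true_warming)

bias = network_estimate - area_true_average
print(f"network estimate:    {network_estimate:.3f} degC")
print(f"area-weighted truth: {area_true_average:.3f} degC")
print(f"urbanization bias:   {bias:.3f} degC")
```

With these invented numbers the network overstates the warming by roughly a quarter of a degree, purely because urban stations are over-represented relative to their land area.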

It is still unclear exactly how much the current global warming estimates are contaminated by urbanization bias. In their most recent report, the IPCC stated optimistically that urbanization bias probably accounted for less than 10% of the land warming. However, they did not offer a robust explanation for why they felt this was so.

Indeed, as described in Connolly et al. (2021), Soon et al. (2023) and Connolly et al. (2023), several scientific studies have suggested that urban biases accounted for more than 10% of the land warming – and possibly much more.

Brief detour: background to how and why we developed our rural-only Northern Hemisphere temperature record

In Connolly et al. (2021), we attempted to resolve the urbanization bias problem by developing a rural-only temperature record that only used temperature records from rural stations or stations that had been explicitly corrected for urbanization bias. However, we faced two major problems that we had been trying to resolve for nearly a decade:

  1. There was a severe shortage of rural stations with temperature records of a century or longer.
  2. Many weather stations with long temperature records are contaminated by other non-climatic biases, such as station moves, changes in instrumentation, etc.

When we looked at how other international groups (including Dr. Schmidt’s group at NASA GISS) were accounting for the non-climatic biases in the data, we discovered that they were not making any effort to contact the station owners for “station history metadata”, i.e., information on any changes associated with the station over its record.

Instead, most groups (including NASA GISS) were relying on automated computer programs that tried to guess when station changes might have introduced a bias. These programs used statistical algorithms that compared each station record to those of neighboring stations and applied “homogenization adjustments” to the data.
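The general idea behind such pairwise algorithms can be sketched as follows. This is a deliberately simplified toy – a single breakpoint test against a neighbor average, run on invented data – not NOAA's actual homogenization code:

```python
import numpy as np

def homogenize(candidate, neighbors, threshold=0.5):
    """Toy pairwise homogenization: compare a candidate station's annual
    temperature anomalies to the mean of its neighbors, find the largest
    step change in the difference series, and, if it exceeds a threshold
    (degC), shift the pre-break segment to remove the step."""
    reference = np.mean(neighbors, axis=0)
    diff = candidate - reference          # ideally only non-climatic signal
    best_step, best_idx = 0.0, None
    for i in range(2, len(diff) - 2):
        step = diff[i:].mean() - diff[:i].mean()
        if abs(step) > abs(best_step):
            best_step, best_idx = step, i
    adjusted = candidate.copy()
    if best_idx is not None and abs(best_step) > threshold:
        adjusted[:best_idx] += best_step  # align pre-break segment
    return adjusted

# Synthetic example: a 1.0 degC jump (e.g. a station move) at year 30
# of a 60-year record, with five bias-free neighbor stations.
rng = np.random.default_rng(0)
true_climate = rng.normal(0, 0.2, 60)
candidate = true_climate + np.r_[np.zeros(30), np.ones(30)]
neighbors = true_climate + rng.normal(0, 0.2, (5, 60))
fixed = homogenize(candidate, neighbors)
```

Note the weakness this sketch shares with the real algorithms: the adjustment is only as good as the reference series. If the neighbors themselves contain gradual urban warming, that warming leaks into the candidate record via the adjustments – essentially the “urban blending” problem discussed later in this article.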

In a series of three “working papers” that two of us (Dr. Ronan Connolly and Dr. Michael Connolly) published in 2014, we described how:

  1. There were serious flaws in the homogenization algorithms used by NOAA, the group whose homogenized data NASA GISS used for their global temperature estimates. (https://oprj.net/articles/climate-science/34)
  2. There were also serious problems with the additional “urbanization bias” adjustment computer program that NASA GISS applied to NOAA’s homogenized data (see https://oprj.net/articles/climate-science/31)
  3. All of the published studies up until at least 2013 that purported to have ruled out urbanization bias as a substantial problem had methodological flaws that meant their conclusions were invalid (see http://oprj.net/articles/climate-science/28)

In April 2015, while we were visiting New York City (along with Dr. Imelda Connolly), we offered to discuss these problems with Dr. Schmidt at the NASA GISS offices and see if possible solutions could be found. Dr. Schmidt declined the invitation, explaining that he was not very familiar with how NASA GISS’ global temperature dataset was constructed. But he kindly arranged for us to discuss the working papers with Dr. Reto Ruedy, the lead scientist in charge of NASA GISS’ temperature dataset (called “GISTEMP”) at the time. All four of us (Dr. Reto Ruedy, Dr. Imelda Connolly, Dr. Michael Connolly and Dr. Ronan Connolly) met in the iconic “Tom’s Restaurant” right beside the NASA GISS office building.

Dr. Ruedy admitted that none of their research team had considered the various problems we had raised in our working papers. We asked him if he could see any problems with our analysis. He said that he couldn’t immediately, but he promised to e-mail us if he could see any mistakes in our analysis. Since he never e-mailed us, we assume that he couldn’t find any errors.

During our meeting with Dr. Ruedy, we warned him of a problem with NOAA’s homogenization algorithm which we call the “urban blending” problem, but others have called the “statistical aliasing problem”. Recently we have confirmed the severity of this problem in Katata et al. (2023).

We warned him that because of this problem, the urban warming biases in the data would become more deeply embedded in his global temperature estimates if he used NOAA’s homogenized version of the data.

He admitted that this was a problem, but he explained that the NASA GISS team in charge of the GISTEMP dataset were only allocated a limited number of hours per week to work on the data. So, he said that they had to trust that NOAA’s homogenization efforts were better than nothing.

We left our meeting with Dr. Ruedy quite disappointed to discover that even NASA GISS’ well-resourced research team didn’t know how to resolve these key scientific problems and were effectively just hoping that someone else was looking after the data.

We decided, if nobody else was going to try and properly resolve these problems, we would try.

At the time, NOAA provided two types of urban rating for each station in their Global Historical Climatology Network (GHCN) dataset – one based on the local population and the other based on the intensity of night-lights in the area. This was version 3 of the GHCN, which NOAA kept updated until late 2019. The current version of GHCN (version 4) does not include any urbanization metrics, so for now our main analysis has focused on version 3. However, we are working on expanding our analysis using version 4 – see Soon et al. (2018), O’Neill et al. (2022) and Katata et al. (2023) for more details on this later work.

We decided to sort all 7,200 of the GHCN stations into three categories: “rural” stations are those rated rural by both GHCN urban metrics; “urban” stations are those rated urban by both metrics; all other stations we classify as “semi-urban”.
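In code, the two-metric classification just described amounts to something like this. The station names and ratings below are invented for illustration, and GHCN v3's actual population and night-light flags use different codes:

```python
# Toy version of the two-metric station classification described above.
def classify(population_rating, nightlight_rating):
    """A station is 'rural' only if BOTH metrics agree it is rural,
    'urban' only if BOTH agree it is urban, else 'semi-urban'."""
    if population_rating == "rural" and nightlight_rating == "rural":
        return "rural"
    if population_rating == "urban" and nightlight_rating == "urban":
        return "urban"
    return "semi-urban"

# Hypothetical stations (names and ratings invented):
stations = [
    ("Station A", "rural", "rural"),
    ("Station B", "urban", "urban"),
    ("Station C", "rural", "urban"),   # the two metrics disagree
]
for name, pop, lights in stations:
    print(f"{name}: {classify(pop, lights)}")
```

Requiring agreement between both metrics is the conservative choice: a station only counts as rural when neither the population data nor the night-lights suggest urbanization.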

Immediately, we found several problems:

  1. Less than 25% of the GHCN stations are “rural”.
  2. Most of the rural stations had very short station records – often only covering 40-50 years or so.
  3. The few rural stations that had long records reaching back to the late-19th century or early-20th century were almost entirely in the Northern Hemisphere – and confined to a few regions: North America, Europe, East Asia and several Arctic locations.
  4. Many of the rural stations with nominally long station records often contained large data gaps and sudden shifts in the average temperature that could potentially be due to non-climatic station changes.

Most of the groups generating global temperature records from the weather station data rely on the temperature homogenization computer programs mentioned above to automatically adjust the original temperature records to remove “non-climatic biases” from the data.

If these homogenization computer programs were as reliable as many scientists have assumed, then these “automatically homogenized” temperature records should no longer be contaminated by non-climatic biases.

However, as we had discovered (and discussed with Dr. Ruedy in our 2015 meeting), these homogenization algorithms have serious statistical problems that introduce new non-climatic biases into the homogenized data.

In subsequent years, we have demonstrated the severity of these biases and statistical problems in several peer-reviewed scientific articles: Soon et al. (2018); O’Neill et al. (2022) and the recent Katata et al. (2023).

Therefore, we realised that the other groups analysing the weather station data were inadvertently making a major scientific blunder by relying uncritically on this automatically “homogenized” data.

Instead, to correct for non-climatic biases in the data, we need to start making more realistic experimentally-based corrections.

We decided to begin our analysis by identifying the areas with the longest rural records and the most information on the non-climatic biases associated with the data. We found that just four regions accounted for more than 80% of all the rural stations with data for the late-19th century/early-20th century. All four regions are in the Northern Hemisphere.

In our opinion, there was simply not enough Southern Hemisphere rural data to construct a global rural-only temperature series that would reach back to the late-19th century.

Therefore, we confined our analysis to the more data-rich Northern Hemisphere.

One region where we think we can expand our analysis in the future is for Europe (as we discuss in the papers). Currently, our European rural temperature analysis is confined to Ireland because we were able to obtain the key station history metadata for the Irish stations from the national meteorological organization (Met Éireann). However, in a recent paper, O’Neill et al. (2022), we carried out a large collaborative effort with scientists across Europe to compile the station history metadata for more than 800 weather stations in 24 European countries – see here for a summary. Most of these European stations are urbanized, but our preliminary analysis suggests that we should be able to use this metadata in the future to develop a more data-rich “rural Europe” temperature record.

How does our rural-only series compare to the standard “urban and rural” record?

The top panel shows the standard Northern Hemisphere land temperature estimates using all stations – urban as well as rural. The bottom panel shows our rural-only temperature estimate.

In Soon et al. (2023) – the paper Dr. Schmidt was complaining about – we consider both temperature estimates. We also consider two different TSI records – the Matthes et al. (2017) TSI series used for the attribution experiments in IPCC AR6 and the updated Hoyt and Schatten TSI series that we discussed earlier.

See below for a summary of some of our key findings:

Dr. Schmidt complains that our Northern Hemisphere rural-only temperature record is not reliable because,

“It’s not a good areal sample of the northern hemisphere, it’s not a good sample of rural stations – many of which exist in the rest of Europe, Australia, Southern Africa, South America etc., it’s not a good sample of long stations (again many of which exist elsewhere).”

This is a rather clever, but deceptive, misrepresentation of the data. Notice how he splits up his statement into two parts.

For the first part, he correctly notes that there is “a good sample of rural stations” in regions other than those we analysed, but he neglects to explain that they are mostly stations with short records.

Indeed, if we look at the number of stations available in the year 2000, there are indeed many rural stations around the globe:

Then in the second part of his statement, he correctly says that there are many long station records outside of the regions we analysed, but he neglects to explain that they are not rural!

Below are the stations with data available in 1880. We have highlighted the three Southern Hemisphere regions he implied we should have also incorporated, i.e., Australia, Southern Africa and South America:

Do you see why Dr. Schmidt’s characterization of the available data was disingenuous?

But, what about other estimates of Northern Hemisphere temperatures?

In Soon et al. (2023), we were assessing the Northern Hemisphere land surface warming (1850-2018) based on the weather station data. However, in Connolly et al. (2023), we also considered an additional three Northern Hemisphere temperature series. One was generated from the Sea Surface Temperature (SST) data. The other two were based on temperature proxy data: (a) tree-ring temperature proxies or (b) glacier-length temperature proxies.

Dr. Schmidt claimed that our rural-only temperature series was not representative of Northern Hemisphere temperature trends. If he is correct, then presumably these other temperature estimates would show very different trends. Let’s see!

What do you think?

Obviously, they are not exactly identical. However, in our opinion, all three of these alternative temperature estimates are broadly similar to our rural-only temperature record.

Therefore, we disagree with Dr. Schmidt’s claim.

Finally, if you recall from the beginning, Dr. Schmidt didn’t like the second TSI series we analysed in Soon et al. (2023). However, in Connolly et al. (2023), we analysed a total of 27 TSI series.

For each temperature record, we carried out statistical analysis in terms of the “natural” and “anthropogenic” (i.e., human-caused) climate drivers that the IPCC used for their attribution experiments. The IPCC only considered two natural drivers – TSI and volcanic eruptions. We used all of the IPCC’s climate driver records, but we repeated our analysis using each of the 27 TSI series in turn.

In both Connolly et al. (2023) and Soon et al. (2023), we adopt a similar approach to the IPCC’s attribution analysis. That is, we compare the results you get by considering:

  1. “Only natural factors” (which the IPCC defines as TSI and volcanic)
  2. “Only anthropogenic factors”.
  3. “Natural and anthropogenic factors”

If you want to see the results from all of these combinations, we recommend reading the full papers. But, for simplicity, let us just compare the first two combinations.
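To make the comparison concrete, here is a minimal sketch of this style of attribution fitting: ordinary least-squares regression of a temperature series onto one driver at a time, comparing goodness-of-fit. All the series below are synthetic toy data; the papers themselves use the IPCC driver records and the 27 published TSI series.

```python
import numpy as np

def fit_r2(temp, driver):
    """Ordinary least-squares fit of temp ~ a*driver + b; returns R^2."""
    A = np.column_stack([driver, np.ones_like(driver)])
    coeffs, _, _, _ = np.linalg.lstsq(A, temp, rcond=None)
    resid = temp - A @ coeffs
    return 1.0 - np.sum(resid**2) / np.sum((temp - temp.mean())**2)

# Synthetic toy series over 1850-2018 (invented for illustration only):
years = np.arange(1850, 2019)
rng = np.random.default_rng(1)
tsi = 1361 + 0.5 * np.sin(2 * np.pi * (years - 1850) / 11)   # toy 11-year cycle
anthro = 2.0 * (years - 1850) / (2018 - 1850)                # toy forcing ramp
temp = 0.1 * (tsi - tsi.mean()) + 0.4 * anthro + rng.normal(0, 0.05, years.size)

print("natural (TSI) only, R^2:", round(fit_r2(temp, tsi), 3))
print("anthropogenic only, R^2:", round(fit_r2(temp, anthro), 3))
```

The point of repeating such fits with 27 different TSI series is that the “natural factors only” R² depends strongly on which TSI reconstruction is chosen, which is why the choice of TSI series matters so much for the attribution debate.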

In Figure 6 of Connolly et al. (2023), we compared the “natural factors only” fittings using three of the best-fitting TSI series to the “anthropogenic factors only” fits. See below:

The TSI record that Dr. Schmidt was complaining about is labelled here as “H1993”. But, notice how for each temperature record, we can obtain similarly good fits using other TSI estimates.

Conclusions

In Soon et al. (2023), we reached the following key conclusions:

“(1) urbanization bias remains a substantial problem for the global land temperature data; (2) it is still unclear which (if any) of the many TSI time series in the literature are accurate estimates of past TSI; (3) the scientific community is not yet in a position to confidently establish whether the warming since 1850 is mostly human-caused, mostly natural, or some combination.”

These conclusions are consistent with our findings in another two of our papers that were also published in the last few weeks – in separate peer-reviewed scientific journals and using different independent analyses.

That is, in Katata et al. (2023), we confirmed that the IPCC’s estimate of the extent of urbanization bias in the global temperature data was much too low. Meanwhile, in Connolly et al. (2023), we concluded that it is “unclear whether the observed warming is mostly human-caused, mostly natural or some combination of both”.

Dr. Schmidt and the RealClimate team apparently do not want you to read our papers. They seem to be afraid that if you did, their claims on climate change would no longer seem so convincing.

In contrast, we are not afraid of people reading papers that disagree with us. In fact, we encourage people to read multiple scientific perspectives and form their own opinions. We agree with J.S. Mill’s quote below:

Below are links to the papers we mentioned above. We look forward to further discussion of our papers.

References mentioned

  1. W. Soon, R. Connolly, M. Connolly, S.-I. Akasofu, S. Baliunas, J. Berglund, A. Bianchini, W.M. Briggs, C.J. Butler, R.G. Cionco, M. Crok, A.G. Elias, V.M. Fedorov, F. Gervais, H. Harde, G.W. Henry, D.V. Hoyt, O. Humlum, D.R. Legates, A.R. Lupo, S. Maruyama, P. Moore, M. Ogurtsov, C. ÓhAiseadha, M.J. Oliveira, S.-S. Park, S. Qiu, G. Quinn, N. Scafetta, J.-E. Solheim, J. Steele, L. Szarka, H.L. Tanaka, M.K. Taylor, F. Vahrenholt, V.M. Velasco Herrera and W. Zhang (2023). “The Detection and Attribution of Northern Hemisphere Land Surface Warming (1850–2018) in Terms of Human and Natural Factors: Challenges of Inadequate Data”. Climate, 11(9), 179. https://doi.org/10.3390/cli11090179. (Open access)
  2. R. Connolly, W. Soon, M. Connolly, S. Baliunas, J. Berglund, C.J. Butler, R.G. Cionco, A.G. Elias, V. Fedorov, H. Harde, G.W. Henry, D.V. Hoyt, O. Humlum, D.R. Legates, N. Scafetta, J.-E. Solheim, L. Szarka, V.M. Velasco Herrera, H. Yan and W.J. Zhang (2023). “Challenges in the detection and attribution of Northern Hemisphere surface temperature trends since 1850”. Research in Astronomy and Astrophysics. https://doi.org/10.1088/1674-4527/acf18e. (In press; pre-print available)
  3. G. Katata, R. Connolly and P. O’Neill (2023). “Evidence of urban blending in homogenized temperature records in Japan and in the United States: implications for the reliability of global land surface air temperature data”. Journal of Applied Meteorology and Climatology, 62(8), 1095-1114. https://doi.org/10.1175/JAMC-D-22-0122.1. (Open access)
  4. R. Connolly, W. Soon, M. Connolly, S. Baliunas, J. Berglund, C.J. Butler, R.G. Cionco, A.G. Elias, V.M. Fedorov, H. Harde, G.W. Henry, D.V. Hoyt, O. Humlum, D.R. Legates, S. Lüning, N. Scafetta, J.-E. Solheim, L. Szarka, H. van Loon, V.M. Velasco Herrera, R.C. Willson, H. Yan and W. Zhang (2021). “How much has the Sun influenced Northern Hemisphere temperature trends? An ongoing debate”. Research in Astronomy and Astrophysics, 21, 131. https://doi.org/10.1088/1674-4527/21/6/131. (Open access)
  5. M.T. Richardson and R.E. Benestad (2022). “Erroneous use of Statistics behind Claims of a Major Solar Role in Recent Warming”. Research in Astronomy and Astrophysics, 22, 125008. http://dx.doi.org/10.1088/1674-4527/ac981c.
  6. N. Scafetta, R.C. Willson, J.N. Lee and D.L. Wu (2019). “Modeling Quiet Solar Luminosity Variability from TSI Satellite Measurements and Proxy Models during 1980–2018”. Remote Sensing, 11(21), 2569. https://doi.org/10.3390/rs11212569. (Open access)
  7. J.-P. Montillet, W. Finsterle, G. Kermarrec, R. Sikonja, M. Haberreiter, W. Schmutz and T. Dudok de Wit (2022). “Data fusion of total solar irradiance composite time series using 41 years of satellite measurements”. Journal of Geophysical Research: Atmospheres, 127, e2021JD036146. https://doi.org/10.1029/2021JD036146.
  8. S. Dewitte and S. Nevens (2016). “The Total Solar Irradiance Climate Data Record”. The Astrophysical Journal, 830, 25. https://doi.org/10.3847/0004-637X/830/1/25. (Open access)
  9. T. Dudok de Wit, G. Kopp, C. Fröhlich and M. Schöll (2017). “Methodology to create a new total solar irradiance record: Making a composite out of multiple data records”. Geophysical Research Letters, 44(3), 1196-1203. https://doi.org/10.1002/2016GL071866. (Open access)
  10. K. Matthes, B. Funke, M.E. Andersson, L. Barnard, J. Beer, P. Charbonneau, M.A. Clilverd, T. Dudok de Wit, M. Haberreiter, A. Hendry, C.H. Jackman, M. Kretzschmar, T. Kruschke, M. Kunze, U. Langematz, D.R. Marsh, A.C. Maycock, S. Misios, C.J. Rodger, A.A. Scaife, A. Seppälä, M. Shangguan, M. Sinnhuber, K. Tourpali, I. Usoskin, M. van de Kamp, P.T. Verronen and S. Versick (2017). “Solar forcing for CMIP6 (v3.2)”. Geoscientific Model Development, 10, 2247–2302. https://doi.org/10.5194/gmd-10-2247-2017. (Open access)
  11. N.A. Krivova, L. Balmaceda and S.K. Solanki (2007). “Reconstruction of solar total irradiance since 1700 from the surface magnetic flux”. Astronomy and Astrophysics, 467, 335-346. https://doi.org/10.1051/0004-6361:20066725. (Open access)
  12. O. Coddington, J.L. Lean, P. Pilewskie, M. Snow and D. Lindholm (2016). “A Solar Irradiance Climate Data Record”. Bulletin of the American Meteorological Society, 97(7), 1265-1282. https://doi.org/10.1175/BAMS-D-14-00265.1. (Open access)
  13. S. Dewitte, J. Cornelis and M. Meftah (2022). “Centennial Total Solar Irradiance Variation”. Remote Sensing, 14(5), 1072. https://doi.org/10.3390/rs14051072. (Open access)
  14. T. Egorova, W. Schmutz, E. Rozanov, A.I. Shapiro, I. Usoskin, J. Beer, R.V. Tagirov and T. Peter (2018). “Revised historical solar irradiance forcing”. Astronomy and Astrophysics, 615, A85. https://doi.org/10.1051/0004-6361/201731199. (Open access)
  15. D.V. Hoyt and K.H. Schatten (1993). “A discussion of plausible solar irradiance variations, 1700-1992”. Journal of Geophysical Research Space Physics, 98(A11), 18895-18906. https://doi.org/10.1029/93JA01944.
  16. V. Penza, F. Berrilli, L. Bertello, M. Cantoresi, S. Criscuoli and P. Giobbi (2022). “Total Solar Irradiance during the Last Five Centuries”. The Astrophysical Journal, 937(2), 84. https://doi.org/10.3847/1538-4357/ac8a4b. (Open access)
  17. W. W.-H. Soon (2005). “Variable solar irradiance as a plausible agent for multidecadal variations in the Arctic-wide surface air temperature record of the past 130 years”. Geophysical Research Letters, 32, 16. https://doi.org/10.1029/2005GL023429. (Open access)
  18. W. W.-H. Soon (2009). “Solar Arctic-Mediated Climate Variation on Multidecadal to Centennial Timescales: Empirical Evidence, Mechanistic Explanation, and Testable Consequences”. Physical Geography, 30, 144-184. https://doi.org/10.2747/0272-3646.30.2.144.
  19. W. Soon, R. Connolly and M. Connolly (2015). “Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century”. Earth-Science Reviews, 150, 409-452. https://doi.org/10.1016/j.earscirev.2015.08.010.
  20. P. O’Neill, R. Connolly, M. Connolly, W. Soon, B. Chimani, M. Crok, R. de Vos, H. Harde, P. Kajaba, P. Nojarov, R. Przybylak, D. Rasol, Oleg Skrynyk, Olesya Skrynyk, P. Štěpánek, A. Wypych and P. Zahradníček (2022). “Evaluation of the homogenization adjustments applied to European temperature records in the Global Historical Climatology Network dataset”. Atmosphere, 13(2), 285. https://doi.org/10.3390/atmos13020285. (Open access)
  21. W. W.-H. Soon, R. Connolly, M. Connolly, P. O’Neill, J. Zheng, Q. Ge, Z. Hao and H. Yan (2018). “Comparing the current and early 20th century warm periods in China”. Earth-Science Reviews, 185, 80-101. https://doi.org/10.1016/j.earscirev.2018.05.013.
  22. G.A. Schmidt, J.H. Jungclaus, C.M. Ammann, E. Bard, P. Braconnot, T.J. Crowley, G. Delaygue, F. Joos, N.A. Krivova, R. Muscheler, B.L. Otto-Bliesner, J. Pongratz, D.T. Shindell, S.K. Solanki, F. Steinhilber and L.E.A. Vieira (2011). “Climate forcing reconstructions for use in PMIP simulations of the last millennium (v1.0)”. Geoscientific Model Development, 4, 33–45. https://doi.org/10.5194/gmd-4-33-2011. (Open access)

Walker Circulation study is a damp squib for climate worriers, contradicts models

From Tallbloke’s Talkshop

August 25, 2023 by oldbrew

Walker Circulation – El Niño conditions [image credit: NOAA]

The paper this article is based on informs us that ‘The Pacific Walker circulation (PWC) has an outsized influence on weather and climate worldwide. Yet the PWC response to external forcings is unclear’. Under a headline saying: ‘Greenhouse gases are changing air flow over the Pacific Ocean, raising Australia’s risks of extreme weather’, the article here offers almost nothing to support an argument for any human-caused climate effects ‘in the industrial era’. The paper is somewhat embarrassing for climate models: ‘Most climate models predict that the PWC will ultimately weaken in response to global warming. However, the PWC strengthened from 1992 to 2011, suggesting a significant role for anthropogenic and/or volcanic aerosol forcing, or internal variability’. So that role could be anything or nothing, but the models trended the wrong way anyway. The search for ‘anthropogenic fingerprints’ continues.
– – –
After a rare three-year La Niña event brought heavy rain and flooding to eastern Australia in 2020-22, we’re now bracing for the heat and drought of El Niño at the opposite end of the spectrum, says The Conversation (via Phys.org).

But while the World Meteorological Organization has declared an El Niño event is underway, Australia’s Bureau of Meteorology is yet to make a similar declaration. Instead, the Bureau remains on “El Niño alert.”

The reason for this discrepancy is what’s called the Pacific Walker Circulation. The pattern and strength of air flows over the Pacific Ocean, combined with sea surface temperatures, determines whether Australia experiences El Niño or La Niña events.

In our new research, published in the journal Nature, we asked whether the buildup of greenhouse gases in the atmosphere had affected the Walker Circulation. We found the overall strength hasn’t changed yet, but instead, the year-to-year behavior is different.

Switching between El Niño and La Niña conditions has slowed over the industrial era. That means in the future we could see more of these multi-year La Niña or El Niño type events. So we need to prepare for greater risks of floods, drought and fire. [Talkshop comment – alarmist psychobabble].

An ocean-atmosphere climate system

La Niña and its counterpart El Niño are the two extremes of the El Niño Southern Oscillation—a coupled ocean-atmosphere system that plays a major role in global climate variability.

The Walker Circulation is the atmospheric part. Air rises over the Indo-Pacific Warm Pool (a region of the ocean that stays warm year-round) and flows eastward high in the atmosphere. Then it sinks back to the surface over the eastern equatorial Pacific and flows back to the west along the surface, forming the Pacific trade winds. In short, it loops in an east-west direction across the equatorial Pacific Ocean.

But the Walker Circulation doesn’t always flow with the same intensity—sometimes it is stronger, and sometimes it is weaker.

So far, the Walker Circulation is what’s missing from the current El Niño event developing in the Pacific Ocean: it has not weakened enough for the Bureau to declare an El Niño event.

What’s happening to the Walker Circulation?

The Walker Circulation is a major influence on weather and climate in many places around the world, not just Australia.

A stronger-than-usual Walker Circulation even contributed to the “global warming slowdown” of the early 2000s. This is because a stronger Walker Circulation is often associated with slightly cooler global temperature.

So we need to know how it is going to behave in the future. To do that, we first need to know if—and if so, how—the Walker Circulation’s behavior has changed due to human activities. And to do that, we need information about how the Walker Circulation behaved before humans started affecting the climate system. [Talkshop comment – unsupported assertion].

We reconstructed Walker Circulation variability over the past millennium. We used global data from ice cores, trees, lakes, corals and caves to build a picture of how the Walker Circulation changed over time.

We found that on average, there has not yet been any industrial-era change in the strength of the Walker Circulation.

This was surprising, because computer simulations of Earth’s climate generally suggest global warming will ultimately cause a weaker, or more El Niño-like, Walker Circulation.

Full article here.
– – –
Research paper: Forced changes in the Pacific Walker circulation over the past millennium (Aug. 2023)

New Scientist: How worried should we be about climate change?

From Net Zero Watch

How worried should we be, asks New Scientist in a Climate Change Special Issue. The 19th August issue is billed as a guide to a year of extreme weather – “a year of extremes” – when 2023 is barely halfway over.

In a New Scientist Climate Special Report, senior reporter Michael Le Page asks whether climate change is worse than we thought it would be. Well, it depends upon whom you ask – and New Scientist usually asks the same experts for their unwavering opinions, which, as we shall see, are sometimes just premonitions.

The article in question quotes the usual crew: Peter Stott of the UK Met Office, Piers Forster of the University of Leeds, Zeke Hausfather of Berkeley Earth and Stefan Rahmstorf of the Potsdam Institute for Climate Impact Research in Germany. Together they have been quoted in the New Scientist 109 times.

According to New Scientist: “Here are the key facts you need to know.”

Its first question is: Is the world warming faster than expected?

In short, the conclusion the article draws is no, it isn’t: the temperatures we’re seeing are “well within the range” of climate model predictions. New Scientist adds that even the models of the 1970s were “pretty close”, presumably before a great deal of time and money was expended making them more complicated but not more accurate.

But it’s actually not quite like that. The article quotes Zeke Hausfather of Berkeley Earth, who says: “If anything, temperatures have been a bit on the low end.” So the answer to the first question is the inverse of the habitual media narrative – the world is not warming faster than expected or predicted. The article emphasises that putting July’s weather extremes into context will take “a decade or so”, so that’s a memo for New Scientist’s August 2033 Climate Special, if it’s still around by then.

Next comes the question, are we seeing more extreme weather than predicted?

The answer to this also depends upon whom you ask. They asked Piers Forster, who said he hasn’t seen any physical evidence for more extreme weather … although he thinks it might be possible. New Scientist then asked Peter Stott, who said he thinks there is some evidence that the IPCC may have underestimated … “but the jury is still out.” So, scientifically speaking, that would be another no.

Opinions not evidence

Next up: Have the impacts of our current level of warming been underestimated?

Le Page writes that “coral bleaching and die-off events have been more extensive.” As evidence for this he refers to a March 2022 article by fellow New Scientist journalist Adam Vaughan, who wrote that “unusually warm ocean temperatures have turned corals white on Australia’s Great Barrier Reef, the first ever mass bleaching created by the La Niña weather event.”

The link to the news release produced by the Australian Government Reef Authority on which this article was based is no longer available. But if you look on the same website at more up to date information you will find on 9th August 2023 “Coral Cover – Dynamic and Still Resistant Reef.” Last November it also carried a story, “Coral Spawning – key to reef’s remarkable recovery.”

In order to illustrate how dire things are at the New Scientist, the once respected science journal cherry-picks bad news from last March and ignores good news from just a few days ago. As a reference here is the Reef Authority’s Coral Bleaching true or false page.

Then the magazine asks: Are we closer to tipping points than anticipated?

New Scientist says “Yes, we are, though a great deal of uncertainty remains.” In other words, the answer might equally be “No, we aren’t” or “We actually don’t know.”

New Scientist then claims that the Atlantic Meridional Overturning Circulation (AMOC) is slowing down faster than thought. The consequences of this would be dramatic. It’s something we have covered previously – and concluded there is a very low probability of this happening. Despite the low odds, Stefan Rahmstorf thinks the danger of the AMOC collapsing this century is larger than 10 per cent. He’s entitled to his opinion. Other opinions are available.

Unsurprisingly, important data and factors are omitted from the article. Nowhere does it mention observations showing that more heat from the Sun is being retained: there has been an increase of 0.3 W/m2 since 2019 as the Sun surges towards its current solar maximum. Also, the new regulations reducing the emission of sulphur particulates from ship fuels seem to have made a significant difference, since less incoming radiation is reflected by the cleaner air over the shipping lanes. In fact, over the Northern Hemisphere shipping corridor (a region where the recent heating has been particularly strong) it is estimated that there has been a very large decrease of 2 W/m2 in outgoing shortwave radiation.

In its “key facts that you need to know” New Scientist also omits that Arctic sea ice appears to have stabilised in recent years, that Greenland’s ice mass balance is higher than average, and that the recent global spikes in temperature are very similar to previous spikes in 2016 and 1998 in both land and satellite data. Antarctic sea ice, however, is very low; Judy Curry has published an excellent analysis of the various factors contributing to these developments.

A knee-jerk reaction to the weather events we have seen can result in poor articles that do not give a fair and accurate overall picture of what is really going on. I expect we may review and assess this year’s weather events quite differently once more data are available and more reliable analyses have been done than the rushed, one-sided job performed in the heat of the moment.

Feedback: david.whitehouse@netzerowatch.com