Tag Archives: Temperatures

If you are dry, you fry in the summer — naturally

From CFACT

By Joe Bastardi

There are two ways to keep temperatures well above normal. We saw one of them in the UK this May, the warmest on record, not because of sunshine and dry ground, but because it was cloudy and wet. It was THE NIGHT-TIME LOWS that led to one of the warmest Mays on record for the UK, yet many people did not even realize it because it was so wet and the days were not that warm at all. That is a thumbprint of extra water vapor.

Of late, the Eastern and Southern United States have had warmer nights than days relative to the averages.

Night time lows last 5 years July-Sep:

Now look at the precip.

Where it is very dry, THE MAX TEMPERATURES ARE WAY ABOVE AVERAGE. This is centered over West Texas.

Further east, where it rains a lot, it is not as warm.

Now look at nighttime lows in the areas that were near normal.

They are well above normal.

But where it is dry, the max temps do go through the roof.

Mexico has been very dry and warm since spring.

That dryness is shifting north over the next few months.

And the response is the warmest we have seen on the European model.

Since this is the hottest time of the year, this is extremely impressive. The reason: the higher the averages, the harder it is to exceed them. We could see unheard-of anomalies, up to 5F above normal, in the darker shades. You can also see warmth over eastern Hudson Bay, but with the means lower there, it is easier to produce a bigger deviation.

The mid- and late-summer increase in moisture in the Southwest, which is counted on for rainfall and some cooling, is likely not to be that strong this year. Contrast that with India, where a strong monsoon will set in: they have been very hot, but the rest of their summer will be cooler. In fact, the strong Indian monsoon is linked to a strong hurricane season in the Atlantic, as those waves move west across Africa and then out into the Atlantic. But why would it be dry over the Southwest? The reason is likely the hurricane season itself: when you have heavy precip around the Caribbean and Gulf into eastern Mexico, you have compensating areas of dryness to the northwest.

The entire basin

We have a hot summer for the country, and it may indeed be an endless summer into the fall for much of the nation. The center of the greatest heat RELATIVE TO AVERAGES is likely to be where averages are highest, meaning a challenge to the hottest JAS on record. Combine that with our ideas for an active hurricane season, and there will be plenty to talk about. And, of course, even non-events relative to averages are already being hyped.

So I wanted this idea out there front and center. Last year, the core of the worst heat was Texas and Louisiana. This year, it is farther west.

Get ready for the heat to get turned up, not only in temperatures but in the weaponizing of weather.

No, ScienceNews, Your “Ocean’s Record-Breaking Hot Streak” Claims Are False

From ClimateRealism

By Anthony Watts

A recent ScienceNews (SN) article claims that ocean temperatures are out of control in a year-long record-breaking hot streak. This is false. Numerous ocean temperature data sets show no such record-breaking values and the source SN cited to support its claims was thoroughly discredited when it made similar “record breaking” claims last year.

The entire claim of the article is based on one data set, which is seen below in the SN article:

The problem is that this single source isn’t even an “official” ocean temperature data set; rather, the source is: “Climate Reanalyzer/Univ. of Maine • Visualization: C. Crockett.” In fact, that isn’t temperature data at all, but climate model output. The data SN cited was not official data, but came from a private website run by the University of Maine. Examining actual data sets shows that the SN claim of record ocean heat is a gross error.

The about page for ClimateReanalyzer.org (the source of the SN claim) says this:

Climate Reanalyzer began in early 2012 as a platform for visualizing climate and weather forecast models. Site content is organized into three general categories: Weather Forecasts, Climate Data, and Research Tools. Pages within the first two groups are the easiest to use and include maps, map animations, and interactive time series charts (with data export options). Research Tools include pages for generating custom maps, time series, and linear correlations from monthly climate reanalysis, gridded data, and climate models.

In other words, they take in temperature data and use models to “reanalyse” it, producing a new output.

This isn’t the first time a media outlet has been duped by Climate Reanalyzer into using model output presented as data rather than actual data. Last year, The Associated Press (AP), among many other media sources, reported that July 4th was the hottest day since records began. Irresponsible fear mongering followed, such as this CNBC article, in which reporter Sam Meredith wrote:

The planet’s average daily temperature climbed to 17.18 degrees Celsius (62.9 degrees Fahrenheit) on Tuesday, according to the University of Maine’s Climate Reanalyzer, an unofficial tool that is often used by climate scientists as a reference to the world’s condition.

“Monday, July 3rd was the hottest day ever recorded on Planet Earth. A record that lasted until … Tuesday, July 4th,” said Bill McGuire, professor emeritus of geophysical and climate hazards at University College London, via Twitter.

“Totally unprecedented and terrifying,” he added.

Almost immediately after the claims were published, they were thoroughly debunked by experts citing unreanalyzed data, posted widely on social media. The AP had to run a retraction.

Climate Realism debunked that claim then, noting:

All those media outlets missed the fact that they were looking at the output of a climate model, not actually measured temperatures. Only one news outlet, The Associated Press, bothered to print a sensible caveat, in its July 5th story “Earth hit an unofficial record high temperature this week – and stayed there,” reporting:

On Thursday, the National Oceanic and Atmospheric Administration (NOAA) distanced itself from the designation, compiled by the University of Maine’s Climate Reanalyzer, which uses satellite data and computer simulations to measure the world’s condition. That metric showed that Earth’s average temperature on Wednesday remained at an unofficial record high, 62.9 degrees Fahrenheit (17.18 degrees Celsius), set the day before.

Bowing to pressure for corrections, the AP updated its story on July 7th to include this single yet very important paragraph:

NOAA, whose figures are considered the gold standard in climate data, said in a statement Thursday that it cannot validate the unofficial numbers. It noted that the reanalyzer uses model output data, which it called “not suitable” as substitutes for actual temperatures and climate records. The agency monitors global temperatures and records on a monthly and an annual basis, not daily.

So, in the space of two days, the media claims went from temperature data that was “[t]otally unprecedented and terrifying,” to temperature data that was not suitable for purpose.

Similarly, the computer generated reanalysis of ocean temperatures cited by SN isn’t suitable for purpose in claiming a “year-long record-breaking hot streak.” The SN data isn’t even complete, going back only to 1979.

NOAA reports that although the world’s oceans did have a warm year, the actual temperatures were significantly cooler than SN claimed. NOAA attributes the warmer temperatures to major ocean circulation patterns, rather than climate change, writing:

The year 2023 was the warmest year since global records began in 1850 at 1.18°C (2.12°F) above the 20th century average of 13.9°C (57.0°F). This value is 0.15°C (0.27°F) more than the previous record set in 2016.

Unlike the previous two years (2021 and 2022), which were squarely entrenched in a cold phase El Niño Southern Oscillation (ENSO) episode, also known as La Niña, 2023 quickly moved into ENSO neutral territory, transitioning to a warm phase episode, El Niño, by June. ENSO not only affects global weather patterns, but it also affects global temperatures. … [D]uring the warm phase of ENSO (El Niño), global temperatures tend to be warmer than ENSO-neutral or La Niña years, while global temperatures tend to be slightly cooler during cold phase ENSO episodes (La Niña). … 2021 and 2022 [did] not ranking among the five warmest years on record ….

In other words, we had one warm year in the oceans during 2023, but 2021 and 2022 weren’t abnormally warm at all. 2023 was, but it was driven by a phase shift from La Niña to El Niño conditions in the Pacific Ocean. Nature was doing what it has naturally done throughout history.

ScienceNews should stick to reporting actual science based on actual data, rather than using computer model outputs to fearmonger, making claims which aren’t true, but which do correspond to the climate crisis narrative. This SN story was neither news, nor science.

Climate Alarmist Hype that May 2024 is the “Hottest” Global Average Temperature Anomaly is Meaningless in the U.S. and at other global locations around the World

From Watts Up With That?

Guest Essay by Larry Hamlin

The usual climate alarmist suspects are at it again, trying to use the scientifically flawed claim that a single May 2024 global average temperature anomaly data point can establish that the “world” must be the “hottest” it has ever been, as hyped below.

Alarmists also grossly misrepresent that the earth has exceeded a 1.5 degree temperature threshold, a limit that is nothing but an arbitrary and purely politically contrived alarmist propaganda claim.

Of course, this purely politically contrived climate alarmist hype tells us absolutely nothing about the actual measured temperature anomalies or absolute temperatures at any specific location anywhere in the world.

NOAA data through May 2024 for the Contiguous U.S. (shown below) overwhelmingly establishes that the U.S. is not having the “hottest ever” maximum temperature anomaly. The U.S. is not experiencing any established upward trend in maximum temperature anomaly values since at least the year 2005.

Furthermore, the highest May maximum temperature anomaly in the Contiguous U.S. occurred in May 1934 as shown below at 5.66 degrees F versus 1.22 degrees F (shown in red highlights above) in May 2024.

There isn’t a shred of scientific evidence that the U.S. maximum temperature anomalies or maximum absolute temperatures (addressed below) are at all unusual. 

Looking at NOAA’s maximum temperatures for the Contiguous U.S. (shown below), we see that May 2024 was only the 106th highest May (at 74.68 degrees F, highlighted in red) out of 130 total May measurements, with the highest May ever measured occurring in 1934 at 79.21 degrees F.

Looking at NOAA’s data for the maximum measured temperature in California (shown below), we see that May 2024 was only the 96th highest measured May (at 76.8 degrees F, as highlighted in red below) out of a total of 130 measurements, with May 2001 being the highest California maximum temperature ever measured (at 83.8 degrees F).

Looking at NOAA’s data for the maximum temperature measured in May 2024 for Los Angeles (shown below) we see this month is only the 38th highest measured May (at 66.4 degrees F highlighted in red) out of 80 May measurement values. The highest maximum May temperature in Los Angeles was in May 2014 at 75.8 degrees F. 

Climate alarmists conceal the lack of validity in their use of a single global average temperature anomaly value to falsely hype that the world is the “hottest” ever when, in fact, this climate alarmist propaganda claim applies to no specific location anywhere on earth, including the Contiguous U.S., the state of California, the city of Los Angeles, or other global locations.

Comparing Temperatures: Past and Present

From Watts Up With That?

From Untitled (call me Stephen)

How accurate are our historical temperature records?

In 2021 Bugatti released their Chiron Super Sport 300+. The “300+” is because it is the first road-legal car that has reached speeds above 300mph, although production models are electronically limited to 271mph.

© Copyright: Laurent Jerry and licensed for reuse under CC-BY-SA-4.0

The Dublin Port tunnel was opened on 20th December 2006. Technically there are two 4.5km long tunnels, one for each direction of traffic.

© Copyright P L Chadwick and licensed for reuse under CC-BY-SA-2.0.

The walls of the tunnel are hard, unlike surface level motorways which have flexible safety barriers. Crashing into the walls at high speeds would be not only fatal for those involved in the crash but could also potentially damage the tunnel’s structural integrity.

To encourage people to respect the 80km/h speed limit, average speed cameras have been installed. An average speed camera system consists of at least two cameras (but ideally more) distributed over the region where the speed limit is being enforced.

The cheap option would be two cameras in each tunnel, one at the start and one at the end. If the timestamp of your car passing the second camera is less than 202.5 seconds after you passed the first camera, then you have travelled the 4.5km at an average speed faster than 80km/h, and a speeding ticket and penalty points for your license will follow.

The better option is to have more than two cameras distributed along the tunnel to prevent any reckless and idiotic Chiron Super Sport 300+ driver from attempting the following…

  • Enter the tunnel at a rolling 80km/h start
  • Put your foot down and reach the top speed of 490km/h after about 33.5s
  • Hold top speed for about 1.5s
  • Brake hard to 30km/h over 3.5s
  • Keep to 30km/h for the remaining 164s of the journey

Because that takes exactly 202.5 seconds, if the Dublin Port tunnel only has two cameras installed per tunnel, the average speed is 80km/h even though the car’s speed varied between 30km/h and 490km/h!
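To make the arithmetic behind that scenario easy to check, here is a minimal Python sketch (not from the original article) that totals up the five phases, assuming roughly constant acceleration within each phase so that the distance covered is the mean speed times the duration.

```python
# Back-of-envelope check of the tunnel scenario above (a sketch, not the
# author's own calculation). Phase speeds and durations come from the bullet
# list; constant acceleration within each phase is an assumption.

KMH_TO_MS = 1000.0 / 3600.0  # km/h -> m/s

# (start speed km/h, end speed km/h, duration s)
phases = [
    (80, 490, 33.5),   # accelerate from the rolling start to top speed
    (490, 490, 1.5),   # hold top speed
    (490, 30, 3.5),    # brake hard
    (30, 30, 164.0),   # crawl along at 30 km/h
]

total_time = sum(t for _, _, t in phases)
total_dist = sum((v0 + v1) / 2 * KMH_TO_MS * t for v0, v1, t in phases)
avg_speed_kmh = total_dist / total_time / KMH_TO_MS

print(f"time = {total_time:.1f} s")         # 202.5 s
print(f"dist = {total_dist/1000:.2f} km")   # ~4.5 km
print(f"avg  = {avg_speed_kmh:.0f} km/h")   # ~80 km/h despite a 490 km/h peak
```

Two endpoint cameras only see the entry and exit timestamps, so everything that happens between them is invisible to the average.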

“Stephen, what exactly have the Bugatti Chiron Super Sport 300+ and the Dublin Port tunnel got to do with comparing past and present temperatures?” I hear you ask. Please stick with me. I hope that it will make sense by the time you reach the end.

The thermometer is a relatively recent invention. While Galileo was messing about with thermoscopes as early as 1603, it would take until 1724 when Daniel Gabriel Fahrenheit proposed his thermometer scale before the idea of standardized temperature scales would take off. That René-Antoine Ferchault de Réaumur and Anders Celsius proposed different and competing standards in 1732 and 1742 should be unsurprising to anyone familiar with how standards work.

Unfortunately for Réaumur, his choice of 80 degrees between freezing and boiling points of water didn’t catch on.

Most of the world now uses the Celsius scale, though with the 0°C and 100°C points reversed from Anders’ original proposal, with only the United States, the Bahamas, the Cayman Islands, Palau, the Federated States of Micronesia and the Marshall Islands using Fahrenheit.

Our earliest actual temperature measurements, especially those from before 1732, rely on people either having later cross-calibrated the thermometers they originally used with other ones, or having documented their own choice of reference.

It also took a while for people to figure out how to measure the temperature. Most of the early measurements are actually indoor measurements from unheated rooms recorded once a day. Eventually it was figured out that measuring the outdoor temperature required the thermometer to actually be outside and shaded from the sun.

It would be 1864 before Thomas Stevenson would propose a standardised instrument shelter and after comparing with other shelters his final Stevenson screen design was published in 1884.

While we may laugh now at people who measured the outdoor temperature with a thermometer located in an unheated indoor room, because the room has a large heat capacity (that is, it takes a while to both heat up and cool down), taking one indoor measurement a day is actually not that bad a way to estimate the average outdoor temperature.

When you move your thermometer to a well-ventilated outdoor shelter such as a Stevenson screen, the thermometer will change much more rapidly. If you want to measure the average temperature you will need to take multiple readings throughout the day and night.

In 1780, James Six invented a thermometer that keeps track of the maximum and minimum temperature since it was reset, though as it relied on mercury to move the markers, it can have issues in cold temperatures. By 1790 Daniel Rutherford had developed designs for separate minimum and maximum thermometers that used alcohol and mercury respectively and allowed for greater accuracy of both readings.

It would take until the 1870’s before minimum and maximum thermometers would be widely used to track the variability of temperature. For example, the Central England Temperature history, the longest temperature record, is based on observations at a variety of hours prior to 1877, with daily minimum and maximum temperatures used thereafter.

Meteorologists use the daily minimum and maximum temperatures to estimate the daily average temperature by just averaging the minimum and maximum. This is called Taxn and the formula is: Taxn=(Tmax+Tmin)/2.

Do not get me wrong, if all you have is the daily minimum and maximum temperatures, averaging the two is the best guess you can make, but it is not the average daily temperature called Tavg which you get from measuring the temperature ideally more than 20 times evenly spaced throughout the 24 hour period and averaging all of those results.
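As a quick illustration of the gap between the two quantities, here is a minimal Python sketch using invented hourly readings for a single day; the numbers are made up purely to show that (Tmax+Tmin)/2 need not equal the 24-reading mean.

```python
# Minimal illustration of Taxn versus Tavg using made-up hourly readings
# for one day (these numbers are invented for illustration only).
hourly = [4.0, 3.8, 3.6, 3.5, 3.4, 3.3, 3.5, 4.2,            # slowly cooling night
          5.5, 7.0, 8.6, 10.0, 11.2, 12.0, 12.4,             # quick daytime warm-up
          12.1, 11.0, 9.5, 8.2, 7.1, 6.3, 5.6, 5.0, 4.5]     # afternoon peak, evening cool-down

tavg = sum(hourly) / len(hourly)            # "true" daily mean from 24 readings
taxn = (max(hourly) + min(hourly)) / 2      # the Meteorologists' estimate

print(f"Tavg = {tavg:.2f} C, Taxn = {taxn:.2f} C, difference = {tavg - taxn:+.2f} C")
```

In this invented day the long cool night drags the true mean well below the midpoint of the extremes, which is exactly the kind of mismatch explored below.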

Here’s where the Bugatti Chiron Super Sport 300+ and the Dublin Port tunnel come back in. If I told you the top speed of the Bugatti in the tunnel was 490km/h and it never went slower than 30km/h, then using the Meteorologists’ algorithm we would conclude that it was travelling on average at (490+30)/2=260km/h. Yet we know from earlier that it is possible for those two limits to result in an average speed of 80km/h.

If you work it out, keeping the minimum and maximum speeds in the tunnel at 30km/h and 490km/h it is possible to get an average speed anywhere between 71km/h and 332km/h. While the Meteorologists’ 260km/h average speed is in that range, the range is quite wide.

To give another example of how the Meteorologists’ method can give an estimate that is quite a bit off, according to the Irish Central Statistics Office, in 2022 the top 1% of workers earned at least €3,867 per week. In contrast the bottom 1% of workers earned at most €92 per week. The mean weekly earnings were €856 per week and only 27% of workers earned at least that with the median weekly earnings being €671.

If we take the average of €3,867 and €92 that’s €1,980 per week. Less than 6% of earners received at least €1,980 per week which puts the Meteorologists’ average quite a bit off for estimating earnings or the average speed of a Bugatti through the port tunnel.

In the 1970’s, with the advent of cheap computers, it became possible to automate temperature measurement. A computer has no choice: if we tell it to measure the temperature every hour or every 5 minutes, rain or shine, sleet or snow, the measurement will be recorded. As most of the weather stations transitioned to automated measurement, mostly in the period 1990-2010, we are now able to measure the true average temperature, Tavg.

Valentia Observatory is 1km west of Cahirciveen, Co Kerry, and a weather station has been operated in the area with some temperature records for 1850-51 and continuous daily min-max records since mid-January 1872. The historical data sets have been carefully transcribed and are available from Met Éireann, 1850-1920 and 1921-1943. In 1944 Met Éireann did something a bit unusual: they started measuring the temperature every hour. Rain or shine, sleet or snow, the diligent staff of Met Éireann would go out to the weather station and record the temperature. Between January 1944 and April 2012, when the station was replaced with an automated station, only 2 hours were missed. The data set from 1944 onwards is available from the Irish Government website: daily summary (includes minimum and maximum temperature) and hourly measurements.

Because we have an overlap of measurements from minimum and maximum thermometers and the 24 hourly measurements for Valentia, this means we can check just what the difference is between Tavg and Taxn to see how accurate the Meteorologists’ method of estimating average temperature from Tmin and Tmax is.

This first graph shows the difference Tavg-Taxn for every day since 14th January 1944 plotted as blue points. Overlaid is the 1 year rolling average as a red line. If you are interested in the statistics, Tavg is greater than Taxn in Valentia on average by 0.17ºC (std deviation 0.53, N=29339, min=-2.20, max=3.20).

Source data Copyright © Met Éireann and licensed for reuse under CC-BY-4.0. Mathematical analysis, transformations and visual presentation by Stephen Connolly.
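For readers who want to reproduce this kind of comparison, the sketch below shows one way the daily Tavg-Taxn series, its summary statistics, and a 1-year rolling average could be computed with pandas. It uses synthetic hourly data in place of the real Met Éireann files (whose exact CSV layout is not assumed here), so the printed statistics will not match the Valentia values quoted above.

```python
# Sketch of a Tavg-vs-Taxn comparison like the graph above, using synthetic
# hourly data as a stand-in for the real Met Eireann files. All variable
# names are illustrative, not the author's.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("1944-01-14", "2012-04-30", freq="h")
hours = idx.hour.to_numpy()
days = idx.dayofyear.to_numpy()

# crude seasonal cycle + diurnal cycle + weather noise, loosely Valentia-like
temp = (10.5 + 4.5 * np.sin(2 * np.pi * (days - 105) / 365.25)
        + 2.5 * np.sin(2 * np.pi * (hours - 9) / 24)
        + rng.normal(0, 1.5, len(idx)))
hourly = pd.Series(temp, index=idx)

daily = pd.DataFrame({
    "Tavg": hourly.resample("D").mean(),                       # true daily mean
    "Taxn": (hourly.resample("D").max()
             + hourly.resample("D").min()) / 2,                # (Tmax + Tmin) / 2
})
daily["diff"] = daily["Tavg"] - daily["Taxn"]
daily["diff_1yr"] = daily["diff"].rolling(365, center=True).mean()  # the red line

print(daily["diff"].describe())   # mean, std, min, max of Tavg - Taxn
```

Swapping the synthetic series for the downloaded hourly and daily CSVs would reproduce the statistics quoted in the paragraph above.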

If we just look at the rolling average, you can see that the relationship is not constant. For example, in the 1970’s the average temperature was on average 0.35ºC warmer than the Meteorological estimate, while in the late 1940’s, 1990’s and 2000’s there were occasions where the Meteorological estimate was slightly higher than the actual average daily temperature.

Source data Copyright © Met Éireann and licensed for reuse under CC-BY-4.0. Mathematical analysis, transformations and visual presentation by Stephen Connolly.

It’s important to highlight that this multi-year variability is both unexpected and intriguing, particularly for those examining temperature anomalies. However, putting the multi-year variability aside, by squeezing nearly 30,000 data points onto the x-axis we may have hidden a potential explanation for why the blue points typically show a spread of about ±1ºC… Is the ±1°C spread seasonal variability?

The shortest day of the year in Valentia is December 21st, when the day lasts for approximately 7h55m. The longest day of the year is June 21st, when the day lasts for 16h57m. On the shortest day of the year there is little time for the sun to heat things up, and most of the time it is dark, so we expect heat to be lost. So we expect the average temperature to be closer to the minimum temperature during the winter than during the summer.

We can check for seasonal effects in the difference between Tavg and Taxn by looking at a time-dependent correlation. As not everyone will be familiar with this kind of analysis, I will start by showing you the time-dependent correlation of Tavg with itself:

Source data Copyright © Met Éireann and licensed for reuse under CC-BY-4.0. Mathematical analysis, transformations and visual presentation by Stephen Connolly.

The x-axis is how many days there are between measurements and the y-axis is the Pearson correlation coefficient, known as r, which measures how similar the measurements are, averaged across all the data. A Pearson correlation coefficient of +1 means that the changes in one are exactly matched by changes in the other, a coefficient of -1 means that the changes are exactly opposite, and a correlation coefficient of 0 means that the two variables have no relationship to each other.

The first point on the x-axis is for 1 day separation between the average temperature measurements.
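For anyone wanting to reproduce this kind of curve, a lagged Pearson correlation can be computed as sketched below. The series here is synthetic and merely stands in for the daily Valentia values, so the printed r values are illustrative rather than a reproduction of the graphs.

```python
# Sketch of a time-dependent (lagged) Pearson correlation like the curves
# shown here. `series` would be the daily Tavg (or Tavg - Taxn) values in
# date order; a synthetic seasonal series stands in for it below.
import numpy as np

def lag_corr(series: np.ndarray, lag_days: int) -> float:
    """Pearson r between the series and itself shifted by `lag_days`."""
    x, y = series[:-lag_days], series[lag_days:]
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(1)
days = np.arange(20 * 365)
series = 10 + 5 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 2, days.size)

for lag in (1, 182, 365):
    r = lag_corr(series, lag)
    print(f"lag {lag:>3} days: r = {r:+.2f}, r^2 = {r*r:.0%} 'explained'")
```

Sweeping the lag from 1 day out to several years and plotting r against the lag gives a curve of the kind shown in these figures.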

The laziest weather forecast is the following:

“Tomorrow’s weather will be basically the same as today’s”

The r value of +0.91 for 1 day separation is an illustration of the accuracy of the laziest weather forecast and suggests that for average temperature it is approximately 82% accurate (in the r-squared, variance-explained sense).

If we move out to half a year separation, we get an r value of -0.64 which says that 6 months from now, 41% of the average daily temperature can be explained as the opposite of today’s.

At a year’s separation, the r value of 0.67 says that 44% of today’s average temperature can be explained as seasonal for this time of year. What this means is that the laziest weather forecast actually only explains 38 percentage points more than the seasonal forecast does.

You see very similar graphs if you look at the time-dependent correlation of the  Tmax,  Tmin or indeed the Taxn, with the 1 day r values being 0.90, 0.81 and 0.90 respectively and the seasonal swing being approximately -0.6 to +0.6 for 6 months and 1 year.

The above graph basically tells us what to expect when something is strongly seasonal.

What happens when we plot the time-dependent correlation of Tavg-Taxn? Well you get this:

Source data Copyright © Met Éireann and licensed for reuse under CC-BY-4.0. Mathematical analysis, transformations and visual presentation by Stephen Connolly.

The 1 day correlation is 0.19, which tells us that approximately 4% of today’s correction factor between Tavg and Taxn can be predicted if we know yesterday’s correction factor. The seasonality is even worse: the 6 month correlation coefficient is -0.02 and the 1 year correlation coefficient is +0.07.

This answers our earlier question… The ±1°C spread is not seasonal variability.

What this means is that if we only know Taxn then Tavg could be anywhere ±1°C.

Here is another graph to illustrate this. The x-axis is Tavg and the y-axis is Taxn. Now obviously when the average daily temperature is higher, the average of the minimum and maximum temperatures is also higher, and so we get a straight line of slope 1. But the thickness of the line represents the uncertainty of the relationship, so if we know Taxn is, say, 15°C, then from this graph we can say that Tavg is probably between 13.5°C and 16.5°C.

Source data Copyright © Met Éireann and licensed for reuse under CC-BY-4.0. Mathematical analysis, transformations and visual presentation by Stephen Connolly.

Now, because most weather stations were not recording hourly until recently, most of our historical temperature data is in the Taxn form and not the Tavg. That means that if Valentia is representative, then the past temperature records are only good to ±1°C. If somebody tells you that the average temperature in Valentia on the 31st of May 1872 was 11.7°C, the reality is that we just do not know: it is 95% likely to have been somewhere between 10.6ºC and 12.7ºC, and we have no way of narrowing that down, just as knowing the maximum and minimum speeds of the Bugatti through the port tunnel doesn’t really tell us much about its average speed.
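A back-of-envelope check of that range is sketched below, using the mean and standard deviation of Tavg - Taxn quoted earlier and assuming the daily differences are roughly normally distributed; the exact bounds depend on rounding and on whether the small mean offset is included.

```python
# Rough check of the +/-1 C claim using the Valentia statistics quoted above
# (Tavg - Taxn: mean 0.17 C, standard deviation 0.53 C), assuming the daily
# differences are approximately normal.
taxn_1872 = 11.7           # reported "average" for 31 May 1872, really Taxn
mean_offset, sd = 0.17, 0.53

centre = taxn_1872 + mean_offset
half_width = 1.96 * sd     # ~95% interval for a normal distribution
print(f"Tavg likely in [{centre - half_width:.1f}, {centre + half_width:.1f}] C")
# -> roughly [10.8, 12.9] C; dropping the small mean offset gives a range
#    close to the 10.6 to 12.7 C quoted in the text.
```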

In this last graph the blue points show the average Taxn of each year at Valentia since 1873, with vertical error bars showing the 95% confidence interval. The red points show the average Tavg for each year starting from 1944, with error bars showing the annual variation. The blue poking out from under the red shows the difference, even on the scale of a yearly average, between the Meteorologists’ estimate of average temperature and the actual average temperature.

Source data Copyright © Met Éireann and licensed for reuse under CC-BY-4.0. Mathematical analysis, transformations and visual presentation by Stephen Connolly.

Valentia Observatory is one of the best weather stations globally. With the switch to automated stations in the 1990s, we can now get precise average temperatures.

Thanks to the meticulous efforts of past and present staff of Valentia Observatory and Met Éireann, we have 80 years of data which allows comparison of the old estimation methods with actual averages.

The takeaway?

Our historical temperature records are far less accurate than we once believed.

UAH April 2024: NH Pushes Global Warming by Land and Sea

From Science Matters

By Ron Clutz

The post below updates the UAH record of air temperatures over land and ocean. Each month and year exposes again the growing disconnect between the real world and the Zero Carbon zealots. It is as though the anti-hydrocarbon bandwagon hopes to drown out the data contradicting their justification for the Great Energy Transition. Yes, there has been warming from an El Nino buildup coincident with North Atlantic warming, but there is no basis to blame it on CO2.

As an overview, consider how recent rapid cooling completely overcame the warming from the last 3 El Ninos (1998, 2010 and 2016). The UAH record shows that the effects of the last one were gone as of April 2021, again in November 2021, and in February and June 2022. At year end 2022, and continuing into 2023, the global temp anomaly matched or went lower than the average since 1995, an ENSO-neutral year. (The UAH baseline is now 1991-2020.) Now we have an unusual El Nino warming spike of uncertain cause, but one unrelated to steadily rising CO2.

For reference I added an overlay of CO2 annual concentrations as measured at Mauna Loa. While temperatures fluctuated up and down, ending flat, CO2 went up steadily by ~60 ppm, a 15% increase.

Furthermore, going back to previous warmings prior to the satellite record shows that the entire rise of 0.8C since 1947 is due to oceanic activity, not human activity.

The animation is an update of a previous analysis from Dr. Murry Salby. These graphs use Hadcrut4 and include the 2016 El Nino warming event. The exhibit shows that since 1947 GMT warmed by 0.8C, from 13.9C to 14.7C, as estimated by Hadcrut4. This resulted from three natural warming events involving ocean cycles. The most recent rise, 2013-16, lifted temperatures by 0.2C. Previously the 1997-98 El Nino produced a plateau increase of 0.4C. Before that, a rise from 1977-81 added 0.2C to start the warming since 1947.

Importantly, the theory of human-caused global warming asserts that increasing CO2 in the atmosphere changes the baseline and causes systemic warming in our climate.  On the contrary, all of the warming since 1947 was episodic, coming from three brief events associated with oceanic cycles. And now in 2024 we are seeing an amazing episode with a temperature spike driven by ocean air warming in all regions, along with rising NH land temperatures.

Update August 3, 2021

Chris Schoeneveld has produced a similar graph to the animation above, with a temperature series combining HadCRUT4 and UAH6. H/T WUWT

See Also Worst Threat: Greenhouse Gas or Quiet Sun?

April 2024 El Nino Recedes While Oceans and NH Land Warms

With apologies to Paul Revere, this post is on the lookout for cooler weather with an eye on both the Land and the Sea.  While you heard a lot about 2020-21 temperatures matching 2016 as the highest ever, that spin ignores how fast the cooling set in.  The UAH data analyzed below shows that warming from the last El Nino had fully dissipated with chilly temperatures in all regions. After a warming blip in 2022, land and ocean temps dropped again with 2023 starting below the mean since 1995.  Spring and Summer 2023 saw a series of warmings, continuing into October, but with cooling since. 

UAH has updated their tlt (temperatures in lower troposphere) dataset for April 2024. This post on their reading of ocean air temps comes after the April update from HadSST4. I posted this week on SSTs using HadSST4: Nino Recedes, NH Keeps Ocean Warm April 2024. This month also has a separate graph of land air temps, because the comparisons and contrasts are interesting as we contemplate possible cooling in coming months and years.

Sometimes air temps over land diverge from ocean air changes. Last February 2024, both ocean and land air temps went higher, driven by SH, while NH and the Tropics cooled slightly, resulting in the Global anomaly matching the October 2023 peak. Then in March, Ocean anomalies cooled while Land anomalies rose everywhere. Now in April, Ocean anomalies rose in NH and SH, while the Tropics moderated. Meanwhile NH land spiked up and Global land warmed, despite SH spiking down.

Note:  UAH has shifted their baseline from 1981-2010 to 1991-2020 beginning with January 2021.  In the charts below, the trends and fluctuations remain the same but the anomaly values changed with the baseline reference shift.

Presently sea surface temperatures (SST) are the best available indicator of heat content gained or lost from earth’s climate system.  Enthalpy is the thermodynamic term for total heat content in a system, and humidity differences in air parcels affect enthalpy.  Measuring water temperature directly avoids distorted impressions from air measurements.  In addition, ocean covers 71% of the planet surface and thus dominates surface temperature estimates.  Eventually we will likely have reliable means of recording water temperatures at depth.

Recently, Dr. Ole Humlum reported from his research that air temperatures lag 2-3 months behind changes in SST.  Thus cooling oceans portend cooling land air temperatures to follow.  He also observed that changes in CO2 atmospheric concentrations lag behind SST by 11-12 months.  This latter point is addressed in a previous post Who to Blame for Rising CO2?

After a change in priorities, updates are now exclusive to HadSST4.  For comparison we can also look at lower troposphere temperatures (TLT) from UAHv6 which are now posted for April.  The temperature record is derived from microwave sounding units (MSU) on board satellites like the one pictured above. Recently there was a change in UAH processing of satellite drift corrections, including dropping one platform which can no longer be corrected. The graphs below are taken from the revised and current dataset.

The UAH dataset includes temperature results for air above the oceans, and thus should be most comparable to the SSTs. There is the additional feature that ocean air temps avoid Urban Heat Islands (UHI).  The graph below shows monthly anomalies for ocean air temps since January 2015.

Note 2020 was warmed mainly by a spike in February in all regions, and secondarily by an October spike in NH alone. In 2021, SH and the Tropics both pulled the Global anomaly down to a new low in April. Then SH and Tropics upward spikes, along with NH warming brought Global temps to a peak in October.  That warmth was gone as November 2021 ocean temps plummeted everywhere. After an upward bump 01/2022 temps reversed and plunged downward in June.  After an upward spike in July, ocean air everywhere cooled in August and also in September.   

After sharp cooling everywhere in January 2023, all regions were into negative territory. Note the Tropics matched the lowest value, but since then have spiked sharply upward by +1.7C, with the largest increases from April to July, continuing to a new high of 1.3C from January to March 2024. In April that dropped to 1.2C. NH also spiked upward to a new high, while the Global ocean rise was more modest due to slight SH cooling. In February, NH and the Tropics cooled slightly, while greater warming in SH resulted in a small Global rise. Now in April NH is back up to match its peak of 1.08C and SH also rose to its new peak of 0.89C, pulling up the Global anomaly, also to a new high of 0.97C, despite a drop in the Tropics.

Land Air Temperatures Tracking in Seesaw Pattern

We sometimes overlook that in climate temperature records, while the oceans are measured directly with SSTs, land temps are measured only indirectly.  The land temperature records at surface stations sample air temps at 2 meters above ground.  UAH gives tlt anomalies for air over land separately from ocean air temps.  The graph updated for April is below.

Here we have fresh evidence of the greater volatility of the Land temperatures, along with extraordinary departures by SH land. Land temps are dominated by NH, with a 2021 spike in January, then dropping before rising in the summer to peak in October 2021. As with the ocean air temps, all that was erased in November with a sharp cooling everywhere. After a summer 2022 NH spike, land temps dropped everywhere, and in January further cooling in SH and the Tropics was offset by an uptick in NH.

Remarkably, in 2023, SH land air anomaly shot up 2.1C, from  -0.6C in January to +1.5 in September, then dropped sharply to 0.6 in January 2024, matching the SH peak in 2016. Then in February and March SH anomaly jumped up nearly 0.7C, and Tropics went up to a new high of 1.5C, pulling up the Global land anomaly to match 10/2023. Now in April SH dropped sharply back to 0.6C, Tropics cooled very slightly, but NH land jumped up to a new high of 1.5C, pulling up Global land anomaly to its new high of 1.24C.

The Bigger Picture UAH Global Since 1980

The chart shows monthly Global anomalies from 01/1980 to the present. The average monthly anomaly is -0.04 for this period of more than four decades. The graph shows the 1998 El Nino, after which the mean resumed, and again after the smaller 2010 event. The 2016 El Nino matched the 1998 peak, and in addition the NH after-effects lasted longer, followed by the NH warming of 2019-20. An upward bump in 2021 was reversed, with temps having returned close to the mean as of 2/2022. March and April brought warmer Global temps, later reversed.

With the sharp drops in Nov., Dec. and January 2023 temps, there was no increase over 1980. Then in 2023 the buildup to the October/November peak exceeded the sharp April peak of the El Nino 1998 event. It also surpassed the February peak in 2016.  December and January were down slightly, but now March and April have taken the Global anomaly to a new peak of 1.05C. Where it goes from here, up further or dropping down, remains to be seen, though there is evidence that El Nino is weakening.

The graph is reminiscent of another chart showing the abrupt ejection of humid air from the Hunga Tonga eruption.

TLTs include mixing above the oceans and probably some influence from nearby, more volatile land temps. Clearly NH and Global land temps have been dropping in a seesaw pattern, nearly 1C lower than the 2016 peak. Since the ocean has 1000 times the heat capacity of the atmosphere, that cooling is a significant driving force. TLT measures started the recent cooling later than SSTs from HadSST4, but are now showing the same pattern. Despite the three El Ninos, their warming had not persisted prior to 2023, and without them it would probably have cooled since 1995. Of course, the future has not yet been written.

No, Sun Sentinel, Florida Isn’t Under Future “Climate Threats”

From ClimateRealism

By Anthony Watts

A recent article in the South Florida Sun Sentinel (SFSS) newspaper, titled “Florida in 50 years: Study says land conservation can buffer destructive force of climate change,” makes some catastrophic claims about what Florida’s climate will be like in 50 years. The article relies heavily on climate model projections that are undermined by real-world evidence and by the fact that the climate models in question have been shown to create “implausibly hot forecasts of future warming.”

As outlined in Climate at a Glance: Climate Model Fallibility, peer-reviewed science has shown that climate forecasts like the one cited by the SFSS have no basis in reality, because comparisons of actual measured atmospheric temperature data to model forecasts show up to a 200% discrepancy between model temperature outputs and observed temperatures.

Because the temperature forecasts are wildly implausible, the claimed disastrous impacts that are forecast to result from those unbelievably high temperatures also lack credibility.

The article starts off by asserting as a fact that, “Climate change is making temperatures and sea levels rise.”

The SFSS cites a National Oceanic and Atmospheric Administration (NOAA) graph showing increased average temperatures for the state of Florida, seen below:

It is important to note that average temperatures really didn’t start to significantly increase until around 1990, not coincidentally as the state’s population began to rise rapidly. Plus, average temperatures are just half of the picture. If you look at NOAA’s minimum temperatures for Florida, it is easy to see that their rise makes up the bulk of the increase in average temperatures since 1990:

An increase in overnight low temperatures is a clear indicator of an increased Urban Heat Island (UHI) effect. Florida’s population has more than doubled, from about 10 million in 1990 to over 20 million now. This more-than-doubling of the state’s population is reflected clearly in UHI data compiled by Dr. Roy Spencer, as seen in the graph below. Note the huge temperature effects for Florida’s rapidly growing coastal cities.

Concerning the SFSS’s sea level rise claims, Miami is often used as an example of supposed sea level rise due to occasional street flooding there. Miami’s real problem isn’t rising seas as much as land subsidence. Much of Miami was built on reclaimed swamp land, and then built up with modern infrastructure. That extra weight causes a sinking of the land, known as subsidence, allowing seawater to seep in when the surfaces sink to near sea-level. It also means that during strong rainfall events, and hurricane storm surge, areas that have subsided don’t drain as they did years before.

This is clearly covered in the scientific paper Land subsidence contribution to coastal flooding hazard in southeast Florida, published in Proceedings of IAHS in 2020. The paper clearly states:

Preliminary results reveal that subsidence occurs in localized patches (< 0.02 km2) with magnitude of up to 3 mm yr−1, in urban areas built on reclaimed marshland. These results suggest that contribution of local land subsidence affect[s] only small areas along the southeast Florida coast, but in those areas coastal flooding hazard is significantly higher compared to non-subsiding areas.

Subsidence is also driven by freshwater withdrawals from the region’s groundwater reservoirs to satisfy the Miami metro area’s growing population.

Clearly, sea level rise in Florida has more to do with subsidence and land management than climate change induced rise. Plus, Miami’s flat terrain, just a few feet above sea level, lacks natural drainage routes for rainwater to flow away from urban areas.

As discussed in numerous Climate Realism articles, here and here, for instance, there is no evidence whatsoever that seas are rising at an unusually rapid rate. As shown in Climate at a Glance: Sea Level Rise, the pace of sea-level rise today is approximately the same as it has been since at least the mid-1800s, disproving claims that recent climate change is worsening it.

SFSS goes on to outline a trifecta of additional climate threats, saying:

There are three main climate change threats in Florida, said Polsky: More intense rain events, which leads to greater flooding; more coastal flooding — both from storm surge and high tides; and more heat and wildfire risk.

Let’s examine rainfall. Actual monthly rainfall data since 1895 from the National Oceanic and Atmospheric Administration (NOAA) shows no upward trend in rainfall for the state, nor does it show excessive monthly spikes in the present.

As for coastal flooding, the Intergovernmental Panel on Climate Change shows no indication that climate change is causing increased coastal flooding, as shown in Table 12.12 on page 90 of Chapter 12 of the UN IPCC Sixth Assessment Report. The table, “Emergence of Climate Impact Drivers (CIDs) in time periods,” shows no correlation. The color corresponds to the confidence of the region with the highest confidence: white colors indicate where evidence of a climate change signal is lacking or the signal is not present, leading to overall low confidence of an emerging signal. The relevant section is highlighted in yellow. Neither sea level nor coastal flooding has been an observed element of climate change.

Even in 2050 and 2100 the IPCC does not forecast any climate change impact on coastal flooding. Also, the predicted effect on sea level rise that the IPCC suggests might occur in 2050 and beyond stems from the organization’s use of the RCP8.5 “high emissions” scenario, which the U.S. Environmental Protection Agency and many climate scientists have by now explicitly disavowed as wildly implausible if not impossible. Climate Realism has discussed problems with the RCP8.5 scenario repeatedly, here and here, for example.

As for SFSS’s wildfire claims, while Florida had a single bad year in 2017 due to warmer local weather conditions, lightning, and arsonists, there is no overall upward trend in the number of wildfires for the state over the last decade:

According to a summary by Alchera, which produced the Florida wildfires graph above, climate change is not a factor:

Florida’s unique combination of flat terrain, abundant vegetation, and frequent lightning strikes makes the state prone to wildfires. The flat landscape allows fires to spread quickly, while the dense vegetation provides ample fuel for them to grow in intensity. Lightning strikes, particularly during the stormy summer months, can ignite dry vegetation and lead to rapidly spreading wildfires. Human activities, such as arson, debris burning, and equipment use, are also significant factors in causing wildfires in Florida.

The SFSS story claiming climate change is causing rapidly rising temperatures and increased flooding and wildfires in Florida has no basis in fact. Rather than presenting news, the SFSS’s story is consistent with a pattern Climate Realism has exposed time and again: hyping the dogmatic narrative that climate change is causing virtually everything bad. Almost daily, climate alarmists and the media are painting a dire future due to climate change, even when the facts refute their claims. Such stories may make for good disaster fiction, but they are not fact-based news reporting, and thus are not worthy of being published by a supposedly journalistic enterprise.

With Tree Rings On Their Fingers

CDN

There’s a lot of apparently confident talk about how current temperatures compare with those in the past, including claims of 2023 being the “hottest year ever,” or at least the hottest in the last 125,000 years. But how do we actually know, and how much do we actually know, about historic and prehistoric temperatures? In this Climate Discussion Nexus “Backgrounder” video John Robson examines the uses, and abuses, of various temperature proxies.

Transcript below [apologies for any misspellings of proper nouns~cr]



You’ve probably heard the claim that the Earth today is the warmest it’s been in a thousand years, or 10,000, or even 125,000 years. But how do they know, when the earliest modern thermometers were invented by German physicist Daniel Fahrenheit in 1709, and we have very few systematic weather records anywhere before the mid-1800s, and few or none in most of the world until the mid-20th century? So how can anyone claim to know temperatures anywhere, let alone around the world, in, say, 1708, or even further back? How do we know that it’s warmer today in Scotland than it was in 1314, the year Robert the Bruce defeated the English army at Bannockburn, or that Rome is warmer today than in 306, when Constantine became emperor, or 410 AD, when Alaric the Visigoth invaded and sacked it, or that Israel is warmer today than in 587 BC, when King Nebuchadnezzar of Babylon destroyed Jerusalem and led the Jews into captivity? He didn’t confiscate their thermometers—there weren’t any. So how can we say anything definitive, or even plausible, about a single location, never mind the whole world, 70% of which is open ocean, where nobody was keeping even anecdotal records?

Obviously, we don’t have satellite data to make up for the lack of thermometers. Instead, scientists use indirect measures called proxies. These are evidence from the geological record of what the landscape was like in the past that we believe correlate fairly well with temperature—things like tree ring widths, different isotopes of carbon in ice core layers, and the kind and quantity of shells, pollen, and other remains of living creatures found in sediments at the bottom of the ocean. If a proxy record goes back thousands of years and we think we know fairly precisely when a given part of it was created, then according to the theory, it can be used to estimate what the local temperature probably was back then compared to today. Now, we’re not criticizing proxies in principle; on the contrary, they represent an ingenious way to get important data that we can’t measure directly—or at least they can represent an important way.

But when you look closely, as we’re about to do, you find that the estimates can be rough, very uncertain, and often no better than sheer guesswork. In fact, sometimes they’re much worse than guesswork. What you have is researchers who know what they want to find and deliberately select only the kind of proxy, or only the specific proxy data series, that says what they want to hear. And far too many scientists who work with these proxies have actually gone to great lengths not to disclose the uncertainties but to hide them, to make sure the public never hears about how imprecise, or sometimes even dubious, their reconstructions are—which is where we come in.

For the Climate Discussion Nexus, I’m John Robson, and this is a CDN backgrounder on proxy reconstructions of the Earth’s temperature history. But before we plunge into the past, let’s look at how temperatures are measured, or not measured, more recently. Because if you’re going to compare modern records with older ones, it matters how both are generated. Systematic weather records from around the world since the mid-1800s are archived at the Global Historical Climatology Network and elsewhere. So, if, for instance, we pick a fairly recent year, like 1965, we can see that records were available on land from most countries around the world, although many places only had partial records from a handful of stations, and the annual average had to be based on estimating the missing numbers. And of course, there’s always the question of how good the measurements were, where the instruments were situated, how well they were maintained, and how carefully they were read. And if 1965 is shaky, take a look 40 years further back, in 1925—there was hardly any data from Africa, South America, and vast regions of Asia. Yet we’re now confidently told that, say, the Central African Republic was hotter in 2023 than in 1923. And if we go back another 40 years to 1885, we see that basically there was no data at all, other than the US, Europe, and a few places in India and Australia.

Now, here’s a surprise: from 1885 to 1965, the record gets more complete, but after 1965, it thins out again. As of 2006, the sample looked much the way it had early in the 20th century. And if we chart the number of locations supplying data to the global climate archive over the years from 1900 to 2008, it rather unexpectedly looks like this. So, as you can see, the sample size has been constantly changing, which ought always to make us uneasy about precise findings, or more exactly, claims of precise findings. And when scientists construct those famous charts of global average temperature back to the mid-1800s, they quietly admit among themselves that over half the data is missing and has to be imputed, which is a fancy way of saying ‘made up.’ But they don’t draw this issue to the attention of the public, and journalists don’t ask about it—or at least, they don’t ask the scientists who would insist on bringing it up. The coverage is fragmentary, however, which is a major statistical challenge over the entire period. Over half—53%—of data entries are missing, most of them at the poles and over Africa. The coverage generally worsens back in time, with notable gaps during the two World Wars. That survey just covers the modern data, which is supposedly the best part of the record and is at least in part based on thermometers. Prior to about 1850, we have to resort to proxies to get temperature estimates. And while there are many potential proxy records, most attention is paid to tree rings, ice core layers, and marine sediments. So, obviously, it’s important to ask how reliable they are. In 2006, the US National Academy of Sciences did just that, conducting a review of all these methods in light of the controversies that had arisen concerning the IPCC hockey stick graph, which was mostly based on tree rings.

In general, that review said the proxies sometimes contain useful information, but scientists have to be careful about how they use them, and they need to be honest about the uncertainties. They specifically cautioned that the uncertainties of the published reconstructions have been underestimated. So how are proxy-based reconstructions done? Let’s start with tree rings. As trees grow, they add a ring of new wood around their trunks every year. Scientists measure the width and density of these rings by taking small, pencil-like cores out of the trunk, and the general principle is that trees grow faster and further in good years than bad, so thick rings mean favorable conditions, which certainly would include warmth. So variations in these rings might, in some cases, correlate with variations in temperature. The first problem, which is obvious to anyone who’s ever seen a tree stump, is that the ring width patterns can be completely different depending on which side of the tree you take the core from. And the National Academy’s panel noted that many other things than temperature affect tree ring growth, such as precipitation, disease, fire, and competition from other trees. Scientists need to try to find locations where they are sure temperature is the main controlling factor, but even if they are diligent, it’s not always possible to know if that’s the case.

They also emphasize that it’s not enough to look at a single tree. If a pattern found in a tree core is truly a climate signal, it should be seen in cores taken from at least 10 to 20 trees in an area, because a single tree can suffer storm damage or be attacked by pests. So whenever you see a tree-ring-based reconstruction, the first question you need to ask is how many trees were sampled. But good luck finding out. One of the problems we run into when we look at these studies is the number of times scientists rely on insufficiently large samples, or worse, take a large sample and then throw out the ones that don’t tell them what they want to see, or simply refuse to say how many trees they examined. Canadian researcher Steven McIntyre spent about 15 years blogging at the site ClimateAudit.org, detailing his efforts to get tree ring researchers to report these things, often without success. If they’re not deliberately hiding something, they’re sure doing a good imitation. Another problem with tree rings is that as a tree gets older and its trunk widens, if the volume of growth is constant, the width of each year’s ring will decrease, meaning that ring widths will get narrower, even if temperatures stay the same. Scientists need to use statistical models to remove this trend from the data, but every time you start manipulating data, even for valid reasons and carefully, it creates further uncertainties. So it’s far from straightforward. For instance, the National Academy’s panel focused attention on two issues that arose during the debates about the Michael Mann hockey stick graph.

First, they pointed out that the underlying theory assumes the correlation between temperature and tree ring widths must be constant over time. If wide rings in the 20th century mean temperatures were high, the narrower rings hundreds of years ago mean it was cooler.

But what if this sub-theory doesn’t hold? What if something else changes the growth pattern from time to time? It might sound like a weird thing to worry about, but when you start checking tree rings against actual recent thermometer data, you find significant evidence that it does happen. For instance, after 1960, tree rings in many locations around the world started getting narrower even while thermometers said local temperatures were rising. Scientists gave this a fancy label—the Divergence Problem—waved it away by saying it was probably a one-off occurrence, and then started deleting the post-1960 data so that people wouldn’t notice it. And we discussed a particularly glaring example of this approach in our video on Hiding the Decline. Unfortunately, as Rudyard Kipling once said, giving something a long name doesn’t make it better. On the contrary, the Divergence Problem undermines the whole field, or forest, because if trees aren’t picking up the warming happening now, how do we know they didn’t also fail to pick it up then? If narrow tree rings are happening during a warm interval today, how can scientists insist that narrow tree rings prove it was cold in the past? And worse, instead of being honest about the question, scientists simply resorted to hiding the decline, hoping no one would notice. It didn’t work.

Another issue the National Academy pointed to, still on the tree ring proxy, was that some kinds of tree are definitely not good for recording temperatures and should be avoided. They particularly singled out bristlecone pines. These are small conifers that grow to a great age, which of course makes them superficially appealing. Unfortunately, over their long lives, they form twisted, bunched-up trunks with ring width patterns that have nothing to do with temperature. And one of the discoveries made by Stephen McIntyre in his analysis of the Mann hockey stick was that its shape depended entirely on a set of 20 bristlecone pine records from Colorado that have a 20th-century trend of bigger rings despite, awkwardly, coming from a region where thermometers say no warming took place. The top panel of this figure shows the result of applying Mann's statistical method to a collection of over 200 tree ring proxies, including the 20 bristlecone series, using a flawed method that puts most of the emphasis on those bristlecones. It has a compelling hockey stick shape. The bottom panel shows the same calculation after removing just the 20 bristlecone pine records. It's clear that the hockey stick shape is entirely due to tree rings that experts have long known are not valid for showing temperature. What's worse, as McIntyre has pointed out, Mann himself computed the bottom graph, but he hid the results instead of showing them to his readers.
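
For the statistically curious, here is a toy Python illustration of how the centering step alone can hand a few trend-bearing series control of the leading pattern. The proxy network below is entirely synthetic, 65 noise series plus 5 with an imposed 20th-century-style ramp, so this shows the general mechanism critics identified, not a rerun of Mann's actual code or data.

# Toy illustration: "short centering" (subtracting only the modern calibration-period
# mean before principal component analysis) versus conventional full-period centering.
# Synthetic data only; not a reproduction of any published study's proxy network.
import numpy as np

rng = np.random.default_rng(2)
n_years, n_noise, n_trend = 580, 65, 5          # roughly 1400-1979, 70 series in all
noise = rng.standard_normal((n_years, n_noise))

ramp = np.zeros(n_years)
ramp[-80:] = np.linspace(0.0, 2.0, 80)          # a 20th-century-style ramp
trendy = ramp[:, None] + 0.5 * rng.standard_normal((n_years, n_trend))
X = np.hstack([noise, trendy])                  # columns are the proxy series

def pc1_weight_on_trendy(data, centering_rows):
    # Subtract the mean computed over the chosen rows only, then take the first PC.
    centered = data - data[centering_rows].mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    loadings = vt[0] ** 2
    return loadings[n_noise:].sum() / loadings.sum()

full_period = np.arange(n_years)                # conventional centering: the whole record
calibration = np.arange(n_years - 80, n_years)  # short centering: the last 80 years only
print("share of PC1 carried by the 5 trending series:")
print("  full-period centering:", round(pc1_weight_on_trendy(X, full_period), 2))
print("  short centering      :", round(pc1_weight_on_trendy(X, calibration), 2))

In setups like this, short centering typically hands most of the first principal component to the handful of ramp-shaped series, while full-period centering does not.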

This pattern of hiding inconvenient results is far too common. When we look hard at paleoclimate reconstructions, they fall apart on close inspection, but the scientists who do them almost never tell you about their weaknesses upfront. In fact, it's happened so often that you're justified in assuming it's the rule, not the exception.

Another series that used to be popular in climate reconstructions was a collection of Russian tree rings from the Polar Urals region, introduced in a 1995 journal article by the late British climatologist Keith Briffa and his co-authors. They argued that their tree ring reconstruction showed the 20th century was quite warm compared to the previous 1,100 years, and they specifically identified the years around AD 1000 as among the coldest of the millennium. Here's that chart.

This Briffa Polar Urals data series naturally became very popular in other tree ring reconstructions. But the problem was that the early part of the data was only based on three trees, which is not enough for confident conclusions. In 1998, some other scientists obtained more tree ring samples from the same areas, and suddenly the picture looked completely different. Instead of AD 1000 being super cold, it was right in the middle of the hottest period of all—the supposedly non-existent Medieval Warm Period—and the 20th century was no longer the least bit unusual.

So what did Briffa and his colleagues do? Did they publish a correction or let people know that they'd actually found evidence of a Medieval Warm Period? No, of course not. They just quietly stopped using Polar Urals data and switched to a new collection of tree rings from the nearby Yamal Peninsula that had the right shape. Now that switcheroo was bad enough, but the story gets worse. The dogged Steve McIntyre asked Briffa to release his Yamal data, but Briffa steadfastly refused. Eventually, after nearly a decade, the journal where he published his research ordered Briffa to release it, and McIntyre promptly made two remarkable discoveries. First, the number of trees in the 20th-century segment dropped off to only five near the end, which clearly fails the data quality standard. Second, McIntyre found that another scientist, Fritz Schweingruber, who happened to be a co-author of Briffa's, had already archived lots of tree ring data from the same area, and while it looked similar to Briffa's up to the year 1900, instead of going up in the 20th century, it went down. Briffa, surprise, surprise, hadn't used it. So it's not just incomplete data that happens to have a bias; it's data that's been deliberately chosen to introduce one.
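
Checking replication, at least, is straightforward once the data are archived. Here is a small sketch of the kind of sample-depth check involved: count how many cores cover each year and see where the chronology rests on only a handful of trees. The core start and end years below are invented for illustration; they are not Briffa's metadata.

# Sample-depth (replication) check: how many cores cover each year, and where does
# coverage fall below the 10-to-20-tree guideline quoted earlier?  The core spans
# below are made up purely for illustration.
import numpy as np

cores = [(1450, 1700), (1470, 1720), (1500, 1780), (1510, 1800), (1520, 1790),
         (1550, 1820), (1580, 1850), (1600, 1900), (1610, 1890), (1620, 1880),
         (1650, 1950), (1660, 1940), (1700, 1980), (1720, 1985), (1750, 1970),
         (1800, 1990), (1820, 1990), (1850, 1996), (1900, 1996), (1910, 1996)]

years = np.arange(1450, 1997)
depth = np.zeros(years.size, dtype=int)
for start, end in cores:
    depth[(years >= start) & (years <= end)] += 1

MIN_CORES = 10                                   # the replication guideline quoted earlier
print("median sample depth:", int(np.median(depth)))
print("sample depth over the final decade:", depth[-10:].tolist())
print("number of years below the guideline:", int((depth < MIN_CORES).sum()))
# Any stretch of a chronology resting on only a few cores, especially the modern end
# that gets compared against thermometer data, should be flagged and reported.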

This graph from McIntyre's ClimateAudit website shows a close-up of the 20th-century portion. The red line is the data Briffa used, the black line is the Schweingruber data, and the green line is the result from combining all the data together. Clearly, when you include the more complete data, the blade of the hockey stick disappears, and the 20th century shows a slight cooling, not warming, which is kind of important to the story. Another source of bias in tree ring reconstructions comes from the practice of something called pre-screening. Recall that the National Academy of Sciences panel said that researchers should sample a lot of trees at a location, and if there is a climate signal, it should be common to all of them. If modern temperatures line up with some of the tree cores but not others, the match may be a spurious correlation rather than a real climate signal, which would mean the early portion of the record cannot be trusted for temperature reconstruction. In a 2006 study, Australian statistician David Stockwell illustrated the problem by using a computer to generate a thousand sets of random numbers, each one 2,000 numbers long. He selected the ones where the last 100 numbers happened to correlate with the orthodox 20th-century global average temperature series and threw out the rest, then combined the data together the way paleoclimatologists do. The result was an impressive hockey stick, which according to common practice would lead to the conclusion that today's climate is the warmest in the past millennium. The problem is the graph has absolutely no information about the past climate in it, true or false.

Instead, it was constructed using random numbers that were then pre-screened to fit modern temperatures and then spliced to the modern temperature record to create the illusion of providing information about the past, which is exactly what far too many tree ring researchers are doing now. One way to guard against generating spurious results like this one is to use all the data from a sampling location, but researchers on a mission don’t do so. Instead, they pre-screen and may even end up throwing out most of the data they’ve collected if it’s what it takes to get the result they wanted. In 1989, American climate scientist Gordon Jacoby and his co-author Rosanne D’Arrigo published a reconstruction of northern hemisphere temperatures that had the usual hockey stick shape, although it only went back to 1670. In the article, the authors said they sampled data from 36 sites but only kept data from 10 of them. So McIntyre emailed Jacoby and asked for the others, and Jacoby, unsurprisingly, refused to show them. What is surprising is the frankness of his explanation: “Sometimes, even with our best efforts in the field, there may not be a common low-frequency variation among the cores or trees at a site. This result would mean that the trees are influenced by other factors that interfere with the climate response. There can be fire, insect infestation, wind or ice storm, etc., that disturb the trees, or there can be ecological factors that influence growth. We try to avoid the problems but sometimes cannot.

If we get a good climatic story from a chronology, we write a paper using it. That is our funded mission. It does not make sense to expend efforts on marginal or poor data, and it is a waste of funding agency and taxpayer dollars.” The rejected data are set aside and not archived. And you can guess what makes a good climatic story. McIntyre eventually gave up trying to get the 26 datasets Jacoby threw away. Jacoby died in 2014, and that same year his university archived a lot of his data. In the fall of 2023, McIntyre noticed that buried in the archive was one of the series Jacoby had rejected, from Sukakpak Peak in Alaska. Even though it was close to the two sites that Jacoby and D'Arrigo had retained, and had at least as many individual tree cores in it as other sites, it was rejected as being poor data. And here's what it looks like: the ring widths, if they're a temperature proxy, show that the Medieval period was very warm, then there were a couple of very cold periods, and the 20th century was nothing unusual. That is not a good climate story, which, it seems, is what “poor data” now means to these sorts. So they threw it out. You can see how the game works. When they get a hockey stick shape, they say it's based on good data, and when we ask how they define good, the answer is: if it's shaped like a hockey stick, QED.
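
To see how powerful that kind of screening is, here is a compact sketch in the spirit of the Stockwell experiment described above. The series are persistent random walks containing no climate information at all, and the screening target is a simple synthetic warming ramp standing in for the 20th-century instrumental record; both choices are assumptions made for illustration.

# Pre-screening demonstration in the spirit of the experiment described above:
# random series with no climate in them, screened on their last 100 values against
# a modern "target", then averaged.  The average acquires a late rise purely as an
# artefact of the screening rule.
import numpy as np

rng = np.random.default_rng(3)
n_series, length, screen_len = 1000, 2000, 100
series = rng.standard_normal((n_series, length)).cumsum(axis=1)    # persistent random walks
series -= series.mean(axis=1, keepdims=True)
series /= series.std(axis=1, keepdims=True)

target = np.linspace(0.0, 1.0, screen_len)                         # stand-in for modern warming

def corr_with_target(row):
    return np.corrcoef(row[-screen_len:], target)[0, 1]

r = np.apply_along_axis(corr_with_target, 1, series)
kept = series[r > 0.5]                                             # keep only the "good proxies"
stack = kept.mean(axis=0)                                          # average them, as reconstructions do

print("series kept:", kept.shape[0], "of", n_series)
print("variability of the shaft (first 1900 values):", round(stack[:-screen_len].std(), 3))
print("rise across the screened final 100 values:   ", round(stack[-1] - stack[-screen_len], 3))
# The late rise dwarfs the variability of the earlier portion: a hockey-stick flavour
# produced entirely by noise plus a screening rule.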

Now let's look at a very different type of temperature proxy, marine sediments. Here scientists look at organic compounds called alkenones, which are produced in the ocean by tiny creatures called phytoplankton and which settle in layers on the ocean floor. Since alkenones have chemical properties that correlate with the temperature at which they formed, by drilling cores out of the ocean floor and examining the changing composition of alkenones in layers that they estimate to have been formed at various times, scientists can say something about the past climate. Once again, there's a lot more uncertainty than we often hear about, because the layers form very slowly. Unlike tree rings, alkenone layers don't pick up year-by-year changes, only average changes over multiple centuries. A single data point will represent the alkenone composition in a thin layer of a core sample, but it might, at best, indicate not a single year but average climate conditions over several hundred years.
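
To make that concrete, here is a tiny sketch of what multi-century resolution does to a brief temperature excursion. The spike size, its duration, and the 300-year sampling resolution are all invented for illustration.

# What multi-century sampling resolution does to a brief warm spike: a one-century
# excursion largely disappears once the record is averaged into 300-year slices,
# the kind of resolution a slowly accumulating sediment core might offer.
import numpy as np

years = np.arange(6000)                    # 6,000 years of annual "temperature"
temp = np.zeros(years.size)
temp[2950:3050] = 1.5                      # a 1.5-degree spike lasting one century

resolution = 300                           # years represented by one proxy sample
coarse = temp.reshape(-1, resolution).mean(axis=1)

print("true peak of the spike:              ", temp.max())
print("peak in the 300-year-resolution view:", round(coarse.max(), 2))
# The one-century spike survives only as a bump a fraction of its true size, which is
# why a smooth-looking coarse proxy record cannot rule out past excursions as sharp
# as the modern one.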

This coarse resolution means they can't be used for comparing modern short-term warming and cooling trends to the past. A fair comparison would collapse the entire modern record, say 1823 to 2023, into a single data point. On the plus side, because thin layers cover long periods, a single sediment core can provide information a long way into the past, even 10,000 years or more. And thus it was that in March 2013, headlines around the world announced that the Earth was now warmer than at any time in the past 11,000 years, based on a new proxy reconstruction published in Science magazine by a young scientist named Shaun Marcott, built mostly from a global sample of alkenone cores collected by other scientists in previous years. The graph showed that the climate had warmed after the end of the last glaciation, 11,000 years ago, stayed warm for millennia, then cooled gradually until the start of the 20th century, after which it warmed at an exceptional rate, undoing 8,000 years of cooling in only one century. Gotcha, right? Except a reader at Climate Audit soon noticed something odd. Marcott had just finished his PhD at Oregon State University, and the paper in Science was based on one of his thesis chapters, which was posted online, and, drum roll, please, in that version, there was no uptick at the end, no 20th-century warming, no hockey stick. So where did the blade of the stick come from in the version published in Science? While climate scientists were busy proclaiming the Marcott result as more proof of the climate crisis, it fell to outsiders, once again Steve McIntyre and his readers at Climate Audit, to dig into the details. In this case, McIntyre was able to obtain the Marcott data promptly, and to show that the big jump at the end was based on just one single data point.

As a mining consultant, McIntyre also knew something important about drill cores. The topmost layer, which represents the most recent data, can be contaminated during the drilling process. He wanted to know how the various scientists who collected the alkenone samples dealt with that issue, so he looked up the original studies, and to his surprise, he found that they didn't consider the core tops to be reliable measures of recent temperatures. Most of them only reported temperature proxies starting centuries in the past, even a thousand years or more. Marcott and his co-authors had redated the cores to the present, but if the dates assigned by the original authors had been used, there would have been no uptick at the end. After being confronted with this, Marcott and his co-authors put out a posting on the web in which they made a startling admission: “The 20th-century portion of our paleo temperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.” But of course, the damage had been done. How many news stories that pounced on the original even mentioned this critical correction, let alone made a big fuss over it?

Now let’s look at another popular type of proxy, the one that comes from drilling out cores in large ancient ice caps like the ones over Greenland and Antarctica. These cylinders are believed to provide evidence of temperatures back hundreds of thousands of years, because every year, a layer of snow becomes ice, and the chemical composition of the ice contains clues about temperature. One of the most famous of these reconstructions is the Vostok ice core from Antarctica.

It shows that most of the past half-million years have been spent in Ice Age conditions, interrupted only by short interglacial periods. The last 10,000 years, our current interglacial, has been longer than the previous three, but colder than the previous four. The ice core record also shows that changes in and out of ice ages are extremely rapid. When we start diving into the next glaciation, we may not have much time to prepare, assuming, of course, that ice cores are reliable. It’s no good for us to point to methodological uncertainties when proxies confirm the orthodox and then tout their precision when they challenge it. And one important point about ice cores is that, as with sediment layers, the bubbles don’t take definitive shape in just one year or a couple of years, so there’s a certain degree of blurring.

That means that they can miss significant spikes or dips in temperature if they’re sudden and brief. “Brief” here being a word that can even extend to a century. Which isn’t to say that proxies are inherently useless, or even disreputable. On the contrary, as we said at the outset, we applaud the ingenuity of researchers who look for indirect ways of measuring things that matter when the direct ones aren’t available. But we insist that they be honest in how they collect and sort the data, and how they present it, including how much certainty they claim that it carries. Oh, there’s one more key point that we need to make about the whole business of using proxy data to reconstruct past temperatures. During the overlap period when we have both thermometer and proxy data, the challenge is to construct a statistical model connecting them.

And the problem is that, mathematically speaking, it’s well known that many different models can be constructed with no particular reason to favor one over the others. In a 2011 study in the Annals of Applied Statistics, two statisticians, Blakely McShane and Abraham Wyner, demonstrated this by constructing multiple different models using the Mann hockey stick data and showed that while they implied completely different conclusions about how today’s climate compares to the past, they all fit the data about the same. While climatologists tend to produce results like the red line, it would be just as easy and just as valid to produce the green line from the same data. So the uncertainties in these kinds of reconstructions go way beyond the little error bars climatologists like to draw around their reconstructions, because in truth, they can’t be certain of the shape of the reconstruction to begin with.
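
Here is a toy version of that point, with everything synthetic: two proxies that both fit the instrumental overlap about equally well, one carrying a medieval warm bump and one not. It is an illustration of the statistical ambiguity, not a reproduction of McShane and Wyner's actual models.

# Two candidate calibration models that fit the instrumental overlap about equally
# well can imply very different pasts.  Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(4)
past, overlap = 900, 100                        # 900 "pre-thermometer" years plus 100 of overlap
temp_overlap = np.linspace(0.0, 0.8, overlap) + 0.1 * rng.standard_normal(overlap)

# Two proxies: both track temperature (plus noise) during the overlap, but differ
# before it: proxy A carries a warm medieval bump, proxy B is flat.
bump = np.concatenate([np.zeros(300), 0.6 * np.hanning(300), np.zeros(300)])
proxy_a = np.concatenate([bump, temp_overlap]) + 0.1 * rng.standard_normal(past + overlap)
proxy_b = np.concatenate([np.zeros(past), temp_overlap]) + 0.1 * rng.standard_normal(past + overlap)

def calibrate(proxy):
    # Least-squares fit of temperature on the proxy over the overlap period.
    slope, intercept = np.polyfit(proxy[-overlap:], temp_overlap, 1)
    fit_err = np.std(temp_overlap - (slope * proxy[-overlap:] + intercept))
    return slope * proxy + intercept, fit_err

recon_a, err_a = calibrate(proxy_a)
recon_b, err_b = calibrate(proxy_b)
print("calibration-period misfit: A =", round(err_a, 3), " B =", round(err_b, 3))
print("reconstructed medieval peak: A =", round(recon_a[:past].max(), 2),
      " B =", round(recon_b[:past].max(), 2))
# Near-identical calibration fits, very different pasts: the calibration data alone
# cannot tell you which reconstruction to believe.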

So yes, by all means apply proper scientific methods to reconstructing the past climate, but proper ones, handling data honestly and recognizing the often very large amount of uncertainty. For the Climate Discussion Nexus, I'm John Robson, and that's our backgrounder on temperature reconstruction from the pre-thermometer era.

Comprehensive Russian Temperature Reconstruction Shows Warmer Temperatures 1000 Years Ago!

From NoTricksZone

By P Gosselin

The hockey stick temperature claims of Dr. Michael E. Mann and the IPCC are challenged.

A paper published by a team of scientists of the Russian Academy of Sciences, led by V. V. Klimenko, presents a quantitative reconstruction of the mean annual temperatures of northeastern Europe for the last two millennia. The study was done in cooperation with the Alexander von Humboldt Foundation (Germany).

Result: it was modestly warmer 1000 years ago than it is today.

The reconstruction of the mean annual temperatures is based on dendrochronological, palynological and historical information, and shows the comparative chronology of climatic and historical events over a large region of Northeast Europe:

Figure 1. Map of the study region showing locations for which indirect climatic data are available.
Yellow circles indicate palynological data, green circles indicate dendrochronological data, and black circles indicate the most important historical evidence. Triangles indicate the location of long-record weather stations in and around the study region: Haparanda (1), Vardø (2), Arkhangelsk (3), Kem (4), Petrozavodsk (5), Malye Karmakuly (6), Salekhard (7), Tobolsk (8), Syktyvkar (9), Turukhansk (10), Tomsk (11), Yeniseysk (12). Source: here.

Warmer in the years 981-990 than in the mid-20th century

Unlike the picture that papers authored by scientists close to the IPCC like to suggest (a flat temperature mean over the past 1000 years followed by a 20th-century hockey stick blade of warming), the Russian reconstruction of decadal mean annual temperature values shows major climatic events manifested both on the scale of the entire Northern Hemisphere and in its separate regions.

Figure 4. Final reconstruction of decadal mean annual temperatures for Northeast Europe (blue line)
and instrumental data (red line). The instrumental period is enlarged in the inset. Source: here

According to the paper’s abstract:

“In the pre-industrial era, the maximum annual mean temperatures in 981-990 were 1°C higher and minimum temperatures in 1811-1820 were 1.3°C lower than on average for 1951-1980. The constructed chronology has a noticeably larger amplitude of variability compared to hemispheric and pan-Arctic reconstructions.”

The paper concludes that the results of the reconstruction point to “major climatic events” such as the Roman Optimum, the cold epoch of the Great Migration of Peoples in the 5th and 6th centuries, the Medieval Climatic Optimum of the 10th-12th centuries, and the Little Ice Age (13th-19th centuries).

These were manifested both on the scale of the entire Northern Hemisphere and in its individual regions.

Hat-tip: inderwahrheitliegtdiekraft at X.

Fancy Fins Didn’t Help with the Visibility

According to the mainstream media it’s the southern Great Barrier Reef that has been hit hardest with coral bleaching, and particularly the corals in the Capricorn region that includes Great Keppel Island.   I have a home in Yeppoon, which is just a short ferry ride away. I have been keen to go see for myself, but the weather has been less than ideal.

Looking down at the ferry anchored in Fisherman’s Beach bay, Great Keppel Island. The water is looking rather grey-blue, because the sun is behind a cloud. (Photograph taken by Jennifer Marohasy, 15th March 2024.)

The dive shop on the island has been telling me there is some bleaching, but it’s hard to see because visibility is a problem.  That has been the situation for the last week – since John Abbot and I drove back to Yeppoon from Noosa.

By ‘visibility’ they mean the distance one can see under-the-water.

Yesterday, Friday, I nevertheless caught the ferry across to the island and made my way around to Monkey Beach reef.

Great Keppel Island is located just to the northeast of the mouth of the Fitzroy River, so when there is a lot of wind and particularly a lot of rain, the water can become murky to the extent that you cannot see the corals even if they are just one metre below, which was the situation yesterday.

Just got out of the water at Monkey Beach, the reef is a good distance out, and visibility was so bad I couldn’t see the corals.  The water nevertheless looked very blue from the beach when the sun was out.  (Photograph taken yesterday, 15th March 2024.)
The view from Wreck Point, Yeppoon, looking across Keppel Bay to Rosslyn Bay marina and beyond to Great Keppel Island far in the distance on the horizon. The water in the bay is so muddy at the moment. At other times of the year, and depending on wind direction, this bay can be so blue. (Photograph by Jennifer Marohasy, 16th March 2024.)

While the coral bleaching has been blamed on elevated sea temperatures, which may be the situation, it’s hard to know for sure.

The coral bleaching could be from freshwater runoff, from the Fitzroy River.

The Fitzroy River drains the single largest area (approximately 143,000 km2) of the Great Barrier Reef catchments and discharges into the largest estuary and then into Keppel Bay. The Keppel Islands within the bay have many fringing inshore coral reefs, including Monkey Beach reef.

When I look at the data for Rosslyn Bay marina, just across from the island and around the bay from where I live in Yeppoon, the most remarkable thing is the drop in sea-level at the end of last year, 2023.

Note the monthly data for Rosslyn Bay Marina, just across from Great Keppel Island.
From The Australian Bureau of Meteorology’s Australian Baseline Sea Level Monitoring Array Monthly Data Report – December 2023. The most recent report available as at March 2024.

Low tides associated with low sea levels can also cause corals to bleach; particularly if the low tides occur on sunny days when there is significant incoming solar radiation and not much wave action.

The lower than usual sea levels are perhaps due to the El Niño, with all the water sloshing across to the other side of the Pacific because the trade winds haven't been blowing as hard as usual.

Sooner or later, I will get back out to Monkey Beach reef, and I will report what I see, once visibility has improved and I can see the corals again.  In the meantime, consider subscribing for my irregular email newsletter so that I can keep you in the loop.

Subscribe here: https://jennifermarohasy.com/subscribe/

To get to Monkey Beach reef I had to walk across the island from the southern end of Fisherman’s Beach all the way to Long Beach and then cut across that headland back to Monkey Beach. The more direct route is badly eroded, and not maintained.  Photograph of Long Beach by Jennifer Marohasy, 15th March 2024.
The walking tracks on Great Keppel Island are in disrepair. This map shows theoretically how it should be, https://hikingtheworld.blog/national_park/woppa-great-keppel-island/

Can anyone find me some location-specific, up-to-date sea temperature data for anywhere at the Great Barrier Reef?  Not the colour maps, which are variously based on homogenised satellite records.

Acropora (branching hard corals) and painted sweetlip (fish) photographed at Monkey Beach reef on 28th December 2023, by Jennifer Marohasy when visibility was much better.
Acropora corals at Monkey Beach reef, December 2023.

*****

The feature photograph is of Jennifer Marohasy sitting in Fisherman’s Beach waiting for the ferry later in the afternoon on 15th March 2024.

Cold brings out more climate lunacy

By Joe Bastardi

I am not even talking about wind turbines failing due to it being too cold.

Instead, cold brings out the Lunatic Fringe when it comes to climate. Because of the everyone-gets-a-trophy mentality that many of them have (no matter what happens, they always attribute it to climate), they try to blame cold on warming.

Let us get something straight. The so-called global temperature is a horrible metric for climate because temperature is a third-derivative indicator of climate. Wet-bulb temperatures are far better, and saturation mixing ratios are best. They show the relationship of water vapor to temperature, and since there is no such direct relationship between CO2 and temperature, CO2 cannot be proven to be doing anything on that basis. And so they refuse to quantify the mixing ratios. But the increase in water vapor due to warming oceans can be directly linked not only to increased temperature but also to where and when it is warmer; we can see it has to be water vapor, since the warming is concentrated in the coldest, driest areas far more than toward the equator. This, by the way, sets off a whole chain of events that I have been trying to show people. While I am at it, I would be happy to present this entire hypothesis to groups that want to see it. At the very least, you can laugh it out of existence. (I am confident that once I explain it, you won't.) I have explained this dozens of times and can defend the hypothesis. I figure the best way to prove it is to get out in front of events in the weather and expose these people when they open their mouths about something I was predicting long before. They are clueless until it happens, since the only way they care about the weather is to use it for non-weather purposes.
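
For those who want a number to chew on, here is a back-of-envelope calculation consistent with that argument. It uses a standard textbook approximation (a Bolton-type formula) for saturation vapor pressure and assumes a surface pressure of 1013.25 hPa; the figures are illustrative, not from any Weatherbell product.

# Back-of-envelope: how much extra water vapour accompanies 1 degree C of warming
# at different temperatures.  Uses a standard Bolton-type approximation for
# saturation vapour pressure and assumes a surface pressure of 1013.25 hPa.
import math

def sat_mixing_ratio(temp_c, pressure_hpa=1013.25):
    # Saturation mixing ratio in grams of water vapour per kilogram of dry air.
    es = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))   # saturation vapour pressure, hPa
    return 1000.0 * 0.622 * es / (pressure_hpa - es)

for t in (-40, -30, -10, 0, 10, 20, 30):
    extra = sat_mixing_ratio(t + 1) - sat_mixing_ratio(t)
    print(f"{t:+4d} C: saturation mixing ratio {sat_mixing_ratio(t):7.2f} g/kg, "
          f"+1 C needs only {extra:5.2f} g/kg more")
# At -30 C a degree of warming corresponds to a few hundredths of a gram of vapour
# per kilogram of air; at +30 C it takes roughly fifty times as much, which is the
# quantitative sense in which small moisture changes matter most where it is
# coldest and driest.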

In any case, the public does not understand that even if we use the global temperature, it is an average of all observations. If it is 1 degree above average, it means there is enough warmth to outweigh the cold by that much, but there is still going to be cold around. Much of the planet will not know it is warm unless people are told, and told in a way that is pure exaggeration (hottest ever). Trying to portray a global temperature that is a bit over 59°F (though at actual weather stations it is under 58°F) as hot is deceptive and delusional. 59°F is not hot. The fact that it is warmer where it is coldest and driest DECREASES temperature contrast and leads to less severe events.

But what about the idea that warm causes cold? This is Climate 101 from back in the 1950s. They recycle it because probably no one reading this was paying attention to climate in the '60s, and if you weren't, their mindless minions certainly weren't. The idea is that if the world warms, it is because of more water vapor, so the snowier areas of the world, though warming more, would get more snow, which would naturally start to counter the warming. I learned about this idea when I was 8 years old from some books my dad gave me on the weather. I learned it again at Penn State in the 1970s, pre-Michael Mann (he is at U Penn now). It is a natural, built-in cycle that follows Le Chatelier's principle, something that is never mentioned.

Weatherbell laid the trap for the cold you see now, starting back in Spring. We are now going out as far as 9 months in our forecasting (hence the hurricane forecast that came out in December). In any case, the bullet points in the forecast had this from Aug 15 (first official release).

  • The first part of winter may be mild.
  • The coldest and snowiest part of the winter should be from mid-January onward.

From our summation:

I expect the opposite of last winter, where we came out of the gate fast and then fell apart. I can’t rule out a 2009 December to remember, but the idea here is the core of the worst part of winter, relative to averages, should be from mid-January onward.

Why? Because we expected blocking. And, of course, the know-nothings pushing CAGW think this is something out of the ordinary.

Well, look at Jan 1977:

Let’s see Feb 1958:

How about Jan 1985?

I can pick any spectacularly cold month and find blocking. I am using these because they occurred before Al Gore and his ilk started pushing all this gibberish, which, of course, now has an army steeped in ignorance and arrogance following it.

Let us take this month so far:

It was seen; it was predicted. We have shown the stratospheric event in early December (there is one now, so look out from mid-February to mid-March, and the models are seeing it). Like I said, the people saying that this is a sign of climate change are ignorant, and because they get no resistance or pushback, they have become arrogant.

By the way, where were they in late December when it was so warm, and this came out on CFACT:

https://www.cfact.org/2023/12/29/extreme-cold-on-the-table-for-europe-and-then-the-u-s/

They simply Weaponize Weather in a Phony Climate War.

Someone should write a book with that title.
