The most expensive electricity on Earth is in countries with “cheapest sources of electricity”

From JoNova

By Jo Nova

In the Bermuda Triangle of electricity bills, the more cheap generators you add, the higher your electricity bills grow

The experts at the CSIRO tell us that renewables are the cheapest sources of electricity, with all their Capex calculations and their levelised maths, and yet the electricity bills set the house on fire. (It’s Russia’s fault!) Could it be that the experts accidentally forgot to analyze the system cost and that all the hourly megawatt dollars per machine don’t mean a thing?
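For readers who want to see what that “levelised maths” actually is: the levelised cost of electricity (LCOE) is just discounted lifetime plant costs divided by discounted lifetime generation. Below is a minimal sketch in which every number is an illustrative placeholder, not a CSIRO figure; the point is what the formula leaves out, not what it produces.

```python
# Minimal sketch of a levelised cost of electricity (LCOE) calculation.
# All numbers are illustrative placeholders, not CSIRO figures.

def lcoe(capex, annual_opex, annual_mwh, lifetime_yrs, discount_rate):
    """Discounted lifetime cost divided by discounted lifetime generation ($/MWh)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_yrs + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_yrs + 1))
    return costs / energy

# A hypothetical wind farm: $1.5m/MW capex, 35% capacity factor, 25-year life.
per_mw = lcoe(capex=1_500_000, annual_opex=40_000,
              annual_mwh=0.35 * 8760, lifetime_yrs=25, discount_rate=0.06)
print(f"Illustrative LCOE: {per_mw:.0f} $/MWh")
# Note what is absent: storage, extra transmission, curtailment and backup
# costs all sit outside this formula, i.e. outside the "cheapest" claim.
```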

In the race to the most expensive electricity in the world, this week the UK is the winner. Germany is handicapped by being bundled into the EU27, lumbered with all the French nukes, and is therefore not in the running. Australia is missing in action, but possibly only because the price rises were too fast and too large for Eurostat, the US DoE, and the IEA to keep up with, so they gave up.

And people wonder why China is the world’s manufacturing base.

A European Commission study:

The next graph shows the “rest of the world”. After 2021, Australian electricity prices are unmarked for some reason, but officially they rose 20% two years in a row. So that cost of €210 per MWh in 2021 could easily have become €300 by 2023, putting Australians second highest in the world after the UK.*

The bottom line is that from 2008 the price of electricity in China fell from €100 down to €80 per megawatt hour, while in Australia it rose from €125 to €300 and in the UK from €150 to €360. Effectively, the price of electricity fell 20% in China at the same time as it rose roughly 140% in Australia and the UK.
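The arithmetic behind that comparison is simple enough to check in a few lines; the figures are the € per MWh values quoted above.

```python
# Quick check of the price changes quoted above (EUR per MWh).
def pct_change(start, end):
    return (end - start) / start * 100

print(f"China:     {pct_change(100, 80):+.0f}%")   # about -20%
print(f"Australia: {pct_change(125, 300):+.0f}%")  # about +140%
print(f"UK:        {pct_change(150, 360):+.0f}%")  # about +140%
```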

If President Xi had wanted to run a campaign to sabotage our grids, he couldn’t have done it better.

By uncanny coincidence the percentage of wind and solar power penetration on each national grid pretty much predicts the order of the price graphs the EU collated. Among this pool, the nation with the highest penetration of wind and solar power is the UK, which gets 29% of its electricity from wind and solar. Australia is second at 26%, and the EU collective third at 22%. Turkey and Brazil get 16% of their power from the unreliable generators, the USA 15%, China 14%, Japan 11%, India 9% and Russia 1%.

Source: OWID

Japan’s electricity is more expensive than its modest unreliable-generator-percentage would suggest, but then they have virtually no oil, gas or coal to call their own, and no interconnectors to rescue them either.

Is 20% renewables the tipping point?

The three winners of the Highest Price Electricity race are all states with renewable penetration above 20%.

The whole grid can absorb the penetration of unreliable energy up to a point, but there comes a time when adding more random energy generators is a burden too far. The system costs start to breed like Ebola, as the good generators get euthanized, storage costs get out of hand, frequency stability becomes an issue, and everyone wants their own personal interconnector. Then word spreads that the bird-killing, bat-destroying and whale-shredding equipment is noisy, ugly and a fire risk, and before you know it, farmers need 100 times the money to make the high voltage towers bearable. It all just adds to the cost. And finally everyone realizes that the environment you were supposed to be protecting is being clubbed by a windmill, and Florence the borer is stuck in a tunnel.

Smaller grids or countries without interconnectors will hit that tipping point faster. Watch this space, world. There is no nation over the border to rescue the Australian grid.

* Estimating the unlisted Australian price leap: the ACCC here found domestic retail bills jumped from $1400 annually to $2000 in NSW, and from $1200 to $1600 in Victoria (p66). In Australia, retail electricity rates now average roughly 33c per kilowatt hour, with a range of 26–45c/kWh (AUD). But that usage cost doesn’t include all the charges. As Craig Kelly points out, the €250/MWh European rate is effectively 25 euro-¢/kWh. But the official “default offer” in South Australia is $0.68/kWh (or 41 euro-¢/kWh). In NSW it is $0.53–0.56/kWh (32–34 euro-¢) and in Queensland it is $0.50/kWh (30 euro-¢). So Australia really is more expensive than the crazy-land EU. And while traditionally few customers paid the “default offer”, in 2023 as many as 40% of customers on flat-rate plans were paying that rate, according to the ACCC (p47).
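For anyone wanting to redo the footnote’s conversion: a rough sketch is below, assuming an exchange rate of about 0.60 EUR per AUD (an assumption that moves with the market), and recalling that €/MWh divided by 10 gives euro cents per kWh.

```python
# Converting Australian "default offer" retail rates (AUD/kWh) to euro cents/kWh.
# The exchange rate is an assumption (roughly 0.60 EUR per AUD at the time);
# plug in the current rate for an up-to-date comparison.
AUD_TO_EUR = 0.60

offers_aud_per_kwh = {
    "South Australia": 0.68,
    "NSW (low end)":   0.53,
    "NSW (high end)":  0.56,
    "Queensland":      0.50,
}

for state, aud in offers_aud_per_kwh.items():
    euro_cents = aud * AUD_TO_EUR * 100
    print(f"{state:16s} {aud:.2f} AUD/kWh = {euro_cents:.0f} euro-c/kWh")

# For reference, a European rate of 250 EUR/MWh is 25 euro-c/kWh (1 MWh = 1000 kWh).
```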

h/t to Schroder, thank you, and @CraigKellyPHON.

REFERENCES

European Commission, Directorate-General for Energy, Smith, M., Jagtenberg, H., Lam, L. et al., Study on energy prices and costs – Evaluating impacts on households and industry – 2023 edition, Publications Office of the European Union, 2024, https://data.europa.eu/doi/10.2833/782494

Or https://op.europa.eu/en/publication-detail/-/publication/3b43f47c-e1c5-11ee-8b2b-01aa75ed71a1/language-en/format-PDF/source-316287713

Inquiry into the National Electricity Market: December 2023 Report, ACCC, Australia, December 2023.

With Tree Rings On Their Fingers

CDN

There’s a lot of apparently confident talk about how current temperatures compare with those in the past, including claims of 2023 being the “hottest year ever” or at least the hottest in the last 125,000 years. But how do we actually know, and how much do we actually know, about historic and prehistoric temperatures? In this Climate Discussion Nexus “Backgrounder” video John Robson examines the uses, and abuses, of various temperature proxies.

Transcript below [apologies for any misspellings of proper nouns~cr]



You’ve probably heard the claim that the Earth today is the warmest it’s been in a thousand years, or 10,000, or even 125,000 years. But how do they know, when the earliest modern thermometers were invented by German physicist Daniel Fahrenheit in 1709, and we have very few systematic weather records anywhere before the mid-1800s, and few or none in most of the world until the mid-20th century? So how can anyone claim to know temperatures anywhere, let alone around the world, in, say, 1708, or even further back? How do we know that it’s warmer today in Scotland than it was in 1314, the year Robert the Bruce defeated the English army at Bannockburn, or that Rome is warmer today than in 306, when Constantine became emperor, or 410 AD, when Alaric the Visigoth invaded and sacked it, or that Israel is warmer today than in 587 BC, when King Nebuchadnezzar of Babylon destroyed Jerusalem and led the Jews into captivity? He didn’t confiscate their thermometers—there weren’t any. So how can we say anything definitive, or even plausible, about a single location, never mind the whole world, 70% of which is open ocean, where nobody was keeping even anecdotal records?

Obviously, we don’t have satellite data to make up for the lack of thermometers. Instead, scientists use indirect measures called proxies. These are evidence from the geological record of what the landscape was like in the past that we believe correlate fairly well with temperature—things like tree ring widths, different isotopes of oxygen in ice core layers, and the kind and quantity of shells, pollen, and other remains of living creatures found in sediments at the bottom of the ocean. If a proxy record goes back thousands of years and we think we know fairly precisely when a given part of it was created, then according to the theory, it can be used to estimate what the local temperature probably was back then compared to today. Now, we’re not criticizing proxies in principle; on the contrary, they represent an ingenious way to get important data that we can’t measure directly—or at least they can represent an important way.

But when you look closely, as we’re about to do, you find that the estimates can be rough, very uncertain, and often no better than sheer guesswork. In fact, sometimes they’re much worse than guesswork. What you have is researchers who know what they want to find and deliberately select only the kind of proxy, or only the specific proxy data series, that says what they want to hear. And far too many scientists who work with these proxies have actually gone to great lengths not to disclose the uncertainties but to hide them, to make sure the public never hears about how imprecise, or sometimes even dubious, their reconstructions are—which is where we come in.

For the Climate Discussion Nexus, I’m John Robson, and this is a CDN backgrounder on proxy reconstructions of the Earth’s temperature history. But before we plunge into the past, let’s look at how temperatures are measured, or not measured, more recently. Because if you’re going to compare modern records with older ones, it matters how both are generated. Systematic weather records from around the world since the mid-1800s are archived at the Global Historical Climatology Network and elsewhere. So, if, for instance, we pick a fairly recent year, like 1965, we can see that records were available on land from most countries around the world, although many places only had partial records from a handful of stations, and the annual average had to be based on estimating the missing numbers. And of course, there’s always the question of how good the measurements were, where the instruments were situated, how well they were maintained, and how carefully they were read. And if 1965 is shaky, take a look 40 years further back, in 1925—there was hardly any data from Africa, South America, and vast regions of Asia. Yet we’re now confidently told that, say, the Central African Republic was hotter in 2023 than in 1923. And if we go back another 40 years to 1885, we see that basically there was no data at all, other than the US, Europe, and a few places in India and Australia.

Now, here’s a surprise: from 1885 to 1965, the record gets more complete, but after 1965, it thins out again. As of 2006, the sample looked much the way it had early in the 20th century. And if we chart the number of locations supplying data to the global climate archive over the years from 1900 to 2008, it rather unexpectedly looks like this. So, as you can see, the sample size has been constantly changing, which ought always to make us uneasy about precise findings, or more exactly, claims of precise findings. And when scientists construct those famous charts of global average temperature back to the mid-1800s, they quietly admit among themselves that over half the data is missing and has to be imputed, which is a fancy way of saying ‘made up.’ But they don’t draw this issue to the attention of the public, and journalists don’t ask about it—or at least, they don’t ask the scientists who would insist on bringing it up. The coverage is fragmentary, however, which is a major statistical challenge over the entire period. Over half—53%—of data entries are missing, most of them at the poles and over Africa. The coverage generally worsens back in time, with notable gaps during the two World Wars. That survey just covers the modern data, which is supposedly the best part of the record and is at least in part based on thermometers. Prior to about 1850, we have to resort to proxies to get temperature estimates. And while there are many potential proxy records, most attention is paid to tree rings, ice core layers, and marine sediments. So, obviously, it’s important to ask how reliable they are. In 2006, the US National Academy of Sciences did just that, conducting a review of all these methods in light of the controversies that had arisen concerning the IPCC hockey stick graph, which was mostly based on tree rings.
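A sketch of how one would check that kind of coverage claim against the archive: count, for each year, how many stations actually reported, and compare the number of filled station-year cells with the size of the full grid. The tiny table below is made up purely for illustration; the real exercise would use the GHCN station files.

```python
# Toy sketch of checking station coverage by year, the issue described above.
# The records below are made-up placeholders, not real GHCN entries.
import pandas as pd

records = pd.DataFrame({
    "station": ["A", "A", "B", "B", "C", "C", "C"],
    "year":    [1900, 1965, 1925, 1965, 1965, 2006, 2007],
    "temp_c":  [11.2, 11.8, 9.4, 9.9, 15.1, 15.6, 15.3],
})

stations_per_year = records.groupby("year")["station"].nunique()
print(stations_per_year)

# The fraction of the full station-by-year grid that is actually filled:
grid_cells = records["station"].nunique() * records["year"].nunique()
coverage = len(records) / grid_cells
print(f"Coverage: {coverage:.0%} of station-year cells have data")
```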

In general, that review said the proxies sometimes contain useful information, but scientists have to be careful about how they use them, and they need to be honest about the uncertainties. The panel specifically cautioned that uncertainties of the published reconstructions have been underestimated. So how are proxy-based reconstructions done? Let’s start with tree rings. As trees grow, they add a ring of new wood around their trunks every year. Scientists measure the width and density of these rings by taking small, pencil-like cores out of the trunk, and the general principle is that trees grow faster and further in good years than bad, so thick rings mean favorable conditions, which certainly would include warmth. So variations in these rings might, in some cases, correlate with variations in temperature. The first problem, which is obvious to anyone who’s ever seen a tree stump, is that the ring width patterns can be completely different depending on which side of the tree you take the core from. And the National Academy’s panel noted that many other things than temperature affect tree ring growth, such as precipitation, disease, fire, and competition from other trees. Scientists need to try to find locations where they are sure temperature is the main controlling factor, but even if they are diligent, it’s not always possible to know if that’s the case.

They also emphasize that it’s not enough to look at a single tree. If a pattern found in a tree core is truly a climate signal, it should be seen in cores taken from at least 10 to 20 trees in an area, because a single tree can suffer storm damage or be attacked by pests. So whenever you see a tree-ring-based reconstruction, the first question you need to ask is how many trees were sampled. But good luck finding out. One of the problems we run into when we look at these studies is the number of times scientists rely on insufficiently large samples, or worse, take a large sample and then throw out the ones that don’t tell them what they want to see, or simply refuse to say how many trees they examined. Canadian researcher Stephen McIntyre spent about 15 years blogging at the site ClimateAudit.org, detailing his efforts to get tree ring researchers to report these things, often without success. If they’re not deliberately hiding something, they’re sure doing a good imitation. Another problem with tree rings is that as a tree gets older and its trunk widens, if the volume of growth is constant, the width of each year’s ring will decrease even if temperatures stay the same. Scientists need to use statistical models to remove this trend from the data, but every time you start manipulating data, even for valid reasons and carefully, it creates further uncertainties. So it’s far from straightforward. For instance, the National Academy’s panel focused attention on two issues that arose during the debates about the Michael Mann hockey stick graph.
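The geometric effect described above is easy to see with a toy calculation: if a tree adds a roughly constant cross-sectional area of new wood each year (an assumption made purely for illustration), ring widths shrink as the trunk radius grows, and the raw widths then have to be standardised against a fitted age curve before they can say anything about climate.

```python
# Sketch of why ring widths shrink with age even under constant growth,
# and of a crude form of detrending. Numbers are illustrative only.
import math

AREA_PER_YEAR = 8.0  # cm^2 of new wood per year, held constant by assumption

radius = 1.0
widths = []
for year in range(60):
    # new ring: an annulus of area AREA_PER_YEAR around the current radius
    new_radius = math.sqrt(radius**2 + AREA_PER_YEAR / math.pi)
    widths.append(new_radius - radius)
    radius = new_radius

print(f"ring width in year 1:  {widths[0]:.2f} cm")
print(f"ring width in year 60: {widths[-1]:.2f} cm")  # much narrower, same "growth"

# Crude detrending: divide each width by a fitted age curve (here simply the
# running mean as a stand-in) to produce a dimensionless ring-width index.
expected = [sum(widths[:i + 1]) / (i + 1) for i in range(len(widths))]
index = [w / e for w, e in zip(widths, expected)]
```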

First, they pointed out that the underlying theory assumes the correlation between temperature and tree ring widths must be constant over time. If wide rings in the 20th century mean temperatures were high, the narrower rings hundreds of years ago mean it was cooler.

But what if this sub-theory doesn’t hold? What if something else changes the growth pattern from time to time? It might sound like a weird thing to worry about, but when you start checking tree rings against actual recent thermometer data, you find significant evidence that it does happen. For instance, after 1960, tree rings in many locations around the world started getting narrower even while thermometers said local temperatures were rising. Scientists gave this a fancy label—the Divergence Problem—waved it away by saying it was probably a one-off occurrence, and then started deleting the post-1960 data so that people wouldn’t notice it. And we discussed a particularly glaring example of this approach in our video on Hiding the Decline. Unfortunately, as Rudyard Kipling once said, giving something a long name doesn’t make it better. On the contrary, the Divergence Problem undermines the whole field, or forest, because if trees aren’t picking up the warming happening now, how do we know they didn’t also fail to pick it up then? If narrow tree rings are happening during a warm interval today, how can scientists insist that narrow tree rings prove it was cold in the past? And worse, instead of being honest about the question, scientists simply resorted to hiding the decline, hoping no one would notice. It didn’t work.

Another issue the National Academy pointed to, still on the tree ring proxy, was that some kinds of tree are definitely not good for recording temperatures and should be avoided. They particularly singled out bristlecone pines. These are small conifers that grow to a great age, which of course makes them superficially appealing. Unfortunately, over their long lives, they form twisted, bunched-up trunks with ring width patterns that have nothing to do with temperature. And one of the discoveries made by Stephen McIntyre in his analysis of the Mann hockey stick was that its shape depended entirely on a set of 20 bristlecone pine records from Colorado that have a 20th-century trend of bigger rings despite, awkwardly, coming from a region where thermometers say no warming took place. This figure shows in the top panel the result of applying Mann’s statistical method to a collection of over 200 tree ring proxies, including the 20 bristlecone series, using a flawed method that puts most of the emphasis on those bristlecones. It has a compelling hockey stick shape. The bottom panel shows the same calculation after removing just the 20 bristlecone pine records. It’s clear that the hockey stick shape is entirely due to tree rings that experts have long known are not valid for showing temperature. What’s worse, as McIntyre has pointed out, Mann himself computed the bottom graph, but he hid the results instead of showing them to his readers.

This pattern is far too common. When we look hard at paleoclimate reconstructions, they fall apart on close inspection, but the scientists who do them almost never tell you about their weaknesses upfront. In fact, it’s happened so often that you’re justified in assuming it’s the rule, not the exception.

Another series that used to be popular in climate reconstructions was a collection of Russian tree rings from the Polar Urals region, introduced in a 1995 journal article by the late British climatologist Keith Briffa and his co-authors. They argued that their tree ring reconstruction showed the 20th century was quite warm compared to the previous 1,100 years, and they specifically identified the years around AD 1000 as among the coldest of the millennium. Here’s that chart.

This Briffa Polar Urals data series naturally became very popular in other tree ring reconstructions. But the problem was that the early part of the data was only based on three trees, which is not enough for confident conclusions. In 1998, some other scientists obtained more tree ring samples from the same areas, and suddenly the picture looked completely different. Instead of AD 1000 being super cold, it was right in the middle of the hottest period of all—the supposedly non-existent Medieval Warm Period—and the 20th century was no longer the least bit unusual.

So what did Briffa and his colleagues do? Did they publish a correction or let people know that they’d actually found evidence of a Medieval Warm Period? No, of course not. They just quietly stopped using Polar Urals data and switched to a new collection of tree rings from the nearby Yamal Peninsula that had the right shape. Now that switcheroo was bad enough, but the story gets worse. The dogged Steve McIntyre asked Briffa to release his Yamal data, but Briffa steadfastly refused. Eventually, after nearly a decade, the journal where he published his research ordered Briffa to release it, and McIntyre promptly made two remarkable discoveries. First, the number of trees in the 20th-century segment dropped off to only five near the end, which clearly fails the data quality standard. Second, McIntyre found that another scientist, Fritz Schweingruber, who happened to be a co-author of Briffa, had already archived lots of tree ring data from the same area, and while it looked similar to Briffa’s up to the year 1900, instead of going up in the 20th century, it went down. Briffa, surprise, surprise, hadn’t used it. So it’s not just incomplete data that happens to have a bias; it’s data that’s been deliberately chosen to introduce one.

This graph from McIntyre’s ClimateAudit website shows a close-up of the 20th-century portion. The red line is the data Briffa used, the black line is the Schweingruber data, and the green line is the result from combining all the data together. Clearly, when you include the more complete data, the blade of the hockey stick disappears, and the 20th century shows a slight cooling, not warming, which is kind of important to the story. Another source of bias in tree ring reconstructions comes from the practice of something called pre-screening. Recall that the National Academy of Sciences panel said that researchers should sample a lot of trees at a location, and if there is a climate signal, it should be common to all of them. If modern temperatures line up with some of the tree cores but not others, it might be a spurious correlation. This would mean the early portion of the record is not reliable for temperature reconstructions. In a 2006 study, Australian statistician David Stockwell illustrated the problem by using a computer to generate a thousand sets of random numbers, each one 2,000 numbers long. He selected the ones where the last 100 numbers happened to correlate with the orthodox 20th-century global average temperature series and threw out the rest, then combined the data together the way paleoclimatologists do. The result was an impressive hockey stick, which according to common practice would lead to the conclusion that today’s climate is the warmest in the past millennium. The problem is the graph has absolutely no information about the past climate in it, true or false.
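The pre-screening effect Stockwell demonstrated can be reproduced in a few lines. The sketch below is not his code; it uses made-up parameters, random walks as stand-in “proxies”, and a simple warming ramp as a stand-in for the instrumental record, but the logic is the same: screen noise against modern warming, average the survivors, and a hockey stick appears.

```python
# Sketch of the pre-screening effect described above: averaging only those
# random series whose recent end happens to correlate with modern warming
# manufactures a hockey stick out of pure noise. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_series, length, modern = 1000, 2000, 100

# Random-walk "proxies" containing no climate information at all.
proxies = rng.normal(size=(n_series, length)).cumsum(axis=1)

# Stand-in for the 20th-century instrumental record: a simple warming ramp.
instrumental = np.linspace(0.0, 1.0, modern)

# Pre-screen: keep only series whose last 100 values correlate with the ramp.
corrs = np.array([np.corrcoef(p[-modern:], instrumental)[0, 1] for p in proxies])
keep = proxies[corrs > 0.5]
print(f"{len(keep)} of {n_series} random series pass screening")

# Average the keepers, as a reconstruction would: the result ends in an uptick
# even though the inputs are random numbers with no information about the past.
reconstruction = keep.mean(axis=0)
print(reconstruction[:5], "...", reconstruction[-5:])
```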

Instead, it was constructed using random numbers that were then pre-screened to fit modern temperatures and then spliced to the modern temperature record to create the illusion of providing information about the past, which is exactly what far too many tree ring researchers are doing now. One way to guard against generating spurious results like this one is to use all the data from a sampling location, but researchers on a mission don’t do so. Instead, they pre-screen and may even end up throwing out most of the data they’ve collected if it’s what it takes to get the result they wanted. In 1989, American climate scientist Gordon Jacoby and his co-author Rosanne D’Arrigo published a reconstruction of northern hemisphere temperatures that had the usual hockey stick shape, although it only went back to 1670. In the article, the authors said they sampled data from 36 sites but only kept data from 10 of them. So McIntyre emailed Jacoby and asked for the others, and Jacoby, unsurprisingly, refused to show them. What is surprising is the frankness of his explanation: “Sometimes, even with our best efforts in the field, there may not be a common low-frequency variation among the cores or trees at a site. This result would mean that the trees are influenced by other factors that interfere with the climate response. There can be fire, insect infestation, wind or ice storm, etc., that disturb the trees, or there can be ecological factors that influence growth. We try to avoid the problems but sometimes cannot.

If we get a good climatic story from a chronology, we write a paper using it. That is our funded mission. It does not make sense to expend efforts on marginal or poor data, and it is a waste of funding agency and taxpayer dollars.” The rejected data are set aside and not archived. And you can guess what makes a good climatic story. McIntyre eventually gave up trying to get the 26 datasets Jacoby threw away, but Jacoby died in 2014, and the same year, his university archived a lot of his data, and later in the fall of 2023, McIntyre noticed that buried in the archive was one of the series Jacoby had rejected, from Sukakpak Peak in Alaska. Even though it was close to the two sites that Jacoby and D’Arrigo had retained, and had at least as many individual tree cores in it as other sites, it was rejected as being poor data. And here’s what it looks like: the ring widths, if they’re a temperature proxy, show that the Medieval period was very warm, then there were a couple of very cold periods, and the 20th century was nothing unusual, which is not a good climate story, which is what poor data now means to these sorts. So they threw it out. You can see how the game works. When they get a hockey stick shape, they say it’s based on good data, and when we ask how they define good, the answer is, if it’s shaped like a hockey stick, QED.

Now let’s look at a very different type of temperature proxy, marine sediments. Here scientists look at organic compounds called alkenones, which are produced in the ocean by tiny creatures called phytoplankton and which settle in layers on the ocean floor. Since alkenones have chemical properties that correlate to temperature, by drilling cores out of the ocean floor and examining the changing density of alkenones in layers that they estimate to have been formed at various times, scientists can say something about the past climate. Once again, there’s a lot more uncertainty than we often hear about, because the layers form very slowly. Unlike tree rings, alkenone layers don’t pick up year-by-year changes, only average changes over multiple centuries. A single data point will represent the alkenone density in a thin layer of a core sample, but it might, at best, indicate not a single year but average climate conditions over several hundred years.
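A toy example of that centennial blurring, with entirely made-up numbers: a 50-year spike of 1.5°C almost vanishes once each data point has to average roughly 300 years of conditions.

```python
# Sketch of how centennial-scale averaging (as in sediment proxies) flattens
# a brief temperature spike. All numbers are made up for illustration.
import numpy as np

temps = np.full(2000, 15.0)       # two millennia of annual "temperatures"
temps[1000:1050] += 1.5           # a 50-year spike of +1.5 C

# Suppose each proxy data point averages roughly 300 years of conditions.
bin_width = 300
proxy_points = [temps[i:i + bin_width].mean()
                for i in range(0, temps.size, bin_width)]
print([round(p, 2) for p in proxy_points])
# The 50-year, +1.5 C spike survives only as a ~0.25 C bump in one bin,
# which is why such records cannot be compared with decadal modern trends.
```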

As a result, they can’t be used for comparing modern short-term warming and cooling trends to the past. The appropriate comparison would be a single data point for temperature from 1823 to 2023. On the plus side, because thin layers cover long periods, a single sediment core can provide information a long way into the past, even 10,000 years or more. And thus it was that in March 2013, headlines around the world announced that the Earth was now warmer than any time in the past 11,000 years, based on a new proxy reconstruction published in Science magazine by a young scientist named Shaun Marcott, based mostly on a global sample of alkenone cores collected by other scientists in previous years. The graph showed that the climate had warmed after the end of the last glaciation, 11,000 years ago, stayed warm for millennia, then cooled gradually until the start of the 20th century, after which it warmed at an exceptional rate, undoing 8,000 years of cooling in only one century. Gotcha, right? Except a reader at Climate Audit soon noticed something odd. Marcott had just finished his PhD at Oregon State University, and the paper in Science was based on one of his thesis chapters, which was posted online, and, drum roll, please, in that version, there was no uptick at the end, no 20th-century warming, no hockey stick. So where did the blade of the stick come from in the version published in Science? While climate scientists were busy proclaiming the Marcott result as more proof of the climate crisis, it fell to outsiders, like once again Steve McIntyre and his readers at Climate Audit, to dig into the details. In this case, McIntyre was able to obtain the Marcott data promptly, and to show that the big jump at the end was based on just one single data point.

As a mining consultant, McIntyre also knew something important about drill cores. The topmost layer, which represents the most recent data, can be contaminated during the drilling process. He wanted to know how the various scientists who collected the alkanone samples dealt with that issue, so he looked up the original studies, and to his surprise, he found that they didn’t consider the core tops to be reliable measures of recent temperatures. Most of them only reported temperature proxies starting centuries in the past, even a thousand years or more. Marcott and his co-authors had redated the cores to the present, but if they used the dates assigned by the original authors, there would be no uptick at the end. After being confronted with this data, Marcott and his co-authors put out a posting on the web in which they made a startling admission: “The 20th-century portion of our paleo temperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.” But of course, the damage had been done. How many news stories that pounced on the original even mentioned this critical correction, let alone made a big fuss over it?

Now let’s look at another popular type of proxy, the one that comes from drilling out cores in large ancient ice caps like the ones over Greenland and Antarctica. These cylinders are believed to provide evidence of temperatures back hundreds of thousands of years, because every year, a layer of snow becomes ice, and the chemical composition of the ice contains clues about temperature. One of the most famous of these reconstructions is the Vostok ice core from Antarctica.

It shows that most of the past half-million years have been spent in Ice Age conditions, interrupted only by short interglacial periods. The last 10,000 years, our current interglacial, has been longer than the previous three, but colder than the previous four. The ice core record also shows that changes in and out of ice ages are extremely rapid. When we start diving into the next glaciation, we may not have much time to prepare, assuming, of course, that ice cores are reliable. It’s no good for us to point to methodological uncertainties when proxies confirm the orthodox and then tout their precision when they challenge it. And one important point about ice cores is that, as with sediment layers, the bubbles don’t take definitive shape in just one year or a couple of years, so there’s a certain degree of blurring.

That means that they can miss significant spikes or dips in temperature if they’re sudden and brief. “Brief” here being a word that can even extend to a century. Which isn’t to say that proxies are inherently useless, or even disreputable. On the contrary, as we said at the outset, we applaud the ingenuity of researchers who look for indirect ways of measuring things that matter when the direct ones aren’t available. But we insist that they be honest in how they collect and sort the data, and how they present it, including how much certainty they claim that it carries. Oh, there’s one more key point that we need to make about the whole business of using proxy data to reconstruct past temperatures. During the overlap period when we have both thermometer and proxy data, the challenge is to construct a statistical model connecting them.

And the problem is that, mathematically speaking, it’s well known that many different models can be constructed with no particular reason to favor one over the others. In a 2011 study in the Annals of Applied Statistics, two statisticians, Blakely McShane and Abraham Wyner, demonstrated this by constructing multiple different models using the Mann hockey stick data and showed that while they implied completely different conclusions about how today’s climate compares to the past, they all fit the data about the same. While climatologists tend to produce results like the red line, it would be just as easy and just as valid to produce the green line from the same data. So the uncertainties in these kinds of reconstructions go way beyond the little error bars climatologists like to draw around their reconstructions, because in truth, they can’t be certain of the shape of the reconstruction to begin with.
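The sketch below is not McShane and Wyner’s analysis, just a synthetic illustration of the point: two models calibrated on the same proxy-thermometer overlap can have nearly identical in-sample fit and still disagree sharply once they are pushed outside the calibration range.

```python
# Sketch of the point attributed to McShane and Wyner above: different models
# can fit the proxy/thermometer calibration period about equally well yet
# diverge badly when extrapolated into the past. Synthetic data, not theirs.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2001)
proxy = np.linspace(0, 1, years.size) + rng.normal(0, 0.3, years.size)
temp = 0.5 * proxy + rng.normal(0, 0.1, years.size)   # calibration target

lin = np.polyfit(proxy, temp, 1)        # model 1: linear
cub = np.polyfit(proxy, temp, 3)        # model 2: cubic

for name, coeffs in [("linear", lin), ("cubic", cub)]:
    resid = temp - np.polyval(coeffs, proxy)
    print(f"{name:6s} calibration RMSE: {np.sqrt((resid**2).mean()):.3f}")

# Now feed both models a pre-instrumental proxy value outside the range seen
# during calibration: the "reconstructed" temperatures disagree sharply.
old_proxy = -1.5
print("linear reconstruction:", np.polyval(lin, old_proxy))
print("cubic  reconstruction:", np.polyval(cub, old_proxy))
```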

So yes, by all means apply proper scientific methods to reconstructing the past climate, but proper ones, handling data honestly and recognizing the often very large amount of uncertainty. For the Climate Discussion Nexus, I’m John Robson, and that’s our backgrounder on temperature reconstruction from the pre-thermometer era.

Dominion’s pile driving boat violates the Marine Mammal Protection Act

From CFACT

By David Wojick

The boat is the Orion, and it is BIG: about 710′ long, with a 5,000-ton crane capacity and eight powerful thrusters for dynamic positioning. Its upcoming job is to place and drive the 178 enormous monopiles in Dominion Energy’s huge offshore wind facility.

Turns out the Orion has a serious noise problem. Its thrusters are so loud they exceed the harassment threshold established by NOAA under the Marine Mammal Protection Act. Harassment of marine mammals is illegal unless authorized by NOAA’s National Marine Fisheries Service (NMFS), and no such authorization exists.

Dominion applied for and received NMFS authorization to harass almost 80,000 marine mammals while constructing its facility. But that was for the noise of pile driving and sonar surveys. The noise from dynamic positioning was not included.

It, therefore, appears that the Orion cannot set and drive piles until the necessary authorization has been applied for and issued. This could take some time.

Dynamic positioning means the boat can be held motionless despite significant wind, waves, and current acting to move it. This is essential because if the boat moved while driving a pile, that pile would not be vertical, and a leaning pile is useless. The piles each weigh about 1,500 tons with a diameter of 28 feet, so holding them perfectly steady is a feat.

So the eight thrusters work like tugboats, as each is a separate, powerful propulsion device. Orion is a DP3 boat, meaning it has the most elaborate dynamic positioning system.

The harassment level noise from the Orion was measured by marine acoustics expert Robert Rand while the boat was working on the Vineyard Wind facility. His recently released report is here: https://randacoustics.com/papers

Rand’s key findings are potentially very important. This from his report:

“The continuous noise generated by vessel propulsion and dynamic positioning (DP) thrusters significantly surpassed the federal threshold for behavioral harassment, with noise levels exceeding 120 dB out to over 6 kilometers. Given federal agencies’ concerns over the compound effects of continuous and impulse noise, this frequently overlooked issue in regulatory assessments constitutes a definitive risk of behavioral harassment to marine mammals, underscoring the need to reevaluate current protective measures.”
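A back-of-the-envelope way to see how a loud continuous source stays above the 120 dB behavioural threshold for kilometres: under a simple spherical-spreading assumption (transmission loss of 20·log10 of range), a source level of roughly 195 dB re 1 µPa at 1 m, which is an assumed figure for illustration rather than Rand’s measurement, does not fall to 120 dB until several kilometres out. Shallow-water propagation typically carries sound even farther than this simple rule suggests.

```python
# Rough sketch of why continuous thruster noise can exceed the 120 dB
# behavioural-harassment threshold kilometres away. The source level below is
# an illustrative assumption, not Rand's measured value, and simple spherical
# spreading (20*log10 of range) is assumed; shallow water typically spreads
# sound even farther than this.
import math

SOURCE_LEVEL_DB = 195.0   # dB re 1 uPa at 1 m (assumed for illustration)
THRESHOLD_DB = 120.0      # NOAA continuous-noise behavioural harassment level

def received_level(range_m):
    return SOURCE_LEVEL_DB - 20 * math.log10(range_m)

# Range at which the received level drops to the threshold:
r = 10 ** ((SOURCE_LEVEL_DB - THRESHOLD_DB) / 20)
print(f"Received level at 1 km: {received_level(1000):.0f} dB")
print(f"120 dB reached at roughly {r / 1000:.1f} km")
```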

Rand also found that the Orion’s pile driving was significantly louder than that assumed in NMFS authorizations. Here is his Conclusion, which may apply to all pile-driving boats:

“This investigation discovered a substantial underestimation of both impulsive and continuous noise levels by current regulatory standards, suggesting that the actual exposure to harmful noise levels from pile driving for marine mammals like the critically endangered North Atlantic Right Whale is substantially greater than NMFS acknowledges in its existing protective measures. This indicates an urgent need to review and possibly revise NMFS monitoring protocols and mitigation strategies for pile driving to ensure adequate protection for marine mammals against both impulse and continuous underwater noise pollution. The findings detailed in this report underscore the need for immediate action due to the substantial underestimations uncovered by this independent investigation.”

To make things even more complex, the Orion now has new vibratory pile driving technology onboard, which was not included in Dominion’s application for harassment authorization. The use of this technology may not be authorized.

All told, this is a regulatory mess. The harassment-level thruster noise has not been authorized. The pile-driving noise exceeds that assumed for authorization, and new pile-driving technology was not included in the authorization.

Surely, the Orion cannot be allowed to operate under these conditions, as they would violate the Marine Mammal Protection Act.

Conspiracy Discovered at University of Missouri

From Watts Up With That?

News Brief by Kip Hansen — 20 April 2024

Although hidden from the public for years—hidden in plain sight yet invisible to most—a deep and dangerous conspiracy has been operating in America for generations, possibly since its founding. 

Only in the last few weeks has this pernicious conspiracy been brought to light by the brave actions of Henderick “Mort” Morton, who founded a society at the University of Missouri to expose and combat the conspiring malefactors. Carrying a sign in the University’s “Speakers Circle” declaring “NOT A CULT. I PROMISE,” Mort has been recruiting members to the society, which he founded and of which he is president.

The Maneater, the official student newspaper of the University, quotes Mort saying:

“Call us crazy if you want,” Morton said. “We are serious about what we do. We are dedicated to the cause.” 

The first meeting of the Society drew over sixty students, a standing-room-only crowd which had to use a meeting room reserved in the name of another student organization, the Philanthropy Club, while the newly founded Society applies for official recognition.

The “Squirrel Observation Society” is investigating the goals and motives of squirrels in an effort to end their “tyranny.”

“During the meeting, Morton presented a slideshow with squirrels edited into historical photos, explaining that squirrels have been responsible for many past tragedies. Barely stifled laughter filled the room as Morton displayed photos of squirrels edited next to dinosaurs or in the background of the “Washington Crossing the Delaware” painting.”

“I really enjoyed the squirrels whenever I first came to campus, but they brought up some good points tonight,” Patton said.

but….

“There’s something up with them.”

“Look around you … Faculty leave, deans retire, students graduate but one thing always stays: a permanent, undying brigade of squirrels always here and forever silently watching.”

“Why are they here? Who leads them? What are their goals? These are all questions we aim to answer in the Squirrel Observation Society.”

You can follow them in their quest for truth and justice on Instagram.

# # # # #

Author’s Comment:

Brought to you through the efforts of The College Fix and the University of Missouri’s student newspaper, The Maneater.

“The Maneater’s name … signifie[s] a strong and aggressive paper” that serves as the student voice of MU. [ source ]

Personally, I too think there is something up with those danged squirrels!

Yes, this is the best I can do for humor.  Hope it made you smile.

Thanks for reading.

# # # # #

Wrecked. Day 3. Part 3.  Great Keppel Island, April 2024

From Jennifer Marohasy

By jennifer

According to the Bureau of Meteorology, it was going to storm on Sunday. Instead, the day broke calm and sunny. And so Jenn (not me, there is another one) and M-J from Keppel Dive took a few of us around to the very exposed Wreck Beach, facing due east. This is the beach where you can get properly wrecked on Great Keppel Island – as the story goes.

The map at the dive shop, and me.

According to local legends there is gold here, buried somewhere in the sand with a shipwreck, or three.

I found gold by way of a nudibranch, specifically Chromodoris kuiteri.

The tail end of a nudibranch with filaments of gold. Photograph by Jenn Marohasy at Wreck reef on Sunday, April 21.
Can you see the nudi, amongst the corals?

Most striking when you drop down to the reef at the northern end of the bay is all the bleaching. It is everywhere at the reefs fringing this island.

There is so much bleached coral along the wall that we swam from the beach east, northern end. Wreck Beach reef, Sunday April 21.

What is not bleached seems to be a particularly dark chocolate brown: it seems much of the coral has either no zooxanthellae or too many.

Plate corals a very dark brown replete with zooxanthellae, contrasted against the stark white branching that is bleached.
Contrasting bleached versus dark brown corals, but these perhaps brown from macro algae rather than symbiotic algae.
What species of coral is this? It looks a healthy brown with purple tips. Wreck Beach reef, April 21, 2024.

Can someone explain to me how this works: how do the zooxanthellae become toxic to the coral when the water becomes too warm? What is the physical mechanism?

I’m told that it is not a case of the zooxanthellae dying in situ; rather, the coral polyps kick them out.

Leaving Wreck Beach reef, for Secret Cove. More about this secret and a shark in Part 4, tomorrow.
Trying to remember the name of this fish. There were a lot of them here, and around the headland at Secret Cove.

L A Times Cherry Picks & Misrepresents NOAA Climate Data to Exaggerate March 2024 U.S. and Global Temperature Outcomes

From ClimateRealism

Guest Essay by Larry Hamlin

The L A Times article and headline shown below exaggerate the March 2024 U.S. and global temperature outcomes by cherry picking and misrepresenting data that mischaracterizes what the data actually shows.

The Times article makes the following claims regarding the U.S. for the period January through March 2024:

“In the United States, March was the 17th warmest in the 130-year data record, according to the National Oceanic and Atmospheric Administration. The average temperature in the contiguous U.S. was 45.1 degrees — 3.6 degrees above average.”

The Times article does not present the readily available NOAA-measured Maximum Contiguous U.S. Temperature for the month of March from 1895 through 2024, as shown below.

The NOAA March data clearly shows that maximum temperatures across the contiguous U.S. have been consistently declining since March 2012 – a highly significant point which is unmentioned in the Times article.
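Readers can check a “declining since 2012” claim themselves by exporting the March maximum-temperature series from NOAA’s Climate at a Glance page and fitting a simple least-squares trend, as sketched below. The values in the sketch are placeholders, not NOAA data; substitute the downloaded numbers before drawing any conclusion.

```python
# Sketch of how to check a "declining since 2012" claim: fit a least-squares
# trend to the March maximum temperatures for 2012-2024. The values below are
# placeholders, not NOAA data; replace them with the downloaded CSV column.
import numpy as np

years = np.arange(2012, 2025)
march_tmax_f = np.array([  # placeholder values only
    57.0, 52.1, 51.0, 54.2, 55.9, 52.8, 50.5, 51.9, 53.6, 53.0, 52.4, 52.9, 53.4
])

slope, intercept = np.polyfit(years, march_tmax_f, 1)
print(f"Trend: {slope:+.2f} F per year ({slope * 10:+.1f} F per decade)")
```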

This NOAA measured historical U.S. March temperature data behavior, as shown above, does not support “the heat just keeps coming” hype in the Times article.

The NOAA data shows, as indicated below from NOAA’s website, that the March 2024 temperature is only the 22nd highest temperature measured during this period, with the highest measurement occurring in March 1910 and with many other earlier years ranking higher.

The Times article claim that March was the “17th warmest in the 130-year data record” is incorrect because that claim is based on the average rather than the maximum NOAA temperature data for the contiguous U.S.

The Times continues to mischaracterize “average temperatures” instead of “maximum temperatures” in claiming “warmest” and “hottest” temperature outcomes, as it did regarding its flawed claim that the summer of 2023 was “The Hottest Summer Ever,” as addressed here.

The “hottest” or “warmest” temperature in March 2024 was 56.61 degrees F (shown in NOAA’s March temperature data above) versus the “average temperature” of 45.1 degrees F in March 2024.

The Times article also claims that “January through March marked the fifth-hottest start to the year in the U.S., NOAA said.” This claim is discussed below.

NOAA’s maximum contiguous U.S. temperatures for the period between January through March 2024 are shown below.

This NOAA data again shows the declining trend, beginning in 2012, in the Contiguous U.S. January through March temperatures – a highly significant point again unaddressed by the Times.

Additionally, this NOAA data, as shown below, indicates that the January through March 2024 period is only the 11th highest (49.97 degrees F) in the period 1895 through 2024, and not the 5th highest as noted in the Times article, which is again based on average instead of maximum temperature anomaly data.

The NOAA January through March 2024 Contiguous U.S. maximum temperature data does not support the Times theme that “the heat just keeps coming”.

Furthermore, NOAA’s USCRN March 2024 maximum temperature anomaly data for the Contiguous U.S., shown below, clearly establishes there is no increasing maximum temperature anomaly trend during the period 2005 through 2024 with the March 2024 value at 1.28 degrees F far below the year 2012 maximum March peak value of 7.72 degrees F.

The Times conceals the clear failure of NOAA’s data to support alarmist claims of an increasing maximum temperature anomaly trend across the contiguous U.S., a failure readily apparent in the NOAA graph above and unaddressed by the Times.

These measured NOAA values of maximum temperature anomaly outcomes in March 2024 do not support the Times theme that “the heat just keeps coming”.

Also, hyping monthly and annual NOAA measurement temperature differences as representing “climate change” (which is properly evaluated over periods from 30 to 100 years) is politics not science.

The Times article provides NOAA global average temperature anomaly data updated through March 2024 but conceals that the global average temperature anomaly value significantly varies throughout the world.

Even though global CO2 levels are ubiquitous in the atmosphere, the average temperature anomaly values across the world are not homogeneous but a highly heterogeneous patchwork, contrary to the flawed claim by climate alarmists that the global average temperature anomaly value alone can be used to characterize global climate.

This patchwork discrepancy clearly demonstrates that multiple nonuniform natural weather and climate causalities dominate global climate behavior, versus the data-unsupported hype that man-made climate change is dominant – a highly significant outcome unaddressed by the Times.

The Times article conceals highly significant regional differences in the global average temperature anomaly data, which are presented and summarized below.

NOAA’s Global Land area average temperature anomaly measured outcome is shown below for March 2024 revealing a decline from March 2023 (2.09 degrees C versus 2.19 degrees C respectively) with year 2016 (an El Niño event year value of 2.46 degrees C) remaining the highest measured anomaly outcome.

The Global Land area is where Earth’s 8 billion+ people live. Based on NOAA’s measured average global land area temperature anomaly data there is no “the heat just keeps coming” theme in NOAA’s Global Land regions in March 2024.

NOAA’s Northern Hemisphere Land area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (2.39 degrees C versus 2.62 degrees C respectively) with year 2016 (an El Niño event year value of 3.12 degrees C) remaining the highest measured outcome. There is no “the heat just keeps coming” theme in the Northern Hemisphere in March 2024.

NOAA’s Asia Land area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (2.52 degrees C versus 4.01 degrees C respectively) with year 2008 (4.20 degrees C) remaining the highest measured outcome. There is no “the heat just keeps coming” theme in Asia in March 2024.

NOAA’s Oceania Land area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (0.93 degrees C versus 1.10 degrees C respectively) with year 2016 (an El Niño event year value of 1.85 degrees C) remaining the highest measured outcome. There is no “the heat just keeps coming” theme in Oceania in March 2024.

NOAA’s Gulf of Mexico Land and Ocean area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (0.95 degrees C versus 1.51 degrees C respectively) with year 2020 (1.63 degrees C) remaining the highest measured outcome. There is no “the heat just keeps coming” theme in the Gulf of Mexico in March 2024.

NOAA’s Hawaiian Region Land and Ocean area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (0.31 degrees C versus 0.61 degrees C respectively) with years 1947 and 2017 (1.11 degrees C) remaining the highest measured outcomes. There is no “the heat just keeps coming” theme in the Hawaiian Region in March 2024.

NOAA’s Arctic Land and Ocean area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (2.42 degrees C versus 2.85 degrees C respectively) with year 2019 (4.33 degrees C) remaining the highest measured outcomes. There is no “the heat just keeps coming” theme in the Arctic in March 2024.

NOAA’s Antarctic Land and Ocean area average temperature anomaly measured outcome is shown below for March 2024 revealing the decline from March 2023 (0.24 degrees C versus 0.50 degrees C respectively) with the year 1966 (1.18 degrees C) remaining the highest measured outcome. That peak occurred 58 years ago, with a clearly declining trend over this more-than-five-decade period.

There is no “the heat just keeps coming” theme in the Antarctic in March 2024.

These March 2024 U.S. and global average temperature anomaly regional outcomes presented above reflect climate reality based on climate science data versus climate alarmism hype and politics.

Unfortunately, California and U.S. electricity prices (thanks to Governor Newsom’s and President Biden’s climate alarmist policies) have exploded through the roof, as shown in the graphs below, driven by the climate alarmist, politically mandated use of highly unreliable and hugely costly renewable energy.

The reality of climate alarmism is that the “hype (not heat) just keeps coming”, resulting in huge and unnecessary increases to California and U.S. electricity costs, creating economic hardships for all citizens, businesses, educational, medical, and other necessary organizations that support the creation of economic benefits for our society.

Climate Change Is Normal and Natural, and Can’t Be Controlled

wallup.net

From Watts Up With That?

By Frits Byron Soepyan

NASA claimed that “Earth is warming at an unprecedented rate” and “human activity is the principal cause.” Others proposed spending trillions of dollars to control the climate. But are we humans responsible for climate change? And what can we do about it?

“The climate of planet Earth has never stopped changing since the Earth’s genesis, sometimes relatively rapidly, sometimes very slowly, but always surely,” says Patrick Moore in Fake Invisible Catastrophes and Threats of Doom. “Hoping for a ‘perfect stable climate’ is as futile as hoping the weather will be the same and pleasant, every day of the year, forever.”

In other words, climate change is normal and natural, and you can forget about controlling it.

For instance, a major influence on weather and climate is the solar cycle, driven by the Sun’s magnetic field over periods of eight to 14 years. These cycles release varying amounts of energy and produce dark sunspots on the Sun’s surface. The effects of solar cycles on Earth vary, with some regions warming more than 1°C and others cooling.

Climatic changes occur as a result of variations in the interaction of solar energy with Earth’s ozone layer, which influences ozone levels and stratospheric temperatures. These, in turn, affect the speed of west-to-east wind flows and the stability of the polar vortex. Whether the polar vortex remains stable and close to the Arctic or dips southward determines whether winters in the mid-latitudes of the Northern Hemisphere are severe or mild.

In addition to solar cycles, there are three Milankovitch cycles that range in length from 26,000 to 100,000 years. They include the eccentricity, or shape, of Earth’s elliptical orbit around the Sun. Small fluctuations in the orbit’s shape influence the length of seasons. For example, when the orbit is more like an oval than a circle, Northern Hemisphere summers are longer than winters and springs are longer than autumns.

The Milankovitch cycles also involve obliquity, or the angle that Earth’s axis is tilted. The tilt is why there are seasons, and the greater the Earth’s tilt, the more extreme the seasons. Larger tilt angles can cause the melting and retreat of glaciers and ice sheets, as each hemisphere receives more solar radiation during summer and less during winter.

Finally, the rotating Earth, like a toy top, wobbles slightly on its axis. Known as precession, this third Milankovitch cycle causes seasonal contrasts to be more extreme in one hemisphere and less extreme in the other.

Moving from outer space to Earth, ocean and wind currents also affect the climate.

For instance, during normal conditions in the Pacific Ocean, trade winds blow from east to west along the Equator, pushing warm surface waters from South America towards Asia. During El Niño, the trade winds weaken and the warm water reverses direction, moving eastward to the American West Coast. Other times, during La Niña, the trade winds become stronger than usual, and more warm water is blown towards Asia. In the United States and Canada, these phenomena cause some regions to become warmer, colder, wetter, or drier than usual.

In addition to El Niño and La Niña, there is also the North Atlantic Oscillation, which is driven by low air pressure in the North Atlantic Ocean, near Greenland and Iceland (known as the sub-polar low or Icelandic low), and high air pressure in the central North Atlantic Ocean (known as the subtropical high or Azores High). The relative strength of these regions of low and high atmospheric pressures affects the climate in the Eastern United States and Canada and in Europe, affecting both temperatures and precipitation.

Similarly, Hadley cells are the reason Earth has equatorial rainforests that are bounded by deserts to the north and south. Because the Sun warms Earth the most at the Equator, air on either side of the Equator is cooler and denser. As a result, cool air blows towards the Equator as the warm, less dense equatorial air rises and cools, releasing moisture as rain and creating lush vegetation. The rising air, now drier, spreads north and south near the top of the troposphere and descends over the subtropics, settling in regions made arid by the lack of atmospheric moisture.

These and other phenomena influencing our climate are well beyond the control of humans.

This commentary was first published at Real Clear Markets on March 30, 2024.

CO2 Coalition Research and Science Associate Frits Byron Soepyan has a Ph.D. in chemical engineering from The University of Tulsa and has worked as a process systems engineer and a researcher in energy-related projects.

DAVID BLACKMON: Having Biden Declare A Climate Emergency Is A Crazy Idea

From The Daily Caller

DAVID BLACKMON

I recorded a podcast this week in which the host told me I am an “outlier” for being willing to write the truth about the destructive nature of the Biden administration’s energy policies. It was one of the kindest things anyone has ever said to me, frankly.

So, I guess I will be an outlier again when I write that the idea being considered again by White House officials of having President Biden declare a climate emergency so he can implement a draconian crackdown on the domestic oil and gas industry is frankly crazy. That’s the truth.

Bloomberg reported Thursday that unnamed officials inside the White House said the idea of declaring a climate emergency, first considered in 2021 and again in 2022, is once again under consideration. The only “emergency,” of course, is the president’s flagging approval ratings among impressionable young voters that threaten to derail his re-election chances. Declaring a climate emergency would arm the president with dictatorial powers to hamstring the domestic industry more than his regulators and hundreds of executive orders have already managed to do.

According to Bloomberg’s sources, actions being considered would include suspending offshore drilling, restricting exports of oil and LNG, and “throttling” the industry’s ability to transport its production via pipelines and rail. Given the industry’s crucial nature, it all sounds like a recipe for massive economic disaster.

“The average American is certainly not demanding a climate emergency declaration. It’s the losing team of left-wing Democrat activists and the shrinking base of elites who are,” U.S. Oil and Gas Association President Tim Stewart told me in an interview. “It’s not about climate, it’s about control: Control over the entire U.S. economy, control of production, manufacturing, distribution, and consumption. If you control energy, you control all these things. Which means you have control of the people.”

Stewart notes that the use of emergency powers in this instance would represent the same playbook used by federal, state, and local governments to restrict citizens’ freedoms and choices during the COVID pandemic. But for the president, it would also be a means of shoring up support among the billionaire class that funds both the climate alarmist movement and so many Democrat Party campaigns, including his own campaign for re-election.

That angle was echoed by Tom Pyle, president of the D.C.-based think tank, the Institute for Energy Research. “By now, we have gotten used to incredibly damaging and stupid decisions from the Biden administration, but the idea of declaring a ‘climate emergency’ is in a class by itself,” Pyle told me. “Like the freeze on new LNG permits, the only emergency President Biden is seeking to address with this latest threat is his slippage in the polls among young voters.”

Others with whom I spoke on the matter were skeptical that the White House would really take such an extreme step in the middle of a re-election effort, but that outlook seems naïve, really. After all, who would have predicted last December that the administration would halt all permitting of new LNG export facilities purely for political reasons? Who would have predicted in late 2021 that the president would order the draining of 40% of the nation’s wartime Strategic Petroleum Reserve for no reason other than a pure political calculation designed to try to influence the 2022 midterm election?

Anyone thinking such a move would be made out of a real, good faith effort to somehow impact climate change needs to consider this: Demand for oil and natural gas is a global phenomenon that will not be reduced just because Biden cracks down on the U.S. domestic industry. Such a crackdown would inevitably create the flight of billions of dollars in capital to other parts of the world where environmental regulations are far less stringent than in the United States.

The climate alarmists advocating for this crazy policy action like to ignore the reality that the Earth has only one atmosphere which everyone shares. The U.S. oil and gas industry has dramatically cut emissions of both methane and CO2 even as it has achieved new records in production. No other nation on Earth can make a similar claim.

This is indeed a crazy idea, but it would be a mistake to assume it is not being seriously considered, and for all the wrong reasons.

David Blackmon is an energy writer and consultant based in Texas. He spent 40 years in the oil and gas business, where he specialized in public policy and communications.

The views and opinions expressed in this commentary are those of the author and do not reflect the official position of the Daily Caller News Foundation.


Unnecessary Net Zero, Part II: A Demonstration with Global Carbon Project Data

From Roy Spencer, PhD

April 23rd, 2024 by Roy W. Spencer, Ph.D.

Some commenters on my previous blog post, Net Zero CO2 Emissions: A Damaging and Totally Unnecessary Goal, were dubious of my claim that nature will continue to remove CO2 from the atmosphere at about the same rate even if anthropogenic emissions decrease…or even if they were suddenly eliminated.

Rather than appeal to the simple CO2 budget model I created for that blog post, let’s look at the published data from the 123 (!) authors the IPCC relies upon to provide their best estimate of CO2 flows in and out of the atmosphere, the Global Carbon Project team. I created the following chart from their data spreadsheet available here. Updated yearly, the 2023 report shows that their best estimate of the net removal of CO2 from the atmosphere by land and ocean processes has increased along with the rise in atmospheric CO2. This plot is from their yearly estimates, 1850-2022.

The two regression line fits to the data are important, because they imply what will happen in the future as CO2 in the atmosphere continues to rise. In the case of the nonlinear fit, which has a slightly better fit to the data (R² = 89.3% vs. 88.8%), the carbon cycle is becoming somewhat less able to remove excess CO2 from the atmosphere. This is what carbon cycle modelers expect to happen, and there is some weak evidence that this is beginning to occur. So, let’s conservatively assume that the nonlinear rate of removal (a gradual decrease in nature’s ability to sequester excess atmospheric CO2) will hold in the coming decades as a function of atmospheric CO2 content.
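To make that curve-fitting step concrete, here is a minimal sketch of how the two fits could be reproduced in Python, assuming you have already pulled two yearly series out of the Global Carbon Project spreadsheet. The variable names and data layout below are my assumptions, not the GCP’s actual format.

```python
# Minimal sketch: fit linear and quadratic (nonlinear) models of yearly net
# natural CO2 removal against atmospheric CO2 content, and compare their R².
# The input arrays are placeholders for series extracted from the GCP spreadsheet.

import numpy as np

def fit_removal_curves(co2_ppm: np.ndarray, removal_gtc: np.ndarray):
    """Return (linear coeffs, quadratic coeffs, R² linear, R² quadratic)."""
    lin = np.polyfit(co2_ppm, removal_gtc, deg=1)    # removal ≈ a*CO2 + b
    quad = np.polyfit(co2_ppm, removal_gtc, deg=2)   # removal ≈ a*CO2² + b*CO2 + c

    def r_squared(coeffs):
        pred = np.polyval(coeffs, co2_ppm)
        ss_res = np.sum((removal_gtc - pred) ** 2)
        ss_tot = np.sum((removal_gtc - removal_gtc.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    return lin, quad, r_squared(lin), r_squared(quad)
```

Whichever fit is carried forward, it is the fitted removal-versus-concentration relationship, not the emissions history, that drives the projection in the next section.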

A Modest CO2 Reduction Scenario

Now, let’s assume a 1% per year cut in emissions (both fossil fuel burning and deforestation) in each year starting in 2024. That 1% per year cut is nowhere near the Net Zero goal of eliminating CO2 emissions by 2050 or 2060, which at this point seems delusional since humanity remains so dependent upon fossil fuels. The resulting future trajectory of atmospheric CO2 looks like this:

This shows that rather modest cuts in global CO2 emissions (33% by 2063) would cause CO2 concentrations to stabilize in about 40 years, with a peak CO2 value of 460 ppm. This is only 2/3 of the way to “2XCO2” (a doubling of estimated pre-Industrial CO2 levels).
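For readers who want to see the mechanics of that projection, here is a minimal sketch of the simple budget iteration in Python. The starting concentration, starting emissions and the removal() curve are illustrative placeholders of my own, not Dr. Spencer’s fitted coefficients or the actual GCP values; only the structure of the loop matters.

```python
# Minimal sketch of the simple CO2 budget iteration: each year emissions are cut
# by 1%, nature removes CO2 as a function of the current atmospheric content, and
# the difference changes the concentration. All numeric values are illustrative.

GTC_PER_PPM = 2.13  # roughly 2.13 GtC of atmospheric carbon per 1 ppm of CO2

def removal(co2_ppm: float) -> float:
    """Net natural removal (GtC/yr) vs. atmospheric CO2; a hypothetical, mildly
    saturating curve standing in for the nonlinear regression fit."""
    excess = max(co2_ppm - 280.0, 0.0)            # excess over assumed pre-industrial CO2
    return 0.045 * excess - 0.00005 * excess ** 2

def project(co2_ppm=420.0, emissions=11.0, cut_per_year=0.01, years=80):
    """co2_ppm: assumed starting concentration (ppm); emissions: assumed GtC/yr in 2023."""
    trajectory = []
    for year in range(2024, 2024 + years):
        emissions *= (1.0 - cut_per_year)                        # the 1%/yr cut
        co2_ppm += (emissions - removal(co2_ppm)) / GTC_PER_PPM  # net change in ppm
        trajectory.append((year, round(co2_ppm, 1)))
    return trajectory

if __name__ == "__main__":
    path = project()
    peak_year, peak_ppm = max(path, key=lambda p: p[1])
    print(f"Peak of ~{peak_ppm} ppm around {peak_year}")
```

The concentration peaks when the declining emissions line crosses the rising removal curve, which is exactly the stabilization behaviour described above.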

How Much Global Warming Would be Caused Under This Scenario?

Assuming all of the atmospheric CO2 rise is due to human activities, and further assuming all climate warming is due to that CO2 rise, the resulting eventual equilibrium warming (delayed by the time it takes for mixing to warm the deep oceans) would be about 1.2 deg. C, assuming the observations-based Effective Climate Sensitivity (EffCS) value of 1.9 deg. C we published last year (Spencer & Christy, 2023). Using the Lewis and Curry (2018) value of around 1.6-1.7 deg. C would result in even less future warming.
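The post does not spell out the arithmetic behind that 1.2 deg. C figure, but one way to reproduce it (an assumption on my part) is to scale the sensitivity by the fraction of the way from an assumed pre-industrial level of 280 ppm to a doubling at 560 ppm:

$$\Delta T_{eq} \approx \mathrm{EffCS} \times \frac{C_{peak} - C_0}{2C_0 - C_0} = 1.9 \times \frac{460 - 280}{560 - 280} \approx 1.2\ ^{\circ}\mathrm{C}$$

Using the more conventional logarithmic forcing relation, $\Delta T_{eq} = \mathrm{EffCS} \times \ln(C_{peak}/C_0)/\ln 2$, gives a somewhat higher value of roughly 1.4 deg. C; either way the result sits below the 1.5 deg. C Paris target discussed next.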

And that’s if no further cuts in emissions are made beyond the 33% cut vs. 2023 emissions. If the 1% per year cuts continue past the 2060s, as shown in the 2nd graph above, the CO2 content of the atmosphere would then decline, and future warming would not be in response to 460 ppm, which would be reached only briefly in the early 2060s; the eventual warming would be even lower than 1.2 deg. C. Note these values are below the 1.5 deg. C maximum warming target of the 2015 Paris Agreement, which is the basis for Net Zero policies.

Net Zero is Based Upon a Faulty View of Nature

Net Zero assumes that human CO2 emissions must stop in order to halt the rise in atmospheric CO2. This is false. The first plot above shows that nature removes atmospheric CO2 at a rate based upon the CO2 content of the atmosphere, and as long as that content remains elevated, nature continues to remove CO2 at a rapid rate. Satellite-observed “global greening” is evidence of that over land. Over the ocean, sea water absorbs CO2 from the atmosphere in proportion to the difference in CO2 partial pressures between the atmosphere and the ocean: the higher the atmospheric CO2 content, the faster the ocean absorbs it.
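The air-sea part of that statement is usually expressed as a bulk gas-exchange relation; a schematic form (the symbols here are generic, not taken from the post) is:

$$F_{ocean} \approx k\,\alpha\,\bigl(pCO_2^{\,atm} - pCO_2^{\,sea}\bigr)$$

where $k$ is a gas-transfer (piston) velocity, $\alpha$ is the solubility of CO2 in seawater, and the flux into the ocean grows as the atmospheric partial pressure rises above that of the surface water.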

Neither land nor ocean “knows” how much CO2 we emit in any given year. They only “know” how much CO2 is in the atmosphere.

All that is needed to stop the rise of atmospheric CO2 is for yearly anthropogenic emissions to be reduced to the point where they match the yearly removal rate by nature. The Global Carbon Project data suggest that reduction is about 33% below 2023 emissions. And that is based upon the conservative assumption that future CO2 removal will follow the nonlinear curve in the first plot, above, rather than the linear relationship.

Finally, the 1.5 deg. C maximum warming goal of the 2015 Paris Agreement would be easily met under the scenario proposed here, a 1% per year cut in global net emissions (fossil fuel burning plus land use changes), with a total 33% reduction in emissions vs. 2023 by the early 2060s.

I continue to be perplexed why Net Zero is a goal, because it is not based upon the science. I can only assume that the scientific community’s silence on the subject is because politically driven energy policy goals are driving the science, rather than vice versa.

There’s Nothing “Scientific” About Climate Models

From The Daily Sceptic

BY PAUL SUTTON

On Sunday’s BBC Politics, Luke Johnson asked for evidence that the recent Dubai flooding was due to climate change. Chris Packham glibly responded: “It comes from something called science.”

This simply highlighted his poor scientific understanding. The issue is his and others’ confusion over what scientific modelling is and what it can do. This applies to any area of science dealing with systems above a single atom – everything, in practice.  

My own doctoral research was on the infrared absorption and fragmentation of gaseous molecules using lasers. The aim was to quantify how the processes depended on the laser’s physical properties. 

I then modelled my results. This was to see if theory correctly predicted how my measurements changed as one varied the laser pulse. Computed values were compared under different conditions with those observed. 

The point is that the underlying theory is being tested against the variations it predicts. This applies – on steroids – to climate modelling, where the atmospheric systems are vastly more complex. All the climate models are set up to agree with observations at some initial point and are then run forward to produce projections. Most importantly, for the projected temperature variations, the track record of the models in predicting actual temperature observations is very dubious, as Professor Nicola Scafetta’s chart below shows.

For the climate sensitivity – the amount of global surface warming that will occur in response to a doubling of atmospheric CO2 concentrations over pre-industrial levels – there’s an enormous range of projected temperature increases, from 1.5° to 4.5°C. Put simply, that fits everything – and so tells us almost nothing about the underlying theories. 

That’s a worrying problem. If the models can’t be shown to predict the variations, then what can we say about the underlying theory of manmade climate change? But the public are given the erroneous impression that the ‘settled science’ confirms that theory – and is forecasting disastrously higher temperatures.

Such a serious failing has forced the catastrophe modellers to (quietly) switch tack into ‘attribution modelling’. This involves picking some specific emotive disaster – say the recent flooding in Dubai – then finding some model scenario which reproduces it. You then say: “Climate change modelling predicted this event, which shows the underlying theory is correct.”  

What’s not explained is how many other scenarios didn’t fit this specific event. It’s as if, in my research, I simply picked one observation and scanned through my modelling to find a fit. Then said: “Job done, the theory works.” It’s scientifically meaningless. What’s happening is the opposite of a prediction. It’s working backwards from an event and showing that it can happen under some scenario.
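To see why that matters, here is a deliberately crude toy sketch (not any group’s actual attribution method) of the post-hoc selection problem: run a model many times under different parameter settings, then keep whichever run happens to land near an event that has already been observed.

```python
# Toy illustration of post-hoc scenario selection: many hypothetical model runs,
# then a search for one that "reproduces" an already-observed extreme event.
import random

random.seed(1)

def toy_model(sensitivity: float) -> float:
    """Hypothetical model: simulated peak rainfall (mm) for one storm."""
    return random.gauss(mu=100 + 20 * sensitivity, sigma=40)

observed_event = 250.0  # the extreme rainfall actually observed (mm)

# Scan 50 parameter settings and keep any run that comes "close enough".
runs = [(s / 10, toy_model(s / 10)) for s in range(10, 60)]
matches = [(s, r) for s, r in runs if abs(r - observed_event) < 25]

print(f"{len(matches)} of {len(runs)} runs 'reproduce' the event after the fact")
# The runs that did NOT reproduce it -- and whether any run predicted it before
# it happened -- are exactly what the headline claim leaves out.
```

Finding one matching run says little about the skill of the underlying theory; the information is in the full spread of runs and in genuine out-of-sample prediction.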

My points on the modelling of variations also apply to the work done by Neil Ferguson at Imperial College on catastrophic Covid fatalities. The public were hoodwinked into thinking ‘the Science’ was predicting it. Not coincidentally, Ferguson isn’t a medical doctor but a mathematician and theoretical physicist with a track record of presenting demented predictions to interested parties.

I’m no fan of credentialism. But when Packham tries it, maybe he needs questioning on his own qualifications – a basic degree in a non-physical ‘soft’ science, followed by an abandoned doctorate.

Paul Sutton can be found on Substack. His new book on woke issues The Poetry of Gin and Tea is out now.

Global warming, climate change, all these things are just a dream come true for politicians. I deal with evidence and not with frightening computer models because the seeker after truth does not put his faith in any consensus. The road to the truth is long and hard, but this is the road we must follow. People who describe the unprecedented comfort and ease of modern life as a climate disaster, in my opinion have no idea what a real problem is.
