Global warming, climate change, all these things are just a dream come true for politicians. I deal with evidence and not with frightening computer models because the seeker after truth does not put his faith in any consensus. The road to the truth is long and hard, but this is the road we must follow.
Reader M.P. points out that ‘auto motor und sport’ and other magazines are reporting that BMW is recalling its plug-in hybrids on a large scale. What is the problem?
Since August this year, BMW has been recalling its plug-in hybrid electric vehicle (PHEV) models: the X1 to X3 and X5, the 3 Series and 3 Series Touring, the 2 Series Active Tourer, the 7 Series, the 5 Series (incl. Touring) and the Mini Countryman.
The reasons given are welding errors and impurities introduced during production, some of which may cause short circuits in the high-voltage battery (i.e. the traction battery). Production batches from the period January 20 to September 18 are partially affected.
In addition to the recall, there is also a delivery stop. More than 25,000 cars are affected worldwide, 8,000 of which are in the hands of customers (Germany: 5,300 affected, 1,800 of them with customers). The models already sold may not currently be charged, and may be driven only with restrictions. This should not be a problem, as many customers do not charge their company cars from the grid anyway.
At the end of October, the Federal Motor Transport Authority will begin checking the cars already sold. The procedure takes about 30 minutes if no repair turns out to be necessary.
BMW is not alone with the PHEV problem. Ford, too, has already had to recall its Kuga model. The reason was a fault in the battery pack posing a fire hazard, and owners were prohibited from recharging the battery.
While we must steward the planet God has gifted to us, there is no empirical basis for apocalyptic predictions of impending doom.
The “Climate Clock” looms ten stories above Manhattan’s Union Square so all passersby can track the precise moment the world passes its supposed tipping-point toward irreversible, apocalyptic environmental demise. This clock has that moment of doom pegged at a little more than seven years from today. One of the men who created the clock, artist Gan Golan, said his motivation for the project was the birth of his daughter two years ago.
“This is arguably the most important number in the world,” the team explained to The New York Times, adding, “You can’t argue with science, you just have to reckon with it.” And that is where the problem lies with the environmental doom and gloom — you can absolutely argue with science. That is precisely what the scientific method is: the careful, relentless discipline of skepticism and discovery. It’s testing and questioning what others claim is beyond debate.
How many times has Doomsday been predicted, only to fail to happen at midnight?
Nine leading climate scientists from Germany, France, Finland, and Ireland have, indeed, questioned whether anyone can reliably determine how much time remains between now and an irreversible trajectory toward environmental ruin.
Drawing from 36 different meta-analyses on the question, involving more than 4,600 individual studies spanning the last 45 years, their findings were recently published in the journal Nature Ecology and Evolution. They conclude that the empirical data do not allow scientists to establish ecological thresholds or tipping points. Because natural bio-systems are dynamic, ever-evolving, and adapting over the long term, determining longevity timeframes is currently impossible.
These scholars write that frankly, “we lack systematic quantitative evidence as to whether empirical data allow definitions of such thresholds” and “our results thus question the pervasive presence of threshold concepts” in environmental politics and policy. Their findings also reinforced the contention that “global change biology needs to abandon the general expectation that system properties allow defining thresholds as a way to manage nature under global change.”
Professor José M. Montoya, one of the nine authors and an ecologist at the Theoretical and Experimental Ecology Station in France, told the French National Center for Scientific Research “many ecologists have long had this intuition” that setting reliable, empirically situated tipping-points “was difficult to verify until now for lack of sufficient computing power to carry out a wide-ranging analysis.” But that has now changed.
So no, there is no reliable science behind the new seven-years-to-the-point-of-no-return countdown of the Climate Clock in Union Square, nor for Rep. Alexandria Ocasio-Cortez’s infamous “The world is going to end in 12 years if we don’t act now” scare, or Thunberg’s just-10-years-til-inevitable-doom drum pounding. Such claims simply are not — and cannot be — firmly grounded in any scientific knowledge we currently possess.
Evidence for this conclusion, however, goes beyond the new study just described. 2020 saw the publication of two extremely important books from leading, mainstream environmental-climate scholars on what science says about the earth’s future.
The first is by Michael Shellenberger, a Time magazine “Hero of the Environment,” who explains in his book “Apocalypse Never: Why Environmental Alarmism Hurts Us All” that nearly every piece of scare data presented by the likes of AOC, Leonardo DiCaprio, and Thunberg is not only incorrect but tells a story that is the opposite of the scientific truth. Not only is the world not going to end due to climate change, but in many important ways, the environment is getting markedly better.
Another major environmentalist voice challenging hysteria is Bjorn Lomborg of the Copenhagen Consensus Center think tank, listed by the UK’s liberal Guardian newspaper as one of the 50 people who could save the planet. In his book “False Alarm,” he explains how “climate change panic” is not only unfounded, it’s also wasting trillions of dollars globally, hurting the poor, and failing to fix the very problems it warns us about.
So, what is science genuinely telling us? “Science shows us that fears of a climate apocalypse are unfounded,” Lomborg explains, admitting that while “global warming is real … it is not the end of the world.” “It is a manageable problem,” he adds. He is dismayed that we live in a world “where almost half the population believes climate change will extinguish humanity” and does so under the mistaken assumption that science concludes this. It doesn’t, and he is vexed that this mantra parades under the banner of enlightenment.
It’s imperative we properly steward this beautiful planet God has gifted to us. It was the second command He gave to humanity, after the charge to populate it with generation after generation of new people. But hysteria is not what is called for in this work. Shellenberger, Lomborg, and the nine international ecologists tell us there is simply no empirical basis for the apocalyptic prognostications so needlessly disturbing the dreams of the world’s young people.
This essay extends the previously published evaluation of CMIP5 climate models to the predictive and physical reliability of CMIP6 global average air temperature projections.
Before proceeding, a heartfelt thank-you to Anthony and Charles the Moderator for providing such an excellent forum for the open communication of ideas, and for publishing my work. Having a voice is so very important. Especially these days when so many work to silence it.
I’ve previously posted about the predictive reliability of climate models on Watts Up With That (WUWT), here, here, here, and here. Those preferring a video presentation of the work can find it here. Full transparency requires noting Dr. Patrick Brown’s (now Prof. Brown at San Jose State University) video critique posted here, which was rebutted in the comments section below that video starting here.
Those reading through those comments will see that Dr. Brown displays no evident training in physical error analysis. He made the same freshman-level mistakes common to climate modelers, which are discussed in some detail here and here.
In our debate Dr. Brown was very civil and polite. He came across as a nice guy, and well-meaning. But in leaving him with no way to evaluate the accuracy and quality of data, his teachers and mentors betrayed him.
Lack of training in the evaluation of data quality is apparently an educational lacuna of most, if not all, AGW consensus climate scientists. They find no meaning in the critically central distinction between precision and accuracy. There can be no possible progress in science at all, when workers are not trained to critically evaluate the quality of their own data.
The best overall description of climate model errors is still Willie Soon, et al., 2001, “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties.” Pretty much all the simulation errors and shortcomings described there remain true today.
Jerry Browning recently published some rigorous mathematical physics that exposes at their source the simulation errors Willie et al., described. He showed that the incorrectly formulated physical theory in climate models produces discontinuous heating/cooling terms that induce an “orders of magnitude” reduction in simulation accuracy.
These discontinuities would cause climate simulations to rapidly diverge, except that climate modelers suppress them with a hyper-viscous (molasses) atmosphere. Jerry’s paper provides the way out. Nevertheless, discontinuities and molasses atmospheres remain features in the new improved CMIP6 models.
In the 2013 Fifth Assessment Report (5AR), the IPCC used CMIP5 models to predict the future of global air temperatures. The upcoming 6AR will employ the upgraded CMIP6 models to forecast the thermal future awaiting us, should we continue to use fossil fuels.
CMIP6 cloud error and detection limits: Figure 1 compares the CMIP6-simulated global average annual cloud fraction with the measured cloud fraction, and displays their difference, between 65 degrees north and south latitude. The average annual root-mean-squared (rms) cloud fraction error is ±7.0%.
This error calibrates the average accuracy of CMIP6 models against a known cloud fraction observable. The average annual CMIP5 cloud fraction rms error over the same latitudinal range is ±9.6%, indicating a 27% improvement in CMIP6. Nonetheless, CMIP6 models still make significant simulation errors in global cloud fraction.
Figure 1 lines: red, MODIS + ISCCP2 annual average measured cloud fraction; blue, CMIP6 simulation (9 model average); green, (measured minus CMIP6) annual average calibration error (latitudinal rms error = ±7.0%).
The analysis to follow is a straightforward extension to CMIP6 models of the previous propagation of error applied to the air temperature projections of CMIP5 climate models.
Errors in simulating global cloud fraction produce downstream errors in the long-wave cloud forcing (LWCF) of the simulated climate. LWCF is a source of thermal energy flux in the troposphere.
Tropospheric thermal energy flux is the determinant of tropospheric air temperature. Simulation errors in LWCF produce uncertainties in the thermal flux of the simulated troposphere. These in turn inject uncertainty into projected air temperatures.
For further discussion, see here — Figure 2 and the surrounding text. The propagation of error paper linked above also provides an extensive discussion of this point.
The global annual average long-wave top-of-the-atmosphere (TOA) LWCF rms calibration error of CMIP6 models is ±2.7 Wm⁻² (28 model average obtained from Figure 18 here).
I was able to check the validity of that number, because the same source also provided the average annual LWCF error for the 27 CMIP5 models evaluated by Lauer and Hamilton. The Lauer and Hamilton CMIP5 rms annual average LWCF error is ±4 Wm⁻². Independent re-determination gave ±3.9 Wm⁻²; the same within round-off error.
The small matter of resolution: In comparison with CMIP6 LWCF calibration error (±2.7 Wm⁻²), the annual average increase in CO2 forcing between 1979 and 2015, data available from the EPA, is 0.025 Wm⁻². The annual average increase in the sum of all the forcings for all major GHGs over 1979-2015 is 0.035 Wm⁻².
So, the annual average CMIP6 LWCF calibration error (±2.7 Wm⁻²) is ±108 times larger than the annual average increase in forcing from CO2 emissions alone, and ±77 times larger than the annual average increase in forcing from all GHG emissions.
That is, a lower limit of CMIP6 resolution is ±77 times larger than the perturbation to be detected. This is a bit of an improvement over CMIP5 models, which exhibited a lower limit resolution ±114 times too large.
Analytical rigor typically requires the instrumental detection limit (resolution) to be 10 times smaller than the expected measurement magnitude. So, to fully detect a signal from CO2 or GHG emissions, current climate models will have to improve their resolution by nearly 1000-fold.
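For readers who want the arithmetic spelled out, the detection-limit figures follow directly from the numbers quoted above:

\[
\frac{2.7\ \mathrm{W\,m^{-2}}}{0.025\ \mathrm{W\,m^{-2}}} \approx 108, \qquad
\frac{2.7\ \mathrm{W\,m^{-2}}}{0.035\ \mathrm{W\,m^{-2}}} \approx 77, \qquad
77 \times 10 = 770 \approx 1000\text{-fold}.
\]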
Another way to put the case is that CMIP6 climate models cannot possibly detect the impact, if any, of CO2 emissions or of GHG emissions on the terrestrial climate or on global air temperature.
This fact is destined to be ignored in the consensus climatology community.
Emulation validity: Papalexiou et al., 2020 observed that the “credibility of climate projections is typically defined by how accurately climate models represent the historical variability and trends.” Figure 2 shows how well the linear equation previously used to emulate CMIP5 air temperature projections reproduces GISS Temp anomalies.
Figure 2 lines: blue, GISS Temp 1880-2019 Land plus SST air temperature anomalies; red, emulation using only the Meinshausen RCP forcings for CO2+N2O+CH4+volcanic eruptions.
The emulation passes through the middle of the trend, and is especially good in the post-1950 region where air temperatures are purportedly driven by greenhouse gas (GHG) emissions. The non-linear temperature drops due to volcanic aerosols are successfully reproduced at 1902 (Mt. Pelée), 1963 (Mt. Agung), 1982 (El Chichón), and 1991 (Mt. Pinatubo). We can proceed, having demonstrated credibility to the published standard.
CMIP6 World: The new CMIP6 projections have new scenarios, the Shared Socioeconomic Pathways (SSPs).
These scenarios combine the Representative Concentration Pathways (RCPs) of the 5AR, with “quantitative and qualitative elements, based on worlds with various levels of challenges to mitigation and adaptation [with] new scenario storylines [that include] quantifications of associated population and income development … for use by the climate change research community.“
Increasingly developed descriptions of those storylines are available here, here, and here.
Emulation of CMIP6 air temperature projections below follows the identical method detailed in the propagation of error paper linked above.
The analysis here focuses on projections made using the CMIP6 IMAGE 3.0 earth system model. IMAGE 3.0 was constructed to incorporate all the extended information provided in the new SSPs. The IMAGE 3.0 simulations were chosen merely as a matter of convenience: the paper published in 2020 by van Vuuren et al. conveniently included both the SSP forcings and the resulting air temperature projections in its Figure 11. The published data were converted to points using DigitizeIt, a tool that has served me well.
Here’s a short descriptive quote for IMAGE 3.0: “IMAGE is an integrated assessment model framework that simulates global and regional environmental consequences of changes in human activities. The model is a simulation model, i.e. changes in model variables are calculated on the basis of the information from the previous time-step.
“[IMAGE simulations are driven by] two main systems: 1) the human or socio-economic system that describes the long-term development of human activities relevant for sustainable development; and 2) the earth system that describes changes in natural systems, such as the carbon and hydrological cycle and climate. The two systems are linked through emissions, land-use, climate feedbacks and potential human policy responses. (my bold)”
On Error-ridden Iterations: The sentence bolded above describes the step-wise simulation of a climate, in which each prior simulated climate state in the iterative calculation provides the initial conditions for subsequent climate state simulation, up through to the final simulated state. Simulation as a stepwise iteration is standard.
When the physical theory used in the simulation is wrong or incomplete, each new iterative initial state transmits its error into the subsequent state. Each subsequent state is then additionally subject to further-induced error from the operation of the incorrect physical theory on the error-ridden initial state.
Critically, and as a consequence of the step-wise iteration, systematic errors in each intermediate climate state are propagated into each subsequent climate state. The uncertainties from systematic errors then propagate forward through the simulation as the root-sum-square (rss).
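In symbols: if \(u_i\) is the uncertainty entering at step \(i\) of an \(n\)-step simulation, the propagated uncertainty in the final state is

\[
u_{\mathrm{total}} = \sqrt{\sum_{i=1}^{n} u_i^{2}},
\]

which, for a constant per-step uncertainty \(u\), reduces to \(u\sqrt{n}\). This square-root growth is why the uncertainty envelopes in Figure 3b below widen steadily with projection time.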
Pertinently here, Jerry Browning’s paper analytically and rigorously demonstrated that climate models deploy an incorrect physical theory. Figure 1 above shows that one of the consequences is error in simulated cloud fraction.
In a projection of future climate states, the simulation physical errors are unknown because future observables are unavailable for comparison.
However, rss propagation of known model calibration error through the iterated steps produces a reliability statistic, by which the simulation can be evaluated.
The above summarizes the method used to assess projection reliability in the propagation paper and here: first calibrate the model against known targets, then propagate the calibration error through the iterative steps of a projection as the root-sum-square uncertainty. Repeat this process through to the final step that describes the predicted final future state.
The final root-sum-square (rss) uncertainty indicates the physical reliability of the final result, given that the physically true error in a futures prediction is unknowable.
This method is standard in the physical sciences, when ascertaining the reliability of a calculated or predictive result.
Emulation and Uncertainty: One of the major demonstrations in the error propagation paper was that advanced climate models project air temperature merely as a linear extrapolation of GHG forcing.
Figure 3, panel a: points are the IMAGE 3.0 air temperature projections: blue, scenario SSP1; red, scenario SSP3. Full lines are the emulations of the IMAGE 3.0 projections, made using the linear emulation equation described in the published analysis of CMIP5 models: blue, SSP1; red, SSP3. Panel b is as in panel a, but also shows the expanding 1σ root-sum-square uncertainty envelopes produced when ±2.7 Wm⁻² of annual average LWCF calibration error is propagated through the SSP projections.
In Figure 3a above, the points show the air temperature projections of the SSP1 and SSP3 storylines, produced using the IMAGE 3.0 climate model. The lines in Figure 3a show the emulations of the IMAGE 3.0 projections, made using the linear emulation equation fully described in the error propagation paper (also in a 2008 article in Skeptic Magazine). The emulations are 0.997 (SSP1) or 0.999 (SSP3) correlated with the IMAGE 3.0 projections.
Figure 3b shows what happens when ±2.7 Wm⁻² of annual average LWCF calibration error is propagated through the IMAGE 3.0 SSP1 and SSP3 global air temperature projections.
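For the curious, here is a minimal sketch of that propagation calculation in Python. It assumes the linear emulation form and coefficients of the CMIP5 propagation paper (a CO2 forcing fraction of 0.42, the 33 K greenhouse temperature, and a baseline forcing of roughly 33.3 Wm⁻²), and it drives the emulation with a made-up constant forcing ramp rather than the actual SSP forcing series; it illustrates the method, not the published numbers.

```python
import math

# Assumed parameters (as I read them from the CMIP5 propagation-of-error
# paper); treat these as illustrative, not authoritative.
F_CO2 = 0.42     # fraction of greenhouse warming attributed to CO2 forcing
DT_GH = 33.0     # K, total greenhouse temperature effect
F_0 = 33.3       # W m^-2, baseline total greenhouse forcing
U_LWCF = 2.7     # W m^-2, CMIP6 annual average LWCF calibration error

def emulated_anomaly(total_delta_forcing):
    """Linear emulation: anomaly proportional to cumulative forcing change."""
    return F_CO2 * DT_GH * total_delta_forcing / F_0

def propagated_uncertainty(n_years):
    """Root-sum-square of a constant per-step uncertainty over n annual steps."""
    u_step = F_CO2 * DT_GH * U_LWCF / F_0   # roughly 1.1 K per annual step
    return u_step * math.sqrt(n_years)

# Made-up forcing ramp: +0.035 W m^-2 per year (the all-GHG annual increase
# quoted earlier), NOT the actual SSP forcings.
for n in (1, 10, 50, 90):
    dT = emulated_anomaly(0.035 * n)
    u = propagated_uncertainty(n)
    print(f"year {n:2d}: anomaly ~ {dT:+.2f} K, uncertainty ~ +/-{u:.1f} K")
```

The point of the exercise is visible immediately: the anomaly grows linearly with the forcing, while the uncertainty grows as the square root of the number of steps and dwarfs the anomaly within the first year.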
The uncertainty envelopes are so large that the two SSP scenarios are statistically indistinguishable. It would be impossible to choose either projection or, by extension, any SSP air temperature projection, as more representative of evolving air temperature because any possible change in physically real air temperature is submerged within all the projection uncertainty envelopes.
An Interlude – There Be Dragons: I’m going to entertain an aside here to forestall a misunderstanding that has previously been hotly, insistently, and repeatedly asserted. Those uncertainty envelopes in Figure 3b are not physically real air temperatures. Do not entertain that mistaken idea for a second. Drive it from your mind. Squash its stirrings without mercy.
Those uncertainty bars do not imply future climate states 15 C warmer or 10 C cooler. Uncertainty bars describe a width where ignorance reigns. Their message is that projected future air temperatures are somewhere inside the uncertainty width. But no one knows the location. CMIP6 models cannot say anything more definite than that.
Inside those uncertainty bars is Terra Incognita. There be dragons.
For those who insist the uncertainty bars imply actual real physical air temperatures, consider how that thought fares against the necessity that a physically real ±°C uncertainty requires a simultaneity of hot-and-cold states.
Uncertainty bars are strictly axial. They stand plus and minus on each side of a single (one) data point. To suppose two simultaneous, equal in magnitude but oppositely polarized, physical temperatures standing on a single point of simulated climate is to embrace a physical impossibility.
The idea impossibly requires Earth to occupy hot-house and ice-house global climate states simultaneously. Please, for those few who entertained the idea, put it firmly behind you. Close your eyes to it. Never raise it again.
And Now Back to Our Feature Presentation: The following Table provides selected IMAGE 3.0 SSP1 and SSP3 scenario projection anomalies and their corresponding uncertainties.
Table: IMAGE 3.0 Projected Air Temperatures and Uncertainties for Selected Simulation Years
(Columns: projected anomaly and ±uncertainty, in C, at 1, 10, 50, and 90 years.)
Not one of those projected temperatures is different from physically meaningless. Not one of them tells us anything physically real about possible future air temperatures.
Several conclusions follow.
First, CMIP6 models, like their antecedents, project air temperatures as a linear extrapolation of forcing.
Second, CMIP6 climate models, like their antecedents, make large scale simulation errors in cloud fraction.
Third, CMIP6 climate models, like their antecedents, produce LWCF errors enormously larger than the tiny annual increase in tropospheric forcing produced by GHG emissions.
Fourth, CMIP6 climate models, like their antecedents, produce uncertainties so large and so immediate that air temperatures cannot be reliably projected even one year out.
Fifth, CMIP6 climate models, like their antecedents, will have to show about 1000-fold improved resolution to reliably detect a CO2 signal.
Sixth, CMIP6 climate models, like their antecedents, produce physically meaningless air temperature projections.
Seventh, CMIP6 climate models, like their antecedents, have no predictive value.
As before, the unavoidable conclusion is that an anthropogenic air temperature signal cannot have been, nor presently can be, evidenced in climate observables.
I’ll finish with an observation made once previously: we now know for certain that all the frenzy about CO₂ and climate was for nothing.
All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All of it was for nothing.
All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers:
All for nothing.
Finally, a page out of Willis Eschenbach’s book (Willis always gets to the core of the issue) — if you take issue with this work in the comments, please quote my actual words.
The Guardian is working itself up into a lather over the Arctic again!
For the first time since records began, the main nursery of Arctic sea ice in Siberia has yet to start freezing in late October.
The delayed annual freeze in the Laptev Sea has been caused by freakishly protracted warmth in northern Russia and the intrusion of Atlantic waters, say climate scientists who warn of possible knock-on effects across the polar region.
Ocean temperatures in the area recently climbed to more than 5C above average, following a record breaking heatwave and the unusually early decline of last winter’s sea ice.
The trapped heat takes a long time to dissipate into the atmosphere, even at this time of the year when the sun creeps above the horizon for little more than an hour or two each day.
Graphs of sea-ice extent in the Laptev Sea, which usually show a healthy seasonal pulse, appear to have flat-lined. As a result, there is a record amount of open sea in the Arctic.
“2020 is another year that is consistent with a rapidly changing Arctic. Without a systematic reduction in greenhouse gases, the likelihood of our first ‘ice-free’ summer will continue to increase by the mid-21st century,” he wrote in an email to the Guardian.
The warmer air temperature is not the only factor slowing the formation of ice. Climate change is also pushing more balmy Atlantic currents into the Arctic and breaking up the usual stratification between warm deep waters and the cool surface. This also makes it difficult for ice to form.
“This continues a streak of very low extents. The last 14 years, 2007 to 2020, are the lowest 14 years in the satellite record starting in 1979,” said Walt Meier, senior research scientist at the US National Snow and Ice Data Center. He said much of the old ice in the Arctic is now disappearing, leaving thinner seasonal ice. Overall the average thickness is half what it was in the 1980s.
The downward trend is likely to continue until the Arctic has its first ice-free summer, said Meier. The data and models suggest this will occur between 2030 and 2050. “It’s a matter of when, not if,” he added.
1) As Walt Meier notes, all of these so-called “records” only date back to 1979, in the middle of the period when the Arctic was undergoing substantial cooling and a massive increase in sea ice extent, as HH Lamb observed:
HH Lamb: Climate, History & The Modern World
The idea that the 1970s and 80s represent some kind of norm, either in the short or long term, is unscientific and absurd.
2) The article also notes:
The warmer air temperature is not the only factor slowing the formation of ice. Climate change is also pushing more balmy Atlantic currents into the Arctic and breaking up the usual stratification between warm deep waters and the cool surface.
In fact, the influx of warmer Atlantic waters is key to the recent warming of the Arctic, just as it was in a similar period of Arctic warming between the 1920s and 50s.
It is that factor which is increasing air temperatures, and there is no evidence that this influx has been caused by global warming.
3) Once again, we see the nonsense about “ice-free Arctics”, which keeps getting put back another decade or two. Previous scares have not materialised, and this latest one won’t either, for a very good reason. The Arctic is a very cold place from autumn through to spring, when the sun goes down, and as a consequence there is always far too much sea ice around by June for it to melt away in the short Arctic summer.
Now to the current situation.
Ice growth has just begun in the Laptev, about a week later than last year:
However, if we compare the whole of the Arctic basin with the same date last year, we find that sea ice is much more extensive this year on the western side, off the Canadian coast. Ice is also currently much thicker in the central Arctic than it was last year.
As a result, sea ice volume is actually up on last year:
In other words, swings and roundabouts.
One final consideration. At this time of year, virtually no heat from the sun enters the Laptev Sea. Instead, open seas mean that a lot of the heat escapes into the atmosphere, whence it is lost to space.
Low ice extent in the Arctic actually cools the earth, not the opposite. It is one of the ways in which the earth’s climate regulates itself.
Winter is coming in hard and strong this year, and it’s taking names ACROSS the Lower-48. Hundreds of new low temperature records have been set over the past few days alone, but all have been eclipsed by the “biggie” set Sunday in Montana.
HUNDREDS of cold and snow records have fallen of late: from Texas to Montana, many of the lowest temperatures and the highest snowfalls ever recorded at this time of year are not only being broken, they’re being SMASHED.
Serving as just a few examples:
The National Weather Service reported two broken snowfall records at their Marquette office: “We recorded 8.3 inches [on Sunday], which breaks the old record of 3.1 inches set in 1976 [solar minimum of cycle 20]! This recent snowfall also established a new monthly snowfall record for the month of October at our office. Total snowfall recorded for the month stands at 19.2 inches! This breaks the old record of 18.6 inches set in 1979.”
Eastern Idahoans woke to bone-chilling weather Monday morning, reports eastidahonews.com. According to NWS data, Idaho Falls saw a low of just 1F, utterly shattering the previous record of 17F. In addition, Pocatello reached 3F, smashing its previous record low of 13F. The previous day, Sunday, also saw new record lows of 8 degrees in Idaho Falls and 11 degrees in Pocatello.
As detailed within the articles linked above, the record books have been rewritten from Texas to Montana–but it’s that latter state which claimed the “biggie” during the early hours of Sunday morning, October 25, 2020.
According to NWS data, and as reported by ABCnews.com (one of only a few MSM outlets covering this, but even they’ve buried it under the Cali wildfires): “the temperature in Montana fell to a record breaking 29 degrees below zero, the lowest temperature measured at an official climate station anywhere in the lower 48 states so early in the season in any year.”
That’s a rather ugly, long-winded sentence — so I’ll break it down for you: “the Grand Solar Minimum is upon us, so get your s**t together already!”
The Washington Post has since covered it too, to be fair (though they don’t run it as the headline), writing late Monday evening: “Temperatures throughout much of the Rockies dipped below zero to start the week, falling as low as minus-29.2 in Potomac, Mont., early Sunday — the coldest temperature ever observed this early in the season across the Lower 48.”
The WP calls the ongoing cold “off the charts, with an air mass more typical of December or January than late October.”
Corby Dickerson, a meteorologist at the NWS in Missoula, notes that the U.S. historical temperature database contains 14.5 million observations from Oct 1 to Oct 25, and that Potomac’s reading on Sunday morning was the coldest!
“It’s truly remarkable,” said Dickerson. “There’s no other way to describe it.”
The IPCC regularly publishes reports on the state of the climate, as well as special reports on particular topics, written by thousands of scientists. But who actually checks the accuracy of the IPCC texts? Does the quality assurance work, or do certain groups put their personal stamp on the reports? A new video by Sebastian Lüning explains the review process for the IPCC reports and analyses its strengths and weaknesses.
If you like the video, consider subscribing to the channel “Klimawandel Crashkurs”. To do so, click “subscribe” on the clip’s YouTube page.
In terms of content, the video follows on from this previous clip:
Climate-neutral by 2050: the plan for a “green economic miracle” has one decisive flaw. So runs the headline in WELT. Daniel Wetzel takes a close look at the plans and finds that feasibility in principle is not yet a plan. Even climate think tanks agree with him.
“The criticism that climate-protection experts recently levelled at a similar study by the Fridays for Future movement therefore applies, in principle, to the new Agora study as well: simply demanding the multiplication of all eco-technologies is not, in itself, a plan. Once again, only targets are set, without naming concrete implementation steps.” … “The week before, the Wuppertal Institut had calculated for Fridays for Future what the even more ambitious goal of climate neutrality by as early as 2035 would mean. The secretary general of the Mercator Research Institute on Global Commons and Climate Change (MCC), Brigitte Knopf, had cast doubt on the validity of these calculations on Twitter: ‘An analysis of the economic feasibility is almost completely missing.’”
Brazilian forests are burning again; or should one say, still? Der SPIEGEL reports. So far the great public outcry has failed to materialise, even though the number of fires has reached a new high. According to Spiegel, Brazil’s president has ordered the firefighters back. Forests are important carbon sinks, and losing them is bad in several respects. Professor Hans-Werner Sinn’s proposal that the EU should buy the Amazon seems odd at first glance. At second glance, much less so.
It would also be consistent to forgo the burning of wood worldwide, because forests everywhere are important climate factors. The film Burned depicts the catastrophic damage done in the USA by clearing forests and then burning them in so-called biomass power plants; we are always happy to point to it. In Germany, too, there are efforts to make wood the new coal. Even studies such as the recently presented one by the Wuppertal Institut provide for this. The Nabu in Germany has a clear position on the matter.
Incidentally, consumers can do something as well. Anyone who makes do with meat from the local area instead of from South America makes it less attractive to burn down forests for cattle ranching or soybean cultivation.
A short video illustrates what this looks like at large scale. You can watch it here and consider whether this is the right way to go.
Tesla enjoys the reputation of a saviour. Until now, car recalls have been more of a privilege of the traditional manufacturers. Well, Elon Musk cannot walk on water either, and Tesla currently has to recall 30,000 vehicles in China. Read more in the LA Times.
Is wood the new coal? In Hamburg there are serious plans to burn bushwood from Namibia to generate electricity. Namibia, in turn, imports coal for its own power generation. Could it get any crazier? Robin Wood is protesting against this project.
Der Spiegel presents the climate topic in multimedia form. The third part of the series deals with forests, or more precisely the loss of forests. It looks impressive, without doubt, and it is also very informative. But how they manage to leave out entirely the burning of trees from western forests (also known as biomass) is astonishing. The blame for forest loss is placed solely on agriculture in South America and Asia. Not a word about studies such as the one from Fridays for Future that provide for biomass (burning wood), or about lobbyists who have no problem with burning wood at home yet happily point to every forest fire in the world and claim that more wind turbines in Germany would put the fires out at once. It takes some skill to be blind in that one eye; or does Der Spiegel have a blind spot?
In a column in the TAZ, Peter Unfried addresses the radicalisation of climate movements, for which he has little sympathy.
“But if the climate-policy movement only radicalises itself, it too will end up in the elitist nirvana of seasoned professional know-it-alls.”
Second episode of the Quaschning podcast. This time’s studio guest: Reiner Wahlkampf (a pun on “pure electioneering”).
The author Dr. Daniel Stelter has conducted an exceptionally readable interview with Prof. Dr.-Ing. Holger Watter. Watter is a professor of systems engineering at the Fachhochschule in Flensburg. In the interview he goes into many scientific and physical facts that will certainly not be to the taste of some protagonists of the Energiewende, who otherwise are very fond of pointing to the science. Stelter calls these people the alchemists of modernity.
“The main challenge lies in society’s capacity for discussion, because broad sections of the population and a large proportion of the supposed experts cannot distinguish between ‘kW’ and ‘kWh’ and simplify the challenges with gross negligence. This enables political and economic business models that serve lobby interests, privatise profits, socialise risks and contribute nothing to solving the problem.”
“The best-known and, until recently, largest battery storage facility in the world is the Hornsdale Power Reserve in Australia, with a capacity of 194 MWh (for around 100 million euros). Suppose this storage facility were to carry Germany through one windless night at a low load of about 50 GW: that would mean 194 MWh / 50,000 MW = 3.88 × 10⁻³ hours ≈ 14 seconds! Conclusion: batteries find broad application only in small mobile devices with low power consumption (in the milliwatt range)…”
This really fascinating interview (there is also a podcast) can be found here.
The many colours of hydrogen: now a further colour is being added, namely white. This is hydrogen that forms naturally and can be extracted like natural gas. In Europe there are deposits in Scandinavia and, to a limited extent, in Germany as well. The extraction costs are only 20% of the costs incurred in electrolysis. In WELT, Daniel Wetzel discusses this hydrogen. The article is behind a paywall.
Before I start, I must confess that I am no Sherlock Holmes. What is more, my understanding of virology extends no further than is to be expected after having caught influenza more than once. Nevertheless, such experience alone should be sufficient to instil a healthy fear of what SARS-CoV-2 may do to an ailing and aging male body – no matter how sceptical that body may be. But when one witnesses and experiences the civic and economic damage that a government is prepared to inflict upon its people in order to manage a pandemic, the fear can become anything but healthy.
Given such mental health challenges, one certainly would not welcome any further distress arising from the simple desire to understand the case statistics upon which governments are basing their decision-making. Unfortunately, that is exactly the position I am in. There are things I think I know for certain, and there are things that have happened that appear to flatly contradict those certainties. This is all very destabilizing. I’ll start, if I may, with the widely understood certainties, after which you are invited to follow me down the rabbit hole.
Firstly, when interpreting a medical diagnostic test result, one has to take into account the possibility of false negatives (i.e. tests that fail to detect the presence of a disease) and false positives (i.e. tests that record the presence of the disease, notwithstanding its absence). These are respectively referred to as the sensitivity and specificity of the test. RT-PCR testing is no exception to this rule. Indeed, The Lancet has advised that the specificity of RT-PCR testing is such that between 0.8% and 4% of positive test results are likely to be false positives. When the a priori probability of the disease is high (for example, when testing those who are presenting symptoms or have been in contact with a confirmed case) the number of false positives will be significantly exceeded by true positives, and so a positive test result is highly significant. However, once testing becomes more random, the a priori probability drops and the false positives start to dominate, to the extent that the test results become pretty meaningless. All of this is very uncontroversial; it is just standard Bayesian statistics and a reminder of the dangers of base rate neglect. Indeed, the British Medical Journal has produced an online tool that enables anyone to try various a priori probabilities to see how this affects the reliability of RT-PCR test results.
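In the same spirit as the BMJ tool, a few lines of code are enough to see base rate neglect in action. This is a minimal sketch; the prevalence, sensitivity and specificity values are purely illustrative, with the specificity taken from the optimistic end of the range quoted above.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem: probability that a positive test result is a true positive."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative values: 0.1% prevalence (random community testing), 95%
# sensitivity, and 99.2% specificity (i.e. the optimistic 0.8%
# false-positive rate from the range quoted above).
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.95, specificity=0.992)
print(f"Chance a positive result is genuine: {ppv:.1%}")  # roughly 11%
```

At that prevalence, nearly nine out of ten positive results would be false, which is what makes the claim below so remarkable: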
“We know the specificity of our test must be very close to 100%”
Their logic was impeccable. If, as they claimed, only 159 positive test results were found in a sample of 208,000, then the least that the specificity could be was 99.92% — a full order of magnitude more specific than the most optimistic figure quoted by Lancet. Given the random nature of the ONS testing, and the relatively low prevalence of Covid-19 within the broader community, the specificity suggested by Lancet would have meant encountering far more false positive test results than genuine ones, and it seems more than a little convenient to me that this had not proven to be the case with the ONS survey. Even more puzzling was the apparent lack of curiosity within the scientific and journalistic communities. Rather than question these results, everyone seemed happy to assume that the ONS was using some especially accurate test technology, despite there being nothing on the ONS website to justify such an assumption. On the contrary, the ONS academic partners have confirmed there was nothing out of the ordinary about their testing arrangements:
“The nose and throat swabs are sent to the National Biosample Centre at Milton Keynes. Here, they are tested for SARS-CoV-2 using reverse transcriptase polymerase chain reaction (RT-PCR). This is an accredited test that is part of the national testing programme.”
On the face of it, a team of top-class statisticians were working back from their data to deduce a test specificity that flew in the face of all of the known science regarding RT-PCR testing, and no one seemed the least bit concerned about this.
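To put the contradiction in numbers: the ONS figures bound the false-positive rate from above, while even the most optimistic Lancet figure predicts an order of magnitude more false positives than the survey found positives in total:

\[
1 - \frac{159}{208{,}000} \approx 99.92\%, \qquad
208{,}000 \times 0.008 \approx 1{,}664\ \text{expected false positives}.
\]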
Normally, in these circumstances, it is safe to assume that one is missing something very significant. It would only require someone to point out my mistake and I would be able to move on, albeit somewhat chastened and embarrassed. I have tried to resolve the mystery myself, but the best I have come up with is the rather outlandish theory that the ONS sample size of 208,000 was completely misleading. If (let’s say, due to quality control problems) the effective number was nearer to 50,000, then the small number of positive results can still be reconciled with the expected Covid-19 prevalence and a more plausible RT-PCR specificity. But other than to point to the fact that survey participants from 12 years old upwards were allowed to self-administer the swabs, I could think of no credible excuse for assuming that such a catastrophic failure in quality control had taken place. I had no alternative but to live with the prima facie contradiction and get on with life. But then I came across the New Zealand Ministry of Health’s Covid-19 statistics.
If New Zealand is to be believed, by early May, only 25 of its 1,138 Covid-19 cases had been asymptomatic. That represents only 2.2% of the cases, and it contrasts sharply with the statistics arising in other countries (e.g. 40% in US nursing homes and 90% in Northumbria University). Just as problematic is the fact that the New Zealand figures were determined as a result of extensive community testing, i.e. circumstances where false positives would be certain to dominate the asymptomatic Covid-19 headcount, and single-handedly account for far more than 25 individuals. Not only does New Zealand owe the world an explanation for its low asymptomatic count, it also needs to explain how, like the UK’s ONS, they were able to achieve near 100% specificity with RT-PCR testing. Furthermore, there is this online statement to be accounted for:
“When tests were done on samples without the virus, the tests correctly gave a negative result 96% of the time.”
This is a far from impressive specificity, and one which should result in a significant false positive problem for the NZ Ministry of Health to deal with. And yet, only a couple of paragraphs later they say:
“We expect very few (if any) false positive test results…”
And yet, despite this completely illogical expectation, they are proven correct? This is beginning to make the ONS conundrum look perfectly straightforward in comparison.
I trust that you can now see why I should be left so utterly confused. Two organisations that we should presume to be above reproach are making statements that just do not add up. It is no wonder that I am beginning to doubt my own rationality and powers of comprehension. I am hugely sceptical regarding the ONS and New Zealand figures but I feel obliged to be simultaneously sceptical of my own scepticism. Sir Arthur Conan Doyle famously believed in fairies, so I ought to feel in good company. However, I can’t help but suspect that entertaining such cognitive dissonance for any length of time is the sure path to madness. If someone doesn’t rush to my rescue soon and point out where I am going wrong I may end up in an institution listening to the sceptical voices in my head.
NWS Kansas meteorologists warn of a “widespread killing freeze” after unprecedented October cold and snow laid waste to the record books.
As reported by kansas.com, the National Weather Service (NWS) in Wichita issued a winter weather advisory on Sunday running through 1 a.m. Tuesday for central, south-central and southeast Kansas. The forecast called for snow, sleet and freezing rain: “Plan on slippery road conditions,” reads the advisory. “The hazardous conditions will impact the morning and evening commutes.”
The city wasn’t able to pre-treat its roads with salt on Sunday due to wet conditions, but efforts belatedly began in the early hours of Monday: “We did activate our full response as of midnight,” Ben Nelson, a city public works administrator, said Monday morning. “Once our crews got on scene, we deployed all 60 of our trucks and began to apply the salt and the sand mix across all 1,500 lane miles of arterial (roads) and the 300 lane miles of our secondary and school routes,” Nelson said.
The flakes started falling early Monday morning, as forecast — however, that original NWS advisory vastly underestimated the volume. The snow continued throughout the morning, to levels far greater than city crews had expected.
So much snow fell that city workers needed to use the plows on the front of the dump trucks to clear the roads, something crews try to avoid because 1) it significantly slows down the trucks, and 2) it runs the risk of scraping the already applied salt off the road.
After initially forecasting just a trace, the NWS “officially” measured 1.3 inches of snow as of 10:50 a.m. Monday, although the scene on the ground looked far worse in places. Still, that official reading of 1.3 inches almost tripled the previous Oct 26 record of 0.5 inches set way back in 1913 (solar minimum of cycle 14).
Monday’s snow also set another, even more impressive record. According to an NWS tweet, Monday witnessed “the most snow Wichita has ever received this early in the season.”
This beat out the previous earliest 1+ inch of snow, set on Oct 28, 1905:
Record cold accompanied the record snow, further hampering city clearing efforts. Monday morning’s low of 24F broke the Wichita record for the coldest-ever low for the date — the old mark being the 25F set in 1957.
The city also broke its lowest-max record for Oct 26, busting the previous mark of 32F, also set in 1957 — though this record has yet to be officially logged.
Looking forward, the NWS Wichita hazardous weather outlook predicts “a widespread killing freeze” Monday night, to be followed by a wintry mix of precipitation across much of the area on Tuesday, continuing into early Wednesday morning.
Additional snow and ice accumulations are possible through Wednesday afternoon, and as kansas.com points out: “Any measurable snowfall on Tuesday in Wichita would set a record, as the weather service has never recorded snow accumulations on Oct 27. The record low temperature of 23 degrees, set in 1957, and the coolest high of 37 degrees, set in 1911, are both in jeopardy.”
Any change is bad? According to a new study, warming in cold climates or cooling in warm climates increases the risk of animals getting sick, which in turn increases human exposure to dangerous new pathogens.
Global warming likely to increase disease risk for animals worldwide
Date: November 23, 2020
Source: University of Notre Dame
Summary: Changes in climate can increase infectious disease risk in animals, researchers found — with the possibility that these diseases could spread to humans, they warn.
The study, conducted by scientists at the University of Notre Dame, University of South Florida and University of Wisconsin-Madison, supports a phenomenon known as “thermal mismatch hypothesis,” which is the idea that the greatest risk for infectious disease in cold climate-adapted animals — such as polar bears — occurs as temperatures rise, while the risk for animals living in warmer climates occurs as temperatures fall.
The hypothesis proposes that smaller organisms like pathogens function across a wider range of temperatures than larger organisms, such as hosts or animals.
“Understanding how the spread, severity and distribution of animal infectious diseases could change in the future has reached a new level of importance as a result of the global pandemic caused by SARS-CoV-2, a pathogen which appears to have originated from wildlife,” said Jason Rohr, co-author of the paper published in Science and the Ludmilla F., Stephen J. and Robert T. Galla College Professor and chair of the Department of Biological Sciences at Notre Dame. “Given that the majority of emerging infectious disease events have a wildlife origin, this is yet another reason to implement mitigation strategies to reduce climate change.”
Divergent impacts of warming weather on wildlife disease risk across climates
Jeremy M. Cohen, Erin L. Sauer, Olivia Santiago, Samuel Spencer, Jason R. Rohr
Climate change alters disease risks
Climate change appears to be provoking changes in the patterns and intensity of infectious diseases. For example, when conditions are cool, amphibians from warm climates experience greater burdens of infection by chytrid fungus than hosts from cool regions. Cohen et al. undertook a global meta-analysis of 383 studies to test whether this “thermal mismatch” hypothesis holds true over the gamut of host-pathogen relationships. The authors combined date and location data with a selection of host and parasite traits and weather data. In the resulting model, fungal disease risk increased sharply under cold abnormalities in warm climates, whereas bacterial disease prevalence increased sharply under warm abnormalities in cool climates. Warming is projected to benefit helminths more than other parasites, and viral infections showed less obvious relationships with climate change.
The researchers’ inference that distress experienced by animals during unusual weather conditions can tell you anything about the impact of climate change seems dubious.
Why would animals that withstand seasonal temperature variations of tens of degrees suddenly all sicken because of a rate of climate change that can barely be detected?
Even on the edge of the tropics where I live, winter is around 5-10C colder than summer.
A proportion of animals are always at the edge of their range; they continuously move about and probe new ranges. It seems a big leap to infer that the gradual global warming we are experiencing would significantly increase the number of animals experiencing range distress. Global warming of 0.1C per decade is the climatic equivalent of moving south a few miles every year. Even a mouse can out-walk climate change.
Deluded climate miserablists discover that the infinite money tree their doom-laden dogmas demand doesn’t exist. The tidal wave of debt now coming in takes precedence over far-fetched assertions about human-caused weather events. – – – Outraged climate activists are accusing Rishi Sunak, the UK Chancellor, of eroding Boris Johnson’s plans for a ‘green industrial revolution’.
In his so-called Spending Review, Rishi Sunak, the UK Chancellor, yesterday announced that Britain’s ‘economic emergency has only just begun’ and that it will negatively affect Britain’s finances for decades to come.
Obviously, Sunak hardly mentioned the climate issue at all.
The Spending Review and its relegation of green issues to the bottom of priorities confirms reports that the Treasury is at odds with Boris Johnson’s green hobby horse.
Ten days ago, the Observer reported on the growing row over Boris’s green agenda.
Fast-charging of electric batteries can ruin their capacity after just 25 charges, researchers have said, after they ran experiments on batteries used in some popular electric cars.
High temperatures and resistance from fast charging at commercial stations can cause cracks and leaks, said the engineers from the University of California, Riverside.
The team charged one set of discharged lithium-ion batteries using the same industry fast-charging method found at motorway stations.
The researchers also charged a set using a new fast-charging algorithm based on the battery’s internal resistance, which interferes with the flow of electrons. The internal resistance of a battery fluctuates according to temperature, charge state, battery age and other factors. High internal resistance can cause problems during charging.
The algorithmic charging method – known as internal resistance charging – is adaptive, learning from the battery by checking its internal resistance during charging. It rests when internal resistance kicks in, to prevent loss of charge capacity.
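The article does not reproduce the algorithm itself, but the description above suggests a control loop along the following lines. This is a toy sketch only: the cell model, the threshold and the recovery behaviour are all invented for illustration and bear no relation to the researchers’ actual implementation.

```python
import random

# Toy illustration of adaptive ("internal resistance") charging.
# All numbers are invented; this is not the researchers' code.
R_THRESHOLD = 0.15   # ohms: rest whenever resistance exceeds this
CHARGE_STEP = 0.02   # fraction of capacity added per charging pulse

class ToyCell:
    def __init__(self):
        self.soc = 0.0           # state of charge, 0..1
        self.resistance = 0.05   # ohms

    def apply_pulse(self):
        self.soc = min(1.0, self.soc + CHARGE_STEP)
        # Charging and the associated heating drive resistance up (toy model).
        self.resistance += random.uniform(0.0, 0.01)

    def rest(self):
        # Resting lets the cell cool, so resistance partially recovers.
        self.resistance = max(0.05, self.resistance - 0.03)

def adaptive_fast_charge(cell):
    """Charge at full rate, but back off whenever internal resistance kicks in."""
    pulses = rests = 0
    while cell.soc < 1.0:
        if cell.resistance > R_THRESHOLD:
            cell.rest()
            rests += 1
        else:
            cell.apply_pulse()
            pulses += 1
    return pulses, rests

pulses, rests = adaptive_fast_charge(ToyCell())
print(f"charged in {pulses} pulses with {rests} rest periods")
```

The design point is simply that the charger treats rising internal resistance as a signal to pause rather than to force more current through a hot cell.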
For the first 13 charging cycles, the battery storage capacities for both charging techniques reportedly remained similar. After that, however, the industry fast-charging technique caused capacity to fade much faster – after 40 charges the batteries only had 60% of their storage capacity.
At 80% capacity, rechargeable lithium-ion batteries have reached the end of ‘use life’ for most purposes. Batteries charged using the industry method reached this point after 25 charging cycles, while batteries charged with internal resistance charging were good for 36 cycles.
“Industrial fast-charging affects the lifespan of lithium-ion batteries adversely because of the increase in the internal resistance of the batteries, which in turn results in heat generation,” said doctoral student and co-author Tanner Zerrin.
Even worse effects came after 60 charging cycles using fast industry charging. Electrodes and electrolytes were exposed to the air, increasing the risk of fire or explosion. High temperatures of 60°C accelerated the damage and the risk.
“Capacity loss, internal chemical and mechanical damage, and the high heat for each battery are major safety concerns,” said researcher Mihri Ozkan.
Internal resistance charging reportedly resulted in much lower temperatures and no damage.
“Our alternative, adaptive, fast-charging algorithm reduced capacity fade and eliminated fractures and changes in composition in the commercial battery cells,” said researcher Cengiz Ozkan.
The technique could be used to improve safety and lifespan of car batteries.
The researchers have applied for a patent on the algorithm, which could be licensed by battery and car manufacturers. In the meantime, the team recommended minimising the use of commercial fast chargers, recharging before the battery is completely drained, and preventing overcharging.
It is not clear how fast the “fast chargers” tested in the study are. The researchers talk of motorway service station standards, which in the UK tend to be 50 kW. I don’t know if the US is much different.
However, their trials suggest two-hour charging, which would imply 50 kW as well.
If so, this will be a huge blow for anybody who needs to use public chargers regularly. You may be able to get away with the occasional rapid charge when you go on a long trip. But for those unable to charge at home, or who travel long distances regularly, anything slower than 50 kW is a non-starter.
Below is the chart from the study, showing how rapidly the battery capacity deteriorates. Even the algorithmic charging method (IR) loses capacity quickly, dipping well below the 80% benchmark after about 40 cycles.
This whole saga highlights how electric cars are being pushed forward with no thought for the knock-on problems.
In the normal world, technologies only take off once the obstacles have been resolved.
According to news reports, a large part of the billions of pounds in subsidies paid by UK households for the construction of the Dogger Bank offshore wind farm will go to factories in Poland and Belgium.
The contract for manufacture and supply of monopiles and transition pieces has gone to Smulders, the Belgian subsidiary of Eiffage Métal, as part of a consortium with Sif (a Dutch company specialised in offshore foundations).
As a result, approximately 260,000 tonnes of steelwork for the first two phases of the Dogger Bank offshore wind farm project in England will be produced in Smulders’ facilities in Poland and Belgium.
The contract is subject to financial close on the two phases, which is expected soon.
The Dogger Bank wind farm, a joint venture between SSE Renewables and Equinor, will be erected in the North Sea, 130 km off the Yorkshire coast of England. At 3.6 GW, it will be the largest offshore wind farm in the world, and is being developed in three phases: Dogger Bank A, B and C.
The first two phases, Dogger Bank A and B, will require 190 foundations in total. Each foundation comprises a monopile and a transition piece in water depths varying from 18 to 63 metres.
For this contract, Smulders will manufacture the secondary steel of the transition pieces, and will assemble, coat and test the fully equipped transition pieces. Sif will manufacture and supply the monopiles and primary steel for the transition pieces, and marshal all foundation components.
Production in Smulders’ facilities in Poland and Belgium will begin in May 2021. The assembly, which will be done at the Belgian Hoboken facility, is scheduled to last approximately 10 months. The first phase, Dogger Bank A, is expected to be operational in 2023.
A new analysis by Drs. Wijngaarden and Happer (2020) suggests the “self-interference” saturation of all greenhouse gases in the current atmosphere substantially reduces their climate forcing power.
At current concentrations, the forcing power of greenhouse gases like CO2 (~400 ppm) and CH4 (1.8 ppm) is already saturated. Therefore, even doubling the current greenhouse gas concentrations may only increase their forcings “by a few percent” in the parts of the atmosphere where there are no clouds. When clouds are present, the influence of greenhouse gases is minimized even further.
While the “consensus” model view is that doubling CO2 from 280 ppm to 560 ppm results in a surface forcing of 3.7 W/m², Wijngaarden and Happer find doubling CO2 concentrations from 400 to 800 ppm increases climate forcing by 3 W/m². This warms the surface by 1.4 K as it “hypothetically” cools the upper atmosphere by 10 K.
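For reference, the “consensus” 3.7 W/m² figure quoted above comes from the widely used simplified logarithmic forcing expression of Myhre et al. (1998), which for any doubling of CO2 gives:

$$\Delta F = 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}} = 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}$$

Wijngaarden and Happer’s 3 W/m² for the 400–800 ppm doubling therefore sits roughly 20% below the standard approximation.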
Equilibrium climate sensitivity (when positive feedback with water vapor is included) is identified as 2.2 K, which differs by only about 10% from multiple other analyses.
Outraged climate activists are accusing Rishi Sunak, the UK Chancellor, of eroding Boris Johnson’s plans for a ‘green industrial revolution’.
In his so-called Spending Review yesterday, the Chancellor announced that Britain’s ‘economic emergency has only just begun’ and will weigh on the nation’s finances for decades to come. Unsurprisingly, Sunak hardly mentioned the climate issue at all.
The Spending Review and its relegation of green issues to the bottom of the priority list confirms reports that the Treasury is at odds with Boris Johnson’s green hobby horse. Ten days ago, the Observer reported on the growing row over Boris’s green agenda.
Boris Johnson’s plans to relaunch his premiership with a blitz of announcements on combating climate change and the creation of tens of thousands of new green jobs are meeting stiff resistance from the cash-strapped Treasury, the Observer has been told.
Senior figures in Whitehall and advisers to the government on environmental issues say negotiations on the content of a major environmental speech by the prime minister are still ongoing between No 10, the Treasury and the Department for Business, Energy and Industrial Strategy with just days to go before Johnson delivers the keynote address. […]
But many of these pledges involve long-term financial commitments of funding and subsidy which the Treasury is reluctant to make until the extent of the bills from the Covid crisis are better known.
“The Treasury is fighting back hard against a lot of the green plans and there is a battle going on with No 10,” said a source close to the talks. “The PM wants to get on with it, with plans for the long term, but he is meeting a lot of resistance. You would expect that from the Treasury but with Covid it is of another order.”
With the economic and financial crisis accelerating and an astronomical debt mountain building up fast, it is becoming absolutely obvious that Britain won’t have the finances for years to come to splash out on costly Net Zero plans.
Even the BBC’s in-house climate campaigners are beginning to realise that the financial constraints will ultimately lead to further dilution, delays and U-turns.
The UK chancellor’s Spending Review has been accused of undermining the prime minister’s “green” vision by pushing ahead with a £27bn roads programme.
After several speeches in which Boris Johnson pledged to rescue the economy by “building back greener”, Rishi Sunak’s speech on Wednesday barely mentioned the climate.
He said he was pursuing the nation’s priorities.
Mr Sunak put detailed numbers on the PM’s recent green technology plan.
But he offered no increase on the £12bn Mr Johnson says the government has mobilised to tackle climate change – even though the sum is much less than what’s been agreed in France, Germany and elsewhere.
Environment groups are most angry at the roads programme. The chancellor said it would ease congestion, improve commute times and “keep travel arteries open.” It was essential, he said, because people are shunning public transport during the Covid-19 pandemic.
Campaigners said it would attract more traffic and increase emissions, when the PM says they should be falling.
Friends of the Earth’s Mike Childs said: “He (Mr Sunak) has completely undermined the Prime Minister.
“With billions of pounds earmarked for a climate-wrecking road-building programme and inadequate funding for home insulation, eco-heating, buses and cycling this strategy falls woefully short.
“We need to head off the climate emergency. Ministers must ensure every major development is in line with meeting the net zero target.”
The union boss Manuel Cortes, general secretary of the Transport Salaried Staffs’ Association (TSSA), accused the government of “abandoning all pretence of ambition over decarbonisation”.
He said: “The Spending Review was a moment to unleash the green economic revolution, but Sunak failed.
“Instead of grasping the nettle and resetting our country on an economic course based around green jobs and investment – we had barely a mention on the climate crisis we face.”
Just when you thought “STEVE” couldn’t get any weirder, a new paper published in the journal AGU Advances reveals that the luminous purple ribbon is often accompanied by green cannonballs of light that streak through the atmosphere at 1000 mph.
Below is an abridged version of Dr. Tony Phillips’ excellent article, available on his equally excellent website spaceweatherarchive.com, dated November 22, 2020.
STEVE (Strong Thermal Emission Velocity Enhancement) is a relatively recent discovery, first spotted and photographed by Canadian citizen scientists around 10 years ago. It looks like an aurora, but it is not. The purple glow is caused by hot (3000 °C) rivers of gas flowing through Earth’s magnetosphere at more than 13,000 mph. This distinguishes it from auroras, which are ignited by energetic particles raining down from space.
“Citizen scientists have been photographing these green streaks for years,” says Joshua Semeter of Boston University, lead author of the new paper. “Now we’re beginning to understand what they are.”
There is a dawning realization that STEVE is more than just a purple ribbon, as photographers routinely catch it flowing over a sequence of green vertical pillars known as the “picket fence” (example shown below).
These aren’t auroras either.
And now, Semeter’s team has identified yet another curiosity. “Beneath the picket fence, photographers often catch little horizontal streaks of green light,” explains Semeter. “This is what we studied in our paper.”
Semeter’s paper, entitled “The Mysterious Green Streaks Below STEVE”, involved gathering as many images of these little horizontal streaks as possible, and citizen scientists across North America and New Zealand were only too happy to help.
In a few cases, the same streaks were captured by widely separated photographers, allowing a triangulation of their position.
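The geometry behind such a triangulation is simple. Below is a minimal sketch of the two-station calculation; the baseline, elevation angles and flat-ground approximation are illustrative assumptions, not values from the paper.

```python
import math

def altitude_from_two_sites(baseline_km, elev1_deg, elev2_deg):
    """
    Two observers at either end of a baseline both sight the same
    feature, assumed to lie in the vertical plane between them.
    With elevation angles a1 and a2 above the horizon:
        d = h/tan(a1) + h/tan(a2)  =>  h = d / (1/tan(a1) + 1/tan(a2))
    Flat-ground approximation; real work must correct for Earth curvature.
    """
    a1 = math.radians(elev1_deg)
    a2 = math.radians(elev2_deg)
    return baseline_km / (1.0 / math.tan(a1) + 1.0 / math.tan(a2))

# Illustrative numbers: a 300 km baseline and two measured 35-degree
# angles recover an altitude close to the ~105 km the study reports.
print(f"altitude ≈ {altitude_from_two_sites(300.0, 35.0, 35.0):.0f} km")
```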
Upon analyzing dozens of high-quality images, the researchers came to these three conclusions:
1. The streaks are not in fact streaks at all: they are point-like balls of gas moving horizontally through the sky. In photos, the ‘green cannonballs’ are smeared into streaks by the cameras’ exposure times (a rough check of this follows the list below).
2. The cannonballs are typically 350 meters wide, and located about 105 km above Earth’s surface.
3. The color of the cannonballs is pure green – much more so than ordinary green auroras, reinforcing the conclusion that they are different phenomena.
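As a rough check on conclusion 1 (the exposure time here is an assumed, typical value, not one from the paper): at the quoted 1000 mph a cannonball covers about 447 metres every second, so a few-second night-sky exposure smears it across well over a kilometre of sky:

$$447\ \mathrm{m/s} \times 3\ \mathrm{s} \approx 1.3\ \mathrm{km}$$

That is several times the 350 m diameter given in conclusion 2, which is exactly why a point-like ball photographs as a streak.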
So, what exactly are STEVE’s green cannonballs?
Semeter and his team believe they are a sign of turbulence: “During strong geomagnetic storms, the plasma river that gives rise to STEVE flows at extreme supersonic velocities. Turbulent eddies and whirls dump some of their energy into the green cannonballs.”
This idea may explain their prevalence of late: given the ongoing waning of Earth’s magnetic field (thought to be tied to a Grand Solar Minimum and Pole Shift), geomagnetic storms could well be having a bigger impact closer to the ground, with streams of plasma penetrating deeper into Earth’s atmosphere.
Semeter’s musings may also explain their pure color, writes Dr. Phillips. Auroras tend to be a mixture of hues caused by energetic particles raining down through the upper atmosphere. The ‘rain’ strikes atoms, ions, and molecules of oxygen and nitrogen over a wide range of altitudes. A hodge-podge of color naturally results from this chaotic process. STEVE’s cannonballs, on the other hand, are monochromatic. Local turbulence excites only oxygen atoms in a relatively small volume of space, producing a pure green at 557.7 nm; there is no mixture.
“It all seems to fit together, but we still have a lot to learn,” concludes Semeter. “Advancing this physics will benefit greatly from the continued involvement of citizen scientists.”
According to the mainstream position, however, when it comes to science, “you must not do your own research”. By this logic, citizen scientists around the world should immediately cease all endeavors and their groups be disbanded.
But why? What do the powers-that-be deem so threatening and dangerous with people exploring the reality around them, and what business is it of theirs how a person searches for the truth?
The elite’s plan, it now seems obvious, is to herd society down one very specific thought-path, a path which offers a very narrow set of answers to all life’s questions and problems. People are misled into thinking science is definitive, that there is one simplistic answer to each and every query – but science doesn’t work on consensus, and even the most seemingly far-reaching hypothesis, if properly and honestly devised, has about as much chance of being proven correct as any widely-held theory.
Nothing at all is settled.
Questioning everything is key, and mistakes are part-and-parcel.
This is science.
Toronto obliterated a daily snowfall record on Sunday and finished 3rd on the list of snowiest November days on record.
On the back of its warning that the Northwest Territories will suffer a “colder-than-average winter” with “more snow,” Environment and Climate Change Canada (ECCC) reports that 19.4 cm (7.64 inches) of powder fell at Pearson airport Sunday, which shattered the previous Nov. 22 record of 7.6 cm (2.99 inches) set in 2007 (solar minimum of cycle 23).
For a similarly significant amount of November snowfall you have to flip the record books back 80 years, to Nov. 26, 1940, when the city saw 16.5 cm (6.5 inches). Sunday’s totals finished 3rd on the list of snowiest November days on record, behind Nov. 30, 1940, and Nov. 24, 1950.