Toronto shatters daily snowfall record

Almost triple the previous daily record, and almost triple the average snowfall for the entire month of November.

Some 19.4 cm of snow fell on Toronto on Sunday. This shattered the previous record for the date (Nov. 22) of 7.6 cm, set in 2007.

Snow started falling early Sunday morning and began accumulating in some areas rather quickly. By 1 p.m. Brampton already had 8-11 cm, with totals of close to 25 cm in the northeast area of the city.

The typical first 10cm snowfall of the winter is in mid-December. The earliest November snowfall of 10cm or more was seen on Nov. 2, 1966.

The average November snowfall is only 7.5 cm. By Monday morning, snowfall accumulation in Toronto had reached 21 cm, almost triple the average snowfall for the entire month of November.

Toronto shatters a daily snowfall record on Sunday

Thanks to Clay Olson for this link

via Ice Age Now

https://ift.tt/2HB9AcC

November 24, 2020 at 01:18PM

The U.S. National Temperature Index, is it based on data? Or corrections?

By Andy May

The United States has a very dense network of weather stations; data from them are collected and processed by NOAA/NCEI to compute the National Temperature Index. The index is an average temperature for the nation and is used to show whether the U.S. is warming. The data are stored by NOAA/NCEI in their GHCN, or “Global Historical Climatology Network,” database. GHCN-Daily contains the quality-controlled raw data, which is subsequently corrected and then used to populate GHCN-Monthly, a database of monthly averages, both raw and final. I downloaded version 4.0.1 of the GHCN-Monthly database on October 10, 2020. At that time, it had 27,519 stations globally, and 12,514 (45%) of them were in the United States, including Alaska and Hawaii. Of the 12,514 U.S. stations, 11,969 are in “CONUS,” the conterminous lower 48 states. The current station coverage is shown in Figure 1.

Figure 1. The GHCN weather station coverage in the United States is very good, except for northern Alaska. There are two stations in the western Pacific that are not shown.
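
For readers who want to reproduce the station counts, a minimal R sketch is below. It assumes only that the GHCN-M v4 inventory file has been downloaded (the file name is a placeholder) and that, per the v4 documentation, the first 11 characters of each line are the station ID, whose first two characters give the country code.

```r
# Count GHCN-Monthly v4 stations from the station inventory file.
# The file name is a placeholder; the only layout assumption is that
# columns 1-11 hold the station ID and that its first two characters are
# the country code ("US" includes Alaska and Hawaii).
inv_file <- "ghcnm.tavg.v4.0.1.qcu.inv"   # placeholder local file name

inv_lines <- readLines(inv_file)
ids       <- substr(inv_lines, 1, 11)
country   <- substr(ids, 1, 2)

length(ids)            # total stations (27,519 in the copy described above)
sum(country == "US")   # U.S. stations, including Alaska and Hawaii (12,514)
```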

We have several questions about the land-based temperature record, which dominates the long-term (~170-year) global surface temperature record. The land-based measurements dominate because sea-surface temperatures are very sparse until around 2004 to 2007, when the ARGO network of floats became complete enough to provide good data. Even in 2007, the sea-surface gridding error was larger than the detected ocean warming.

Ocean Warming

We have estimated that the oceans, which cover 71% of the Earth’s surface, are warming at a rate of 0.4°C per century, based on the least squares linear trend shown in Figure 2. This is a very rough estimate, based only on data from 2004 to 2019 and on temperatures from the upper 2,000 meters of the oceans. The data before 2004 are so sparse we didn’t want to use them. The error in this estimate is roughly ±0.26°C from the surface to 2,000 meters and unknown below that.

Argo measurements of ocean temperature at 2,000 meters are a fairly constant 2.4°C. So, we assumed a temperature of 0.8°C at the average ocean depth of 3,688 meters (12,100 feet) and below. For context, the freezing point of seawater at 2900 PSI (roughly 2,000 meters or 2,000 decibars) is -17°C. The value of 0.8°C is from deep Argo data as described by Gregory Johnson and colleagues (Johnson, Purkey, Zilberman, & Roemmich, 2019). There are very few measurements of deep ocean temperatures and any estimate has considerable possible error (Gasparin, Hamon, Remy, & Traon, 2020). The anomalies in Figure 2 are based on those assumptions. The calculated temperatures were converted to anomalies from the mean of the ocean temperatures from 2004 through 2019. The data used to make Figure 2 is from Jamstec. An R program to read the Jamstec data and plot it can be downloaded here; the zip file also contains a spreadsheet with more details. Our calculations suggest an overall average 2004-2019 ocean temperature of 4.6°C.

Figure 2. A plot of the global grid of ocean temperatures from JAMSTEC. It is built mostly from ARGO float and Triton buoy data. Jamstec is the source of the grid used to compute these anomalies.
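
For those following along in R, the anomaly and trend calculation behind Figure 2 boils down to something like the sketch below. The data frame and column names are hypothetical stand-ins for whatever the Jamstec-reading script produces; the substance is simply the conversion to anomalies from the 2004-2019 mean and a least-squares slope scaled to degrees per century.

```r
# Sketch: convert a monthly series of global-mean 0-2000 m ocean temperatures
# into anomalies from the 2004-2019 mean and estimate the warming rate.
# `ocean` is a hypothetical data frame with columns `date` (Date) and `tempC`
# (global average temperature, deg C), e.g. as produced by the Jamstec script.
trend_per_century <- function(ocean) {
  base  <- mean(ocean$tempC)                # 2004-2019 mean (~4.6 C per the text)
  anom  <- ocean$tempC - base               # the anomalies plotted in Figure 2
  years <- as.numeric(ocean$date) / 365.25  # time in (approximate) years
  fit   <- lm(anom ~ years)                 # least squares linear trend
  unname(coef(fit)["years"]) * 100          # deg C per century (~0.4 per the text)
}
```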

Observed ocean warming is not at all alarming and is quite linear, showing no sign of acceleration. The oceans contain 99.9% of the thermal energy (“heat”) on the surface of the Earth; the atmosphere contains most of the rest. This makes it hard for Earth’s surface to warm very much, since the oceans act as a thermal regulator. Various calculations and constants regarding the heat stored in the oceans and atmosphere are in a spreadsheet I’ve prepared here. References are in the spreadsheet. The oceans control warming with their enormous heat capacity, the amount of thermal energy required to raise the average ocean temperature one degree. The thermal energy required to raise the temperature of the atmosphere 1,000 degrees C would only raise the average ocean temperature one degree.
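
That last ratio is easy to check with round numbers. The masses and specific heats below are common textbook values, not figures taken from the spreadsheet, so treat this as an order-of-magnitude sketch only.

```r
# Rough check of the ocean-to-atmosphere heat capacity ratio.
# Round textbook values, not the spreadsheet's numbers.
ocean_mass <- 1.4e21   # kg, approximate total ocean mass
atmos_mass <- 5.1e18   # kg, approximate total atmospheric mass
cp_ocean   <- 3990     # J/(kg K), approximate specific heat of seawater
cp_atmos   <- 1005     # J/(kg K), specific heat of air at constant pressure

(ocean_mass * cp_ocean) / (atmos_mass * cp_atmos)
# ~1000: the energy needed to warm the whole atmosphere by 1,000 degrees C
# would warm the oceans by roughly one degree.
```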

I only mention this because, while the land-based weather stations provide us with valuable information regarding the weather, they tell us very little about climate change. Longer term changes in climate require much more information than we currently have on ocean warming. That said, let us examine the GHCN data collected in the United States.

The GHCN station data
In the U.S., and in the rest of the world, the land-based weather stations comprise most of the average temperature record in the 19th and 20th centuries. Knowing how accurate they are, and how large the applied corrections are relative to the observed warming, is important. Lots of work has been done to document problems with the land-based data. Anthony Watts and colleagues documented numerous problems with station siting and equipment in 2011 with their surface stations project. Important information on this study by John Nielsen-Gammon can be seen here and here. The Journal of Geophysical Research paper is here. Many of the radical changes in NOAA’s U.S. temperature index, and in the underlying database, in the period between 2009 and 2014 are due to the work done by Watts and his colleagues, as described by NOAA’s Matthew Menne in his introductory paper on version 2 of the U.S. Historical Climatology Network (USHCN):

“Moreover, there is evidence that a large fraction of HCN sites have poor ratings with respect to the site classification criteria used by the U.S. Climate Reference Network (A. Watts 2008 personal communication; refer also to www.surfacestations.org).” (Menne, Williams, & Vose, 2009)

Menne et al. acknowledged Watts and his colleagues in their introductory paper on the revised USHCN network of stations, which suggests that the surface stations project was an important reason for the revision. USHCN was a high-quality subset of the full NOAA Cooperative Observer Program (COOP) weather station network. The USHCN stations were chosen based upon their spatial coverage, record length, data completeness and historical stability, according to Matthew Menne. A set of quality control checks and corrections were developed to clean up the selected records, and these are described in Matthew Menne and colleagues’ publications. The main paper is cited above in the boxed quote, but he also wrote a paper describing their Pairwise Homogenization Algorithm, abbreviated “PHA” (Menne & Williams, 2009a). Stations with problems were removed from USHCN as they were found and documented by Watts et al. As a result, the original 1,218 USHCN stations dwindled to ~832 by 2020. The dismantled stations were not replaced; the values were “infilled” statistically using data from neighboring stations.

In early 2014, the USHCN subset was abandoned as the source data for the National Temperature Index and replaced with a gridded instance of GHCN, but the corrections developed for USHCN were kept. They were simply applied to all 12,514 U.S. GHCN stations, rather than to the smaller 1,218-station (or fewer) USHCN subset.

NOAA appears to contradict this in another web page on GHCN-Daily methods. On this page they say that GHCN-Daily does not contain adjustments for historical station changes or time-of-day bias. But they note that GHCN-Monthly does. Thus, it seems that the corrections are done after extracting the daily data and while building the monthly dataset. NOAA does not tamper with the GHCN-Daily raw data, but when they extract it to build GHCN-Monthly, they apply some dramatic corrections, as we will see. Some NOAA web pages hint that the time-of-day bias corrections have been dropped for later releases of GHCN-Monthly, but most explicitly say they are still being used, so we assume they are still in use. One of the most worrying findings was how often, and how radically, NOAA appears to be changing their “correction” procedures.

The evolving U.S. Temperature Index
The current U.S. “National Temperature Index” draws data from five-kilometer grids of the GHCN-Monthly dataset. The monthly gridded dataset is called nClimGrid and is a set of map grids, not actual station data. The grids are constructed using “climatologically aided interpolation” (Willmott & Robeson, 1995). The grids are used to populate a monthly average temperature dataset called nClimDiv, and nClimDiv is used to create the index.

Currently, the NOAA base period for nClimDiv, USHCN, and USCRN anomalies is 1981-2010. We constructed our station anomalies, graphed below, using the same base period. We accepted all stations that had at least 12 monthly values during the base period and rejected stations with fewer. This reduced the number of CONUS stations from 11,969 to 9,307. No stations were interpolated or “infilled” in this study.
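
A minimal R sketch of that selection and anomaly rule is below. It is one reasonable reading of the rule as stated, not the code actually used for this post; the data frame `ghcn` and its columns are hypothetical.

```r
# Keep stations with at least 12 monthly values in the 1981-2010 base period,
# then form each station's anomalies from its own base-period monthly means.
# `ghcn` is a hypothetical data frame of monthly CONUS values with columns
# id, year, month, tavg (deg C).
in_base <- ghcn$year >= 1981 & ghcn$year <= 2010
counts  <- tapply(!is.na(ghcn$tavg[in_base]), ghcn$id[in_base], sum)
keep    <- names(counts)[counts >= 12]        # ~9,307 of 11,969 stations per the text

g  <- ghcn[ghcn$id %in% keep, ]
gb <- g[g$year >= 1981 & g$year <= 2010, ]    # base-period rows only

base_mean <- aggregate(tavg ~ id + month, data = gb, FUN = mean)
names(base_mean)[3] <- "base"

g <- merge(g, base_mean, by = c("id", "month"))
g$anom <- g$tavg - g$base                     # station monthly anomalies, 1981-2010 base
```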

Some sources have suggested that data from outside the GHCN-Daily dataset might be used to help build the nClimDiv monthly grids and temperature index, especially some nearby Canadian and Mexican monthly averages, but NOAA/NCEI barely mention this on their website. nClimDiv contains climate data, including precipitation and a drought index, as well as average monthly temperature. As mentioned above, the same corrections are made to the GHCN station data as were used in the older USHCN dataset. From the NOAA website:

“The first (and most straightforward) improvement to the nClimDiv dataset involves updating the underlying network of stations, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2)” source of quote: here.

Besides the new fully corrected GHCN-Monthly dataset and the smaller USHCN set of corrected station data, there used to be a third dataset, the original NOAA climate divisional dataset. Like GHCN-Daily and nClimDiv, this older database used the full COOP network of stations. However, the COOP data used in the older Climate Division dataset (called “TCDD” in Fenimore, et al.) was uncorrected. This is explained in a white paper by Chris Fenimore and colleagues (Fenimore, Arndt, Gleason, & Heim, 2011). Further, the data in the older dataset was simply averaged by climate division and state; it was not gridded like nClimDiv and USHCN. There are some new stations in nClimDiv, but most are the same as in TCDD. The major differences between the two datasets are the corrections and the gridding. Data from this earlier database is plotted as a blue line in Figures 6 and 7 below.

The simple averages used to summarize TCDD ignored changes in elevation, station moves, and other factors that introduced spurious internal trends (discontinuities) in many areas. The newer nClimDiv monthly database team claims to explicitly account for station density and elevation with their “climatologically aided interpolation” gridding method (Fenimore, Arndt, Gleason, & Heim, 2011). The methodology produces the fully corrected and gridded nClimGrid five-kilometer grid dataset.

nClimDiv is more useful since the gradients within the United States in temperature, precipitation and drought are more accurate and contain fewer discontinuities. But, as we explained in previous posts, when nClimDiv is reduced to a yearly conterminous U.S. (CONUS) temperature record, it is very similar to the record created from the older, official USHCN dataset, when both are gridded the same way. This may be because, while nClimDiv has many more weather stations, the same corrections are applied to them as were applied to the USHCN stations. While USHCN has fewer stations, they are of higher quality and have longer records. The additional nClimDiv stations, when processed the same way as the USHCN stations, do not change things, at least on a national and yearly level. As noted in a previous post, stirring the manure faster, with more powerful computers and billions of dollars, doesn’t really matter for widespread averages.

There are good reasons for all the corrections that NOAA applies to the data. The gridding process undoubtedly improves the usefulness of the data internally. Artificial mapping discontinuities are smoothed over and trends will be clearer. But the corrections and the gridding process are statistical in nature; they do nothing to improve the accuracy of the National Temperature Index. If a specific problem with a specific thermometer is encountered and fixed, accuracy is improved. If the cause is not known and the readings are “adjusted” or “infilled” using neighboring thermometers or a statistical algorithm, the resulting maps will look better, but they are no more accurate.

The move from USHCN to nClimDiv for the National Temperature Index
How much of the National Temperature Index trend is due to actual warming and how much is due to the corrections and the gridding method? How much error is in the final temperature anomaly estimates? Decades of criticism and NOAA’s revisions of the calculation have not answered this question or changed the result. Figure 3 shows the National Temperature Index, extracted from the NOAA web site on November 18, 2020. Both the USHCN and the nClimDiv computations are plotted. Remember the slope of the least squares line, 1.5°C per century, it will be important later in the post.

Figure 3. The nClimDiv and USHCN climate anomalies from the 1981-2010 average. The data was downloaded from their web page. Both datasets plotted are from grids, not station data. CONUS is an abbreviation for the lower 48 states, the conterminous states.

It has long been known that the National Temperature Index does not follow the underlying published data. Anthony Watts has reported this, as have Jeff Masters, Christopher Burt, and Ken Towe. The problems exist in both the GHCN data and in the USHCN data as reported by Joseph D’Aleo. Brendan Godwin suspects that the “homogenization” algorithms (see the discussion of PHA above) in use today are to blame. When the “corrected” data has a very different trend than the raw data, one should be skeptical.

Anthony Watts does not believe that the underlying problems with the full COOP network of weather stations have been fixed, as he explained here last year. He believes that NOAA is “sweeping the problem under the rug.” The data plotted in Figure 3 are fully corrected and gridded; they are not station data. In Figure 4 we plot the fully corrected station data in blue and the raw station data in orange from the CONUS portion of GHCN-Monthly. This is the same data used to build the nClimDiv curve plotted in Figure 3, but Figure 4 is actual station data.

Figure 4. The orange line is the uncorrected monthly mean temperature, which is “qcu” in NOAA terminology. The blue line is corrected, or NOAA’s “qcf.”

Figure 4 shows the actual measurements from the stations; these are not anomalies, and the data are not gridded. The raw data show CONUS cooling by 0.3°C per century, while the corrected data show CONUS warming by 0.3°C per century. These lines, like all the fitted lines in this post, are Excel least squares trend lines. The lines are merely there to identify the most likely linear trend in the data; thus the R2 is irrelevant, since we are not trying to demonstrate linearity.

The difference between the two curves in Figure 4 is shown in Figure 5. The slope of the difference is a warming trend of 0.57°C per century. This is the portion of the warming in Figure 3 directly due to the corrections to the measurements.

Figure 5. This plots the difference (Final-Raw) between the two actual station temperature curves in Figure 4. As you can see, the difference between the final and raw curve trends since 1890 is about 0.8°C, roughly the claimed warming of the world over that period.
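
The trend arithmetic behind Figures 4 and 5 reduces to fitting least-squares lines and scaling the slopes to degrees per century, roughly as sketched below. The data frame `conus` is a hypothetical stand-in for the CONUS means of the raw (“qcu”) and corrected (“qcf”) station data; the trend of the difference series is the warming attributable to the corrections alone.

```r
# Slopes of the raw, corrected, and difference (Final-Raw) series, in deg C
# per century. `conus` is a hypothetical data frame with columns year, raw,
# final holding CONUS mean temperatures from the GHCN-M qcu and qcf files.
slope_per_century <- function(y, year) {
  unname(coef(lm(y ~ year))["year"]) * 100
}

slope_per_century(conus$raw,   conus$year)              # ~ -0.3 per the text
slope_per_century(conus$final, conus$year)              # ~ +0.3 per the text
slope_per_century(conus$final - conus$raw, conus$year)  # ~ +0.57, the Figure 5 trend
```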

To many readers Figure 4 will look familiar. Steven Goddard’s Real Science blog published a 1999 NASA GISS version of the CONUS raw data anomalies in 2012. The dataset he used has since been deleted from the NASA website, but a copy can be downloaded here and is plotted in Figure 6, along with the current (October 2020) GHCN-M raw data. We are switching from the actual temperature measurements in Figure 4 to weather station anomalies from the 1981-2010 mean in Figure 6.

Figure 6. The 1999 NASA GISS raw CONUS temperature anomalies compared to the 2020 GHCN-M raw CONUS anomalies. The 1999 NASA anomalies are shifted down 0.32°C so the means from 1890 to 1999 match; this compensates for the baseline differences. Notice the least squares trends match very closely. Hansen’s data shows a warming trend of 0.25°C per century and the modern data shows warming of 0.26°C per century. The equations for the lines are in the legend. See the text for the data sources.

Both the current data and the 1999 data show about 0.25°C per century of warming. Figure 7 shows the same GISS 1999 raw data anomalies compared to the 2020 GHCN-M final temperature anomalies. All three plots suggest it was as warm in 1931 and 1933 in the conterminous U.S. states as it is today, or warmer. The various corrections applied to the raw data, and the conversion of actual temperatures into anomalies, lessen the difference between the 1930s and today, but they don’t eliminate it, at least not in the station data itself. When the data is gridded, as it was to make Figure 3, the trend is fully reversed, and modern temperatures are suddenly much warmer than in the 1930s. The 1999 data again show warming of 0.25°C per century, but the corrected data show warming of 0.6°C per century. This is very similar to the warming seen in Figure 5, which is the warming due to the corrections alone.

Figure 7. The 2020 GHCN-M final and fully corrected station data is compared to the 1999 NASA/GISS CONUS anomalies. The equations for the lines are in the legend.

The blue 1999 GISS anomaly lines in Figures 6 and 7 are identical; the orange line in Figure 6 is raw data, and the orange line in Figure 7 is final, corrected data. The largest corrections are applied to the earlier temperatures and the smallest to the most recent ones.

The WUWT resident wit, and all-around good guy, Dave Middleton, commented on this in 2016:

“I’m not saying that I know the adjustments are wrong; however anytime that an anomaly is entirely due to data adjustments, it raises a red flag with me.” Middleton, 2016

I agree, logic and common sense suggest Dave is correct to be skeptical.

James Hansen wrote about this issue in 1999:

“What’s happening to our climate? Was the heat wave and drought in the Eastern United States in 1999 a sign of global warming?

Empirical evidence does not lend much support to the notion that climate is headed precipitately toward more extreme heat and drought. The drought of 1999 covered a smaller area than the 1988 drought, when the Mississippi almost dried up. And 1988 was a temporary inconvenience as compared with repeated droughts during the 1930s “Dust Bowl” that caused an exodus from the prairies, as chronicled in Steinbeck’s Grapes of Wrath.” Source.

For once, I agree with James Hansen.

Zeke, at rankexploits.com, the “Blackboard,” tried to defend the corrections in 2014. Zeke tells us that USHCN and GHCN are first corrected for time-of-measurement bias (“TOB”), then the stations are compared to their neighbors, and a pairwise homogenization algorithm (PHA) is used to smooth out suspected anomalies. These are presumably due to station moves, changes in the station environment, or equipment changes. Finally, missing station data are filled in using neighboring stations as a guide. The last step to make nClimDiv is to grid the data.

Zeke notes that the TOB and PHA corrections are not really necessary since the gridding process alone will probably do the same thing. Not understanding all the details of all these statistical data smoothing operations, I won’t offer an opinion on Zeke’s comment. But, from a general mapping perspective he has a point. You want to map a dataset that is as close to the measurements as possible. When you apply three smoothing algorithms to the measurements before you contour them and grid them, what do you have? What does it mean?

We will not get into the details of the NOAA corrections here; they are statistical, not corrections to specific instruments for known problems. Thus, they are different flavors of smoothing operations applied sequentially to the measurements. The TOB correction is described by Thomas Karl and colleagues (Karl, Williams, Young, & Wendland, 1986). NOAA averages minimum and maximum daily temperatures to derive the average daily temperature, so it matters whether the two temperature readings are recorded from the min-max thermometer at midnight or at some other time of the day. When calculations are done using monthly averages this difference is very small. Some NOAA web pages suggest that the TOB correction has been dropped for more recent versions of GHCN-Monthly, others say it is still used. Either way, it probably doesn’t make much difference in GHCN-Monthly or nClimDiv.
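
For reference, the uncontroversial part of that step, forming daily and monthly means from min-max readings, looks like the short sketch below. The TOB adjustment itself is a separate statistical model from the Karl et al. paper and is not reproduced here; the `daily` data frame is hypothetical.

```r
# Daily and monthly means from min-max readings. The TOB adjustment itself
# (Karl, Williams, Young, & Wendland, 1986) is not reproduced here.
# `daily` is a hypothetical data frame with columns id, year, month, tmin, tmax.
daily$tavg <- (daily$tmin + daily$tmax) / 2                      # daily mean
monthly    <- aggregate(tavg ~ id + year + month, daily, mean)   # monthly mean
```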

The second correction is the pairwise homogenization algorithm or PHA. This algorithm compares each station to its neighbors to determine if there are unusual anomalies and then attempts to fix them. This process is purely a statistical smoothing algorithm. It is described by Matthew Menne and Claude Williams (Menne & Williams, 2009a). This process is definitely being used in the most recent version of GHCN-Monthly.
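
The sketch below is only a toy illustration of the pairwise idea, not NOAA's PHA. A station and a nearby neighbor share the regional climate signal, so a step that shows up in their difference series points to a local inhomogeneity at one of them; the input vectors and the candidate breakpoint are hypothetical.

```r
# Toy illustration of the pairwise idea (not NOAA's PHA): difference the
# target station from a neighbor so the shared climate signal cancels, then
# estimate the step across a candidate breakpoint.
pairwise_step <- function(target, neighbor, breakpoint) {
  d      <- target - neighbor              # common regional signal cancels
  before <- d[seq_len(breakpoint)]
  after  <- d[-seq_len(breakpoint)]
  mean(after) - mean(before)               # estimated shift at the candidate break
}
```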

The final step in the smoothing process is the infilling of missing values using neighboring station data. This is done prior to gridding so more grid cells are populated. Infilling is probably still being done in the most recent version.
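
A minimal stand-in for infilling is sketched below. NOAA's scheme has its own station weighting, which is not reproduced here; the idea is simply to replace a missing monthly value with the average of neighboring stations for the same month.

```r
# Minimal stand-in for infilling (not NOAA's weighting scheme): fill a
# station's missing monthly anomalies with the mean of its neighbors.
# `x` is the target station's anomaly vector; `neighbors` is a matrix with
# one column per neighboring station, rows aligned with `x` by month.
infill <- function(x, neighbors) {
  miss    <- is.na(x)
  x[miss] <- rowMeans(neighbors, na.rm = TRUE)[miss]
  x
}
```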

Zeke makes the point that graphing actual temperatures, as we did in Figure 4, can be misleading. Over the course of the past 130 years, stations have moved, been added and removed, and the spatial distribution of stations has changed. The mean elevation of the stations has changed over time. These changes affect station anomalies less than the absolute temperatures. True enough, and this accounts for some of the difference between Figure 4 and Figures 6 and 7. Beyond a certain point the number of stations doesn’t matter, as can be seen in Figure 3. We start our plots in 1890 or 1895 because this is when we assume that sufficient stations existed in CONUS to compute a meaningful average. The USHCN dataset has 143 stations in 1890 and 608 in 1895, and these are the stations with the longest records and the best placement.

Discussion and Conclusions
Zeke’s next point is that Goddard did not grid his data. Thus, he did not deal with the uneven distribution of stations and the changing distribution of stations over time. These are real problems, and they do affect internal trends within CONUS, but gridding and the other corrections only smooth the data. None of these operations improve accuracy. In fact, they are more likely to reduce it. If we were using maps of CONUS data to identify trends within the country, I would agree with Zeke: smooth the data. But here we are concerned only about the National Temperature Index, which is external to CONUS. The index is an average temperature for the whole country; no statistical smoothing or gridding operation will improve it. Using anomalies rather than actual temperatures is important; the other operations are not.

An average of the station data anomalies is more appropriate than using a grid to produce a national average temperature trend. The average is as close to the real observations as you can get. The corrections and the gridding remove us from the measurements with several confounding steps.

If the corrections fixed known problems in the instruments, that would help accuracy. But they are statistical. They make the station measurements smoother when mapped and they smooth over discontinuities. In my opinion, NOAA has overdone it. TOB, PHA, infilling and gridding are overkill. This is easily seen in Figure 7 and by comparing Figure 3 to Figure 6 or Figure 5. Does the final trend in Figure 3 more closely resemble the measurements (Figure 6) or the net corrections (Figure 5)? The century slope of the data is 0.25°C; the corrections add 0.35°C to this, and the “climatological gridding algorithm” adds another 0.9°C! It is worth saying again: these statistical operations do nothing to improve the accuracy of the National Temperature Index, and they probably reduce it.

CONUS is a good area to use to check the viability of the “corrections” to the station data and the efficacy of the temperature gridding process. The current station coverage is very dense, as seen in Figure 1, and one would expect the gridded data to match the station data quite well. Figure 3 looks like the orange “final” curve in Figure 7, but it is steeper somehow, and that tells you all you need to know.

Dave Middleton and I have been (in my case “was”) in the oil and gas business for a long time. Between us we have seen more mapped BS than you could find in the Kansas City stockyards. My internal BS meter red-lines when I hear a laundry list of smoothing algorithms, correction algorithms, bias adjustments, etc. I want to scream “keep your &#$@ing maps and calculations as close to the real data as possible!”

In the first part of this post, I pointed out that to study climate change, we need to know more about ocean warming and the distribution and transport of thermal energy in the oceans. Land-based weather stations help predict the weather, but not climate. We argue a lot about relatively small differences in the land-surface temperatures. These arguments are interesting, but they don’t matter very much from the standpoint of climate change. The oceans control that, the atmosphere above land has little to do with it. Taking the raw data from GHCN-Daily and running it through four different smoothing algorithms (TOB, PHA, infilling and gridding) is, with all due respect, ridiculous. My recommendation? Don’t believe any of it, not that it matters much as far as climate is concerned.

A better indicator of climate change or global warming is the trend of ocean warming, shown in Figure 2. Notice the trend over the past 16 years is only 0.4°C per century. Compare this to the CONUS land-based measurements over the past 130 years, they predict 0.25°C, as shown in Figure 6, but NOAA’s fully “corrected” value is 1.5°C, as shown in Figure 3. Truly, which do you believe?

I used R to do the calculations plotted in the figures, but Excel to make the graphs. If you want to check the details of my calculations, you can download my GHCN R source code here.

None of this is in my new book Politics and Climate Change: A History but buy it anyway.

You can download the bibliography here.

via Watts Up With That?

https://ift.tt/3nTkslz

November 24, 2020 at 01:01PM

The Green New Deal In Bulgaria

Posted on November 24, 2020 by tonyheller

Over 1/3 of Bulgarians cannot afford to heat their homes adequately – News

A total of 33.7% of Bulgarians in 2018 could not afford sufficient heating in their homes, according to data from the European statistics agency Eurostat.

via Real Climate Science

https://ift.tt/3l092KT

November 24, 2020 at 11:55AM

80% of Republicans think the election was stolen and the media is the enemy

Donald Trump’s message of election fraud has reached millions of people even though Twitter, Facebook, and the media have said it was baseless every ten minutes since election day. Word is spreading so fast that even 12% of Democrats think the election was stolen.

Breitbart:

…Politico conducted its 2020 Voter Priorities Survey and results show that a vast majority of Trump voters believe the election results are not valid and that illegal voting and fraud took place during the election. Twenty-one percent of Trump voters believe the results are valid and 79 percent believe the election was stolen.

 This is a crisis for any democratic nation. The way to solve it is with a full and fair investigation which either cleans up the count or puts people in jail. Possibly nothing short of a new election (with paper ballots) will work.

….

Not surprisingly, 83% of Trump voters call the media “the enemy”.

….

The crisis in media trust is due to interviews like this one (below). Here is  Ari Melber of  MSNBC acting like the enemy…

What is he afraid of? Presumably he is worried that Democrat viewers will find out what is really going on. But even many normal, polite Democrats must find this interview bizarre.

 What’s holding the nation back

Republicans are consistent. When asked “what’s holding the country back” they said — you guessed it. “The Media.”

All roads lead to the media. If we had a better media, we’d get better politicians.

A nation divided

Because the media censors one half of the national conversation, it drives Americans on both sides further apart. One side simply has no idea what the other half is thinking; this fuels the hate and simplistic explanations. Ignorant Democrats have no respect for a group they don’t understand. This in turn feeds the distrust felt on the other side. Exiled Republicans are seeking out their own new forums and news sources.

There is no national conversation, just two tribes.

What did people vote for in 2020? Not climate change.

Of the sixteen choices, Republicans ranked Climate Change fourth from the bottom. Even 4 out of 5 Democrats didn’t rank it in the top three.

This is why conservative parties will fail if they pander to climate change.

Even half the Democrats realize their party is more extreme than it was four years ago.

via JoNova

https://ift.tt/3m5SUJ2

November 24, 2020 at 11:46AM

HEAVY SNOWFALL CLOSES 86-KM-LONG ROAD IN KASHMIR, NORTHERN INDIA

 CAP ALLON

The historic 86-km (53.5-mile) Mughal road, which connects the northern Indian regions of Shopian in Kashmir with Rajouri and Poonch in Jammu, was closed to all traffic on Monday following more than a foot of fresh snowfall.

The new snow combined with sub-zero temperatures and impressive accumulations already on the ground to deliver treacherous driving conditions, avalanches, and the suspension of all traffic in the region–a reality which means many remote towns and villages have now been completely cut off.

The National Highway Authority of India (NHAI) and Border Roads Organisation (BRO) have reportedly deployed two snow cats along the Mughal road, but have said further snowfall is hampering the clearing efforts.

#kashmir #india #srinagar #jammu #love #kashmirtourism #photography #instagram #nature #kashmirvalley #jammukashmir #snow #snowfall pic.twitter.com/tvYpA4w6rr— Ladakh News Updates (@LadakhNews) November 24, 2020


The sub-divisional magistrate of Gurez, Mudasir Ahmad, told KNS that food and supplies have been delivered to the worst-hit and hardest-to-reach regions.

Ahmad says medicines have also been stocked up in the Gurez Valley.

Snow-clearing efforts, Mughal Road, Razdan Top, Gurez.


The substantial snow cover in northern India is creating issues down south, too. The nation’s capital city, Delhi, has suffered record low temperatures through October and November: October was Delhi’s coldest on average since 1962, while November has seen its coldest temperature in 17 years.

As Mahesh Palawat, an expert at Skymet Weather explains: the dip in the capital’s mercury is due to the icy cold winds blowing in from the snow-laden northern mountains.

And looking forward, there seems to be no let-up to India’s cold and snow in sight–with heavy snowfall persisting today, Tuesday, November 24:

Gulmarg, other parts of Kashmir receive snow for 2nd day; cloudy skies in Srinagar#JammuKashmir #gulmarg #diprjk#snowfall pic.twitter.com/u1ImdJtl1O— United News of India (@uniindianews) November 24, 2020

J&K: Higher reaches of Pir Panjal Range receive heavy snowfall; Mughal Road blocked due to snow. pic.twitter.com/Xki5jom8BA— Prasar Bharati News Services पी.बी.एन.एस. (@PBNS_India) November 24, 2020


Northern India’s substantial snow cover is also assisting in driving the Northern Hemisphere’s Total Snow Mass to levels some 400 gigatons above the 1982-2012 average:

FMI [globalcryospherewatch.org].


The COLD TIMES are returning, the mid-latitudes are REFREEZING in line with historically low solar activity, cloud-nucleating Cosmic Rays, and a meridional jet stream flow.

Both NOAA and NASA appear to agree, if you read between the lines, with NOAA saying we’re entering a ‘full-blown’ Grand Solar Minimum in the late-2020s, and NASA seeing this upcoming solar cycle (25) as “the weakest of the past 200 years”, with the agency correlating previous solar shutdowns to prolonged periods of global cooling here.

Furthermore, we can’t ignore the slew of new scientific papers stating the immense impact The Beaufort Gyre could have on the Gulf Stream, and therefore the climate overall.


Prepare accordingly— learn the facts, relocate if need be, and grow your own.

Social Media channels are restricting Electroverse’s reach: Twitter are purging followers while Facebook are labeling posts as “false” and have slapped-on crippling page restrictions.

Be sure to subscribe to receive new post notifications by email (the box is located in the sidebar >>> or scroll down if on mobile).

And/or become a Patron, by clicking here: patreon.com/join/electroverse, and/or consider “allowing ads” for http://www.electroverse.net if you use a blocker.

The site receives ZERO funding, and never has. So any way you can, help us spread the message so others can survive and thrive in the coming times.

Grand Solar Minimum + Pole Shift

via Electroverse

Arctic Adds 3 Wadhams of Ice in November (so far)

After concerns over lackluster ice recovery in October, November is seeing ice roaring back. The image above shows the last 3 weeks adding 3 M km2 of sea ice. (The metric 1 Wadham = 1 M km2 comes from Prof. Peter Wadhams’ predictions of an ice-free Arctic, meaning less than 1 M km2 of extent.) The Russian shelf seas on the left filled with ice early on. On the CanAm side, Beaufort at the bottom center is iced over, the Canadian Archipelago (center right) is frozen, and Baffin Bay is filling from the north down. Hudson Bay (far right) first grew fast ice around the edges and is now half iced over. A background post is reprinted below, showing that in just 23 days, 2020 has added 3.1 M km2, 50% more than an average 30-day November.

The graph above shows November Arctic ice extents for the 13-year average and some other notable years.  Note 2020 starts the month 1.5 M km2 below average, and is now ~400k km2 down, with sharp gains in the last week.  SII and MASIE have been closely synchronized, with SII lagging behind lately, while MASIE 2020 is close to 2019. 

Background from Previous Post: Arctic October Pent-up Ice Recovery

Some years ago, reading a thread on global warming at WUWT, I was struck by one person’s comment: “I’m an actuary with limited knowledge of climate metrics, but it seems to me if you want to understand temperature changes, you should analyze the changes, not the temperatures.” That rang bells for me, and I applied that insight in a series of Temperature Trend Analysis studies of surface station temperature records. Those posts are available under this heading: Climate Compilation Part I Temperatures

This post seeks to understand Arctic Sea Ice fluctuations using a similar approach: Focusing on the rates of extent changes rather than the usual study of the ice extents themselves. Fortunately, Sea Ice Index (SII) from NOAA provides a suitable dataset for this project. As many know, SII relies on satellite passive microwave sensors to produce charts of Arctic Ice extents going back to 1979.  The current Version 3 has become more closely aligned with MASIE, the modern form of Naval ice charting in support of Arctic navigation. The SII User Guide is here.

There are statistical analyses available, and the one of interest (table below) is called Sea Ice Index Rates of Change (here). As indicated by the title, this spreadsheet consists not of monthly extents, but of changes of extents from the previous month. Specifically, a monthly value is calculated by subtracting the average of the last five days of the previous month from the average of the final five days of the present month. So the value represents the amount of ice gained or lost during the present month.
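
In R, that rule can be sketched as below, assuming a hypothetical data frame of SII daily extents; summing twelve consecutive monthly values then gives the annual change used later in the post.

```r
# Monthly rate of change as described above: the mean of this month's final
# five days minus the mean of the previous month's final five days.
# `sii` is a hypothetical data frame of SII daily extents with columns
# date (Date) and extent (M km2).
last5_mean <- function(d) {
  d <- d[order(d$date), ]
  mean(tail(d$extent, 5))
}

ym        <- format(sii$date, "%Y-%m")
month_end <- sapply(split(sii, ym), last5_mean)  # mean of each month's last 5 days
rate      <- diff(month_end)                     # ice gained (+) or lost (-) each month
# sum(rate[i:(i + 11)]) gives the net change over any twelve-month span
```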

These monthly rates of change have been compiled into a baseline for the period 1980 to 2010, which shows the fluctuations of Arctic ice extents over the course of a calendar year. Below is a graph of those averages of monthly changes during the baseline period. Those familiar with Arctic ice studies will not be surprised at the sine-wave form. December end is a relatively neutral point in the cycle, midway between the September Minimum and March Maximum.

The graph makes evident the six spring/summer months of melting and the six autumn/winter months of freezing.  Note that June-August produce the bulk of losses, while October-December show the bulk of gains. Also the peak and valley months of March and September show very little change in extent from beginning to end.

The table of monthly data reveals the variability of ice extents over the last 4 decades.

Table 1 Monthly Arctic Ice rates of Extent Changes in M km2. Months with losses in pink, months with gains in blue.

The values in January show changes from the end of the previous December, and by summing twelve consecutive months we can calculate an annual rate of change for the years 1979 to 2019.

As many know, there has been a decline of Arctic ice extent over these 40 years, averaging 40k km2 per year. But year over year, the changes shift constantly between gains and losses.

Moreover, it seems random as to which months are determinative for a given year. For example, much ado has been printed about October 2020 being slower than expected to refreeze and add ice extents. As it happens in this dataset, October has the highest rate of adding ice. The table below shows the variety of monthly rates in the record as anomalies from the 1980-2010 baseline. In this exhibit a red cell is a negative anomaly (less than baseline for that month) and blue is positive (higher than baseline).

Note that the +/- rate anomalies are distributed all across the grid, sequences of different months in different years, with gains and losses offsetting one another. Yes, October 2020 recorded a lower than average gain, but higher than 2016. The loss in July 2020 was the largest of the year, during the hot Siberian summer. The bottom line presents the average anomalies for each month over the period 1979-2020. Note the rates of gains and losses mostly offset, and the average of all months in the bottom right cell is virtually zero.

A final observation: The graph below shows the Yearend Arctic Ice Extents for the last 30 years.

Note: SII daily extents file does not provide complete values prior to 1988.

Year-end Arctic ice extents (last 5 days of December) show three distinct regimes: 1989-1998, 1998-2010, 2010-2019. The average year-end extent 1989-2010 is 13.4M km2. In the last decade, 2009 was 13.0M km2, and ten years later, 2019 was 12.8M km2. So for all the fluctuations, the net loss was 200k km2, or 1.5%. Talk of an Arctic ice death spiral is fanciful.

These data show a noisy, highly variable natural phenomenon. Clearly, unpredictable factors are in play, principally water structure and circulation, atmospheric circulation regimes, and also incursions and storms. And in the longer view, today’s extents are not unusual.

Illustration by Eleanor Lutz shows Earth’s seasonal climate changes. If played in full screen, the four corners present views from top, bottom and sides. It is a visual representation of scientific datasets measuring Arctic ice extents.

via Science Matters

https://ift.tt/3fGWGXf

November 24, 2020 at 10:47AM

Dominion Exec: Trump is not going to win – I made f***ing sure of that

Video – One America (OANN.com) releases a bombshell discovery about a key member of Dominion’s leadership.

In the video, One America’s Chief White House Correspondent Chanel Rion spoke with the founder of FEC United, Joe Oltmann.

Oltmann, in turn, says he heard Eric Coomer, director of product strategy and security for Dominion Voting Systems, bragging about rigging the 2020 election against Donald Trump.

Here’s the OANN link:
https://www.oann.com/dominion-executive-trump-is-not-going-to-win-i-made-fing-sure-of-that/

Snopes has tried to pooh-pooh this information, but so far hasn’t convinced me.

“(It’s) not clear whether Coomer’s alleged remarks, if they did take place, were made in earnest, and therefore whether they should be interpreted as proof of election meddling,” says Snopes. “Oltmann told Malkin he had heard Coomer laughing after allegedly bragging about interfering in the election (“Trump is not going to win, I made fucking sure of that. Hahaha.”)”

“Furthermore, even if Coomer made the comment in earnest, he might have been lying, either to impress others on the call, to exaggerate his power and influence, or for some other reason.”

Here’s all of Snopes’ rather weak ‘rebuttal’:
https://www.snopes.com/fact-check/eric-coomer-dominion-trump/

via Ice Age Now

https://ift.tt/2HvORGY

November 24, 2020 at 10:28AM

Stephen Moore: Paris climate treaty puts America last


Politicians may see votes in joining the current climate bandwagon. But what happens when the results of their extravagant policies hit home, power becomes less reliable and energy and travel costs soar, all for no discernible benefit?
– – –
Here we are in the midst of the second wave of a once-in-a-half-century pandemic, with the economy flattened and millions of Americans unemployed and race riots in the streets of our major cities.

And Joe Biden says that one of his highest priorities as president will be to…re-enter the Paris Climate Accord.

Trump kept his America First promise and pulled America out of this Obama-era treaty. Biden wants us back in — immediately. Why?

Paris is an unmitigated failure. You don’t have to take my word for it.

National Geographic, a supporter of climate change action, recently ran the numbers and admits in its recent headline: “Most Countries Aren’t Hitting 2030 Climate Goals.” That’s putting it mildly. Most haven’t even reached half their pledged target for emission reductions.

Robert Watson, the former chair of the Intergovernmental Panel on Climate Change, laments: “Countries need to double and triple their 2030 reduction commitments to be aligned with the Paris target.” Gee, this sounds like a treaty we definitely should be part of and pay the bills for.

The one country making substantial progress in reducing carbon emissions is the U.S. under President Donald Trump. Even though our gross domestic product is way up over the past four years, our carbon dioxide emissions are DOWN. Our air pollution levels and emissions of lead, carbon monoxide, and other pollutants are at record-low levels.

Meanwhile, Beijing is far and away the largest polluter. Year after year, it makes hollow promises to stop climate change while building dozens of new coal plants. India and its 1 billion people are hooked on coal, too.

Here is Paris in a nutshell: We put our coal miners out of their jobs and cripple our $1 trillion oil and gas industry while China and India keep polluting and laugh at us behind our backs.

These nations have bigger and more immediate development priorities than worrying about climate change models and their guestimates of the global temperature in 50 years.

Full article here.

via Tallbloke’s Talkshop

https://ift.tt/3frL6is

November 24, 2020 at 10:18AM

European ‘green recovery’ falters as car sales continue to tumble

Passenger car registrations in the EU during the ten months ended October 2020 fell around 27% year over year to 8 million units. This marked the worst decline yet, outpacing the slump during the 2009 financial crisis.

Eleven million jobs at risk from EU Green Deal, trade unions warn

The auto market in Europe is in murky waters. Production and sales of vehicles in Europe had already taken a hit amid the economic slowdown, challenges associated with the electric vehicle rollout, and the detrimental effects of the U.S.-China trade tiff.

The coronavirus outbreak further compounded the woes. European car registrations fell from January to August 2020 amid pandemic-driven sluggish demand, followed by a surprising one-month reprieve in September. However, the rebound failed to last amid another wave of coronavirus. Sales for October fell back as a result of the reimposition of restrictions in many countries amid rising coronavirus cases.

Per the European Automobile Manufacturers Association, passenger car sales in Europe again slipped into negative territory in October, reversing September’s first increase of the year. Registrations of new cars in the European Union (EU) totaled 953,615 units in October, down 7.8% year over year.

Demand for vehicles declined across all four major markets in Europe last month. While car sales plummeted 21% year over year in Spain, France saw a drop of 9.5% in new passenger car registrations. Sales in Germany and Italy slid 3.6% and 0.2%, respectively, on a year-over-year basis. In fact, sales declined across all EU markets except Romania and Ireland.

Full story 

via The Global Warming Policy Forum (GWPF)

https://ift.tt/39bbLyR

November 24, 2020 at 10:10AM

STEVE’s Cannonballs: Little Green Spheres of Light

Quote: ‘STEVE is a recently identified atmospheric phenomenon caused by supersonic plasma jets flowing at altitudes >100 km.’ Scientists continue to wrestle with its electromagnetic mysteries.

Spaceweather.com

Nov. 22, 2020: Just when you thought STEVE couldn’t get any weirder. A new paper published in the journal AGU Advances reveals that the luminous purple ribbon we call “STEVE” is often accompanied by green cannonballs of light that streak through the atmosphere at 1000 mph.

“Citizen scientists have been photographing these green streaks for years,” says Joshua Semeter of Boston University, lead author of the study. “Now we’re beginning to understand what they are.”

STEVE is a recent discovery. It looks like an aurora, but it is not. The purple glow is caused by hot (3000 °C) rivers of gas flowing through Earth’s magnetosphere faster than 13,000 mph. This distinguishes it from auroras, which are ignited by energetic particles raining down from space. Canadian aurora watchers first called attention to the phenomenon about 10 years ago, whimsically naming it STEVE; researchers have been studying it ever since.

There’s a dawning realization that STEVE is more than just a purple ribbon. Photographers often catch it flowing over a sequence of vertical pillars known as the “picket fence.” They’re not auroras either. And, now, Semeter’s team has identified yet another curiosity in their paper, entitled “The Mysterious Green Streaks Below STEVE.”

“Beneath the picket fence, photographers often catch little horizontal streaks of green light,” explains Semeter. “This is what we studied in our paper.”

Semeter’s team gathered pictures of the streaks taken by citizen scientists in Canada, the United States and New Zealand. In some cases, the same streaks were photographed by widely-separated photographers, allowing a triangulation of their position. Analyzing dozens of high-quality images, the researchers came to these conclusions:

1. The streaks are not streaks. They are actually point-like balls of gas moving horizontally through the sky. In photos, the ‘green cannonballs’ are smeared into streaks by the exposure time of the cameras (a rough check of the numbers follows this list).

2. The cannonballs are typically 350 meters wide, and located about 105 km above Earth’s surface.

3. The color of the cannonballs is pure green, much more so than ordinary green auroras, reinforcing the conclusion that they are a different phenomenon.
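
A back-of-the-envelope check of the smearing in point 1, using the article's numbers (about 1,000 mph and a 350-meter ball) plus an assumed exposure time of a few seconds; the exposure is an assumption, not a figure from the paper.

```r
# Why a point-like ball photographs as a streak. Speed and diameter are from
# the article; the camera exposure is an assumed value, not from the paper.
speed_ms   <- 1000 * 0.44704         # 1,000 mph in m/s, about 447 m/s
exposure_s <- 5                      # assumed exposure time, seconds
smear_m    <- speed_ms * exposure_s  # ~2,200 m of travel during the exposure
smear_m / 350                        # ~6 times the ball's own 350 m diameter
```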

Above: The pure green of STEVE’s cannonballs (upper left) is compared to the blue-green and other mixed colors of auroras. Credit: Joshua Semeter, Boston University

So, what are the cannonballs? Semeter believes they are a sign of turbulence. “During strong geomagnetic storms, the plasma river that gives rise to STEVE flows at extreme supersonic velocities. Turbulent eddies and whirls dump some of their energy into the green cannonballs.”

This idea may explain their pure color. Auroras tend to be a mixture of hues caused by energetic particles raining down through the upper atmosphere. The ‘rain’ strikes atoms, ions, and molecules of oxygen and nitrogen over a wide range of altitudes. A hodge-podge of color naturally results from this chaotic process. STEVE’s cannonballs, on the other hand, are monochromatic. Local turbulence excites only oxygen atoms in a relatively small volume of space, producing a pure green at 557.7 nm; there is no mixture.

“It all seems to fit together, but we still have a lot to learn,” says Semeter. “Advancing this physics will benefit greatly from the continued involvement of citizen scientists.”

If you’re an aurora photographer looking to contribute, be sure to read Semeter et al’s original research at https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020AV000183.

via Tallbloke’s Talkshop

https://ift.tt/2J32q1i

November 24, 2020 at 09:06AM