From Watts Up With That?
by Chris Hall
Peer-reviewed scientific literature is quite likely the most boring form of prose known to man. I have some experience with this, having been an author, reviewer, and editor covering a wide array of fields in the Earth Sciences for the past 45 years. Whenever you try to sneak in a narrative, a joke, or even an active voice, somebody catches you and raps your knuckles. I tell you this to explain why I so much admire the posts on this blog by Willis Eschenbach, and I am so envious that he is allowed to write something that’s both readable and interesting. Now that I’m retired and don’t have to “play the game” any more, I promised myself that I could try to write a paper without the usual constraints, just for fun. Don’t worry. I’ll be as honest as possible (within reason) and give you the story almost as it happened.
My background is in argon geochronology and noble gas isotopic geochemistry. That may sound pretty esoteric, but by their very natures, these fields allow and even require you to collaborate with other researchers in an extremely broad array of fields. I have worked on geomagnetism, lunar geology, mining geology, tectonics, mantle processes, geothermal power, volcanology, petroleum geology, oceanography, the cryosphere, cloud physics, early man evolution, meteor impacts, and paleoclimate. Along the way, you pick up a lot, mostly by osmosis. Sadly, I don’t have any tales of living in the Solomon Islands (I’m very envious). Aside from a very occasional field trip, I’m mostly a lab based experimentalist with expertise in vacuum systems, lasers, mass spectrometry and data reduction. But, enough about me.
One area that I had not studied in detail is sea level rise. I’ve seen the arguments on both sides and although I’ve been skeptical of claims of gloom and doom, I didn’t know enough to make any contribution. I became interested when I heard claims that sea level rise was accelerating. Most of the arguments about sea level rise have centered around trying to estimate the current rate of rise, and there seemed to be a lot of methods and shoe-horning of different data sets into a single coherent story. This, of course, required “adjustments”, and alarmists could always find things to be alarmed about, while skeptics could wag their fingers and minimize any discovered acceleration.
My interest in the field started after I saw the excitement generated by Nerem et al. (2018), herein referred to as PNAS2018, that claimed to find “climate-change-driven” sea level acceleration over the past 25 years using satellite and tidal gauge records. Now, I have no doubt that climate change has driven sea level rise, but the big question is whether climate-change-driven equals anthropogenic. If it is, then acceleration should be low or non-existent until possibly the second half of the 20th century, followed by a sharp rise. Sort of like a hockey stick, or the rocket sled of the spaceship in the great George Pal sci-fi classic When Worlds Collide. If, however, climate-change-driven is natural, we would expect acceleration to be more oscillatory, kind of like a roller coaster. PNAS2018 does not address this directly, but to answer the question, we have to look into the past.
The Raw Material and Basic Assumptions
I found a treasure trove of tidal gauge data from the Permanent Service for Mean Sea Level
(Holgate et al., 2013; PSMSL, 2022). The monthly data from over 1,500 sites came in the form of a zip file (rlr_monthly.zip) with all of the data plus a very helpful Matlab script that can extract the data and organize it into a data structure suitable for further analysis. I’ve used Matlab a few times, but much prefer the open source GNU Octave “work-alike”, as it has a nicer interface (at least in Linux) and is not quite so resource hungry. The Matlab script that came with the PSMSL data didn’t work “out of the box” in Octave, but a little simplification of some input format definitions sorted that out nicely.
Although the PSMSL data are adjusted to give a uniform estimate of sea level, my hope was that there were not any adjustments that would affect estimates of sea level acceleration. For any function, you can add or subtract a constant or a linear trend without having an effect on the second derivative of the function, so the amount of sea level acceleration in mm/yr/yr should remain unchanged under those kinds of adjustments. If adjustments were done piecewise, that should show up as discontinuities in the second derivative.
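This invariance is easy to check numerically. Here is a quick sketch (in Python for illustration, rather than the Octave/R I actually used): fit quadratics to a toy series before and after a linear “adjustment”, and the fitted acceleration is unchanged.

```python
import numpy as np

# Monthly time axis over 25 years (toy illustration)
t = np.arange(0, 25, 1 / 12)

# A toy "sea level" series with a known acceleration of 0.5 mm/yr^2:
# level = 0.5 * a * t^2
a_true = 0.5
level = 0.5 * a_true * t**2

# The same series after a linear "adjustment" (offset plus trend)
adjusted = level + 100.0 + 3.0 * t

# Fit quadratics; acceleration is twice the leading coefficient
acc_raw = 2 * np.polyfit(t, level, 2)[0]
acc_adj = 2 * np.polyfit(t, adjusted, 2)[0]

print(round(acc_raw, 6), round(acc_adj, 6))  # both ~0.5: the trend drops out
```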
I also assumed that virtually any variation in sea level due to tectonic effects such as river delta compaction or glacial rebound, whether near or far field, should be nearly linear with respect to time over the period of interest, which is about the past century. Therefore, such effects should have no impact on estimated acceleration. Any anthropogenic signals, such as enhanced mining of water from deep aquifers, global warming of the oceans, local subsidence due to groundwater exploitation, should show up as a positive acceleration, and would constitute a true anthropogenic signal.
How I Did It
After perusing the PSMSL data, it was clear that most of the sites have only spotty coverage over the past century or so. One is confronted with varying start and stop dates, along with frequent data outages, or missing data, the bane of working with large data sets collected by other people. My first job was to identify a subset of tidal gauge records with decent coverage over a reasonable amount of time, along with choosing a standard time span over which I could try to tease out acceleration signals. I wound up picking the period of 1925 to 2015, a total of 90 years, as my standard time period. Before 1925 and, strangely enough, after 2015, the coverage tends to drop off. I sorted out the records that had the fewest dreaded “NaNs” (Not A Number) and came up with the top 100 sites on the hit parade. About a quarter of those sites have perfect data coverage, and the worst have about 90%. The top 100 site locations are shown in Fig. 1.
As you can see, there’s very little coverage in the Southern Hemisphere, and the Atlantic Basin has a lot more data than the Pacific or Indian Oceans. Them’s the historical breaks, I’m afraid. I just had to hope that a global sea level record would be, ummm, global, and I worked with what I could get.
Next came the question of how to combine data sets to cancel out high frequency noise and “see” longer term accelerations. Remembering my geophysics courses from a lifetime ago, I figured that linearly combining, or “stacking”, the time series might help to cancel out some noise and reinforce the underlying global signals. However, mindful of the high density of sites in Europe and Eastern North America, I decided to divide the data sets into a 5×5 degree grid and perform area weighted averages. Within a 5×5 cell, a simple average of the sites within the cell represents the cell’s average value.
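The gridding idea can be sketched as follows (Python for illustration; the cos-latitude weighting is my reading of “area weighted”, since the area of a 5×5 degree cell scales roughly with the cosine of its latitude, and the exact scheme I used may differ in detail):

```python
import numpy as np

def grid_weighted_mean(lats, lons, values, cell=5.0):
    """Average station values within each cell x cell degree box, then
    combine the cell means with area weights (~cos of cell latitude).
    Crowded regions thus count per-cell, not per-station."""
    cells = {}
    for lat, lon, v in zip(lats, lons, values):
        if np.isnan(v):                      # skip missing readings
            continue
        key = (np.floor(lat / cell), np.floor(lon / cell))
        cells.setdefault(key, []).append(v)
    num = den = 0.0
    for (ilat, _), vals in cells.items():
        lat_mid = (ilat + 0.5) * cell        # cell centre latitude
        w = np.cos(np.radians(lat_mid))      # cell area ~ cos(latitude)
        num += w * np.mean(vals)
        den += w
    return num / den

# Two stations crowded into one European-style cell count once;
# a lone near-equator station gets its own cell's full weight.
m = grid_weighted_mean([51.0, 52.0, 1.0], [0.5, 1.5, 100.0],
                       [10.0, 12.0, 20.0])
```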
Along with figuring out how to get some sort of global average signal, it was important to also determine a reliable and non-subjective method of deriving a sea level acceleration time series.
It is at this point that I explored many different avenues, many of which wound up being “dead ends”. This included using one of Willis E’s favorite tools, CEEMD, or Complete Ensemble Empirical Mode Decomposition, in the R library “hht”.
This could nicely tease out the high and low frequency components of tidal gauge records, but there was no analytical means I could see to derive second derivatives from the Intrinsic Mode Functions, or IMFs (not Impossible Mission Force), so calculating acceleration would have to be done numerically.
Another issue was that it was not clear which IMFs should be used, leading to irksome subjectivity issues. Also explored were methods like Fourier analysis and convolving tidal records with Gaussian distributions. These “dead end” methods had the charm of yielding analytical second derivatives, but they, along with CEEMD, all had problems with what to do at the beginning and end of the records. “Edge” effects had a bad habit of introducing spurious “signals” near the beginnings and ends of records.
Figure 1: Red dots indicate the location of the 100 sites with the most complete coverage of data in the time period 1925 to 2015
I wound up using the same technique used in PNAS2018, which was to fit a quadratic polynomial over a time window. I copied their time duration of 25 years, but instead of just fitting a quadratic over the past 25 years, I had a sliding window centered around each monthly record, plus or minus 12.5 years. I cheated a bit by fitting the quadratic polynomial to 301 monthly data points (25 years + 1 month), so that the acceleration value derived could be properly centered about a monthly data point and not some midpoint between monthly tide estimates. To avoid spurious artifacts at the beginning and end of the records, I only calculated accelerations from 1937.5 to 2002.5, with 12.5 years chopped off the ends of the standard data time window. So acceleration records are only 65 years long and not the 90 years in the standard time window mentioned above.
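The sliding-window fit can be sketched like this (Python for illustration): fit a quadratic to each 301-point window and take twice the quadratic coefficient as the acceleration at the window’s centre point.

```python
import numpy as np

def sliding_acceleration(t, y, window=301):
    """Centered quadratic fit in a sliding window; acceleration is
    twice the quadratic coefficient. With window=301 (25 yr + 1 month)
    the estimate is centred exactly on a monthly data point. A sketch
    of the PNAS2018-style method described in the text."""
    half = window // 2
    acc = np.full(len(y), np.nan)            # edges stay NaN
    for i in range(half, len(y) - half):
        tw = t[i - half:i + half + 1]
        yw = y[i - half:i + half + 1]
        ok = ~np.isnan(yw)
        if ok.sum() < window * 0.8:          # require decent coverage
            continue
        c = np.polyfit(tw[ok] - t[i], yw[ok], 2)
        acc[i] = 2 * c[0]                    # mm/yr^2
    return acc

# Sanity check: 90 yr of monthly data with constant 0.1 mm/yr^2
t = np.arange(1925, 2015, 1 / 12)
y = 0.5 * 0.1 * (t - 1925) ** 2
a = sliding_acceleration(t, y)               # ~0.1 away from the edges
```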
To check that a data combination record worked properly, I created a second dataset with a synthetic “anthropogenic” signal added. Starting in 1970, an artificial acceleration of 0.084 mm/yr2 was added to the raw dataset. That value is the amount of “climate-change-driven” acceleration detected in the modern satellite record in PNAS2018. If a method of combining tidal records worked to derive acceleration with a high degree of fidelity, the difference between the acceleration record from the raw+synthetic and just raw data should equal the artificially inserted acceleration. Fig. 2 shows the results from a variety of methods that were tried out.
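Constructing the synthetic signal is straightforward: a constant acceleration switched on at 1970 contributes ½·a·(t − 1970)² to the sea level record thereafter. A sketch (Python for illustration; this is my reading of how such a step would be inserted):

```python
import numpy as np

# Monthly time axis over the standard window
t = np.arange(1925, 2015, 1 / 12)

# A constant acceleration of 0.084 mm/yr^2 (the PNAS2018 value),
# switched on in 1970, adds 0.5 * a * (t - 1970)^2 mm afterwards
a_synth = 0.084                              # mm/yr^2
dt = np.clip(t - 1970.0, 0.0, None)          # zero before 1970
synthetic = 0.5 * a_synth * dt ** 2          # mm, to add to raw tide data
```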
The faint black line is the actual acceleration “step function” added, but the smoother curve shown as the thick black line is the best estimate you can achieve using a moving 25 year quadratic polynomial fit. That’s because the fitting procedure effectively performs a moving average of the data. The orange line shows the result of first performing an area weighted average of all 100 sites and then fitting quadratic functions to the weighted average. It works well, but there is a slight deviation at the beginning of the record, and it’s my belief that this is an artifact of the larger proportion of the dreaded NaNs in that time period. For fun, I tried out this approach for the top 500 tide gauge sites (shown as the blue line), and the NaNs made the whole exercise pointless.
I tried to eliminate the NaN problem by “filling in” missing data using the very clever algorithm in the missMDA library in R, which first does a Principal Component Analysis (PCA) on the complete dataset, picks out the top components, then uses them to fill in any NaNs that might be lurking in the data. The results of that effort are shown in the green line. This improved the situation in the early part of the record, but introduced an artifact near the end. Sigh. Filling in missing data created artificial information, it seemed. Interestingly, only 5 components were needed to fill in the missing data, which, given that we are working with 100 records, indicates a significant amount of correlation between individual tidal records.
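The idea behind that imputation can be sketched as iterative low-rank reconstruction (Python for illustration; a simplified version of the approach, not missMDA’s exact regularised algorithm):

```python
import numpy as np

def impute_pca(X, ncomp=5, iters=50):
    """Iterative PCA imputation in the spirit of missMDA: fill missing
    cells from a rank-ncomp reconstruction, repeating until the filled
    values settle. Observed cells are never altered."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.where(miss)[1])   # initial guess
    for _ in range(iters):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        low_rank = (U[:, :ncomp] * s[:ncomp]) @ Vt[:ncomp] + mu
        X[miss] = low_rank[miss]                     # update missing only
    return X

# Rank-1 toy data with one hole: imputation recovers it closely
rng = np.random.default_rng(0)
u = rng.normal(size=30)
v = rng.normal(size=8)
M = np.outer(u, v)
M_holed = M.copy()
M_holed[3, 2] = np.nan
filled = impute_pca(M_holed, ncomp=1)
```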
Finally, like Swamp Castle, I found a method that stood up. By first fitting quadratic polynomials to the raw data, then doing an area weighted average of the individual acceleration records, it was possible to get a perfect recreation of the synthetic acceleration added to the original data. This record is shown in red. I believe that this worked only because the 301 data point window used for the fitting easily spanned all of the NaNs in the individual records, where no contiguous series of over about 100 or so NaNs occurs.
Figure 2: Comparison of the different methods explored to combine tide gauge data to measure sea level acceleration. An acceleration of 0.084 mm/yr2 was added to the raw data in 1970. This figure shows the difference between the raw+synthetic and the raw data. The thin dashed line shows the synthetic signal. The thick black dashed line shows the best achievable result from fitting a quadratic function to a 25 yr span of data. The best results were achieved by fitting quadratics before doing an area weighted average. See text for a description of the different methods.
OK, so how did we do? The main question I needed answered at this point was:
- Can we replicate the positive acceleration seen in PNAS?
Fig. 3 shows a blow-up of the averaged acceleration data near the end of the record, as it approaches the value estimated in PNAS2018. My record ends before theirs, but as you can see, it is certainly approaching the value they got. So, yes, I’d say that around the year 2000, acceleration was positive and somewhat increasing. My acceleration record did not have any of the corrections used in PNAS2018, such as an ENSO correction and an estimate of interannual precipitation effects, but my record is pointed in the right direction. I might have tried some corrections, but I was stymied by what I consider a poor “feature” of PNAS2018: a lack of online data and computer code. I’m guessing that if I spent a year or so digging around in various references, it might be possible for someone not in the club to figure it out, but my enthusiasm for that waned very quickly. I fault PNAS for this, as many journals, including the one where I was an associate editor, now require it. Bad show.
Fig. 3 also shows something interesting as there is a distinct annual signal (blue line), possibly due to varying Northern Hemisphere Terrestrial Water Storage (TWS) on land. The red line shows the lower frequency part of the acceleration record by removing the first 2 IMFs from the CEEMD decomposition of the acceleration average.
Figure 3: Close up of the results of performing an area weighted average of the top 100 sites by pre-fitting a moving quadratic function over a 25 yr span. The average acceleration is increasing and is headed toward the estimate in PNAS2018. The red line is the weighted average minus the two highest frequency IMFs derived from a CEEMD decomposition of the signal. This removes the annual variations seen in the data.
Figure 4: Full results of the area weighted average model. The dark blue line indicates the full model from all 100 sites. The light blue background is from a sensitivity test where 100 separate models were created from randomly selected subsets of 50 sites. This gives an indication of how sensitive the results are to the selection of sites. For comparison, the acceleration of the HadCRUT4 sea surface temperature record is shown in red.
The next question is:
- Is there a distinct anthropogenic signal in the area weighted average acceleration record?
The answer to this is shown in Fig. 4, and I believe that it is “no”. The blue line shows the apparent sea level acceleration over the entire 65 year time span. As you can clearly see, acceleration seems to have varied significantly during the 20th century, with both positive and negative values whose absolute values far exceed the “climate-change-driven” values in PNAS2018. The 1940s seemed to be a time of decreasing acceleration, with the 1950s showing sea level deceleration. Acceleration resumed in the 1960s, followed by 1970s deceleration and smaller amplitude variations since then. I did a sensitivity test by running 100 combinations of randomly selected groups of 50 sites (light blue shading in Fig. 4). This gives one a feel for how sensitive the final average is to any particular group of datasets. For fun, I also plotted the “acceleration” of the HadCRUT4 sea surface temperature record (SST) in red, which has some features suspiciously similar to the average sea level acceleration.
I noticed something rather strange when I looked at the results of the sensitivity analysis. The results did not show a Gaussian style grouping around the global average. Instead, there is a gap, where some groupings depart in an oscillatory fashion both above and below the average. This is particularly apparent in the 1950s dip, the 1960s peak and the 1970s valley. Looking into this further, it seems that the presence or absence of the small subset of Southern Hemisphere sites within a group was having a disproportionate effect on the results.
So I decided to calculate a purely Southern Hemisphere acceleration record, and it has a maximal correlation with SST acceleration of 0.668, where sea level lags SST by 7 years. This is illustrated in Fig. 5, where SST acceleration is shifted to the right by 7 years. The time series were detrended to have a mean of zero and scaled to have a variance of one. My Mark I eyeball test suggests that these two time series just might be slightly related. It also suggests that although sea level was apparently decelerating in the Southern Hemisphere in the early 2000s, it is probably accelerating now, though things might be topping out as we speak. Also, the idea of a 7 year lag between SST acceleration and a response in the massive Southern Hemisphere ocean basins does not set off my BS meter.
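Finding that 7 year lag amounts to scanning lagged correlations and keeping the best one. A minimal sketch (Python for illustration, with a toy delayed sine standing in for the real records):

```python
import numpy as np

def best_lag_corr(sl_acc, sst_acc, max_lag=15):
    """Scan lags where sea level LAGS SST (SST shifted right) and
    return (best_lag_in_steps, correlation). Series are assumed to be
    detrended and standardised already, as in the text."""
    best = (0, -np.inf)
    n = len(sl_acc)
    for lag in range(0, max_lag + 1):
        a = sl_acc[lag:]                 # sea level, later part
        b = sst_acc[:n - lag]            # SST, earlier part
        r = np.corrcoef(a, b)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Toy check: a sine and the same sine delayed by 7 steps
t = np.arange(200)
sst = np.sin(2 * np.pi * t / 40)
sl = np.roll(sst, 7)                     # delayed copy (wraps at the edge)
lag, r = best_lag_corr(sl, sst)          # recovers lag = 7
```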
However, before we get too excited, it’s important to note that we are dealing with highly autocorrelated time series, and correlations could easily be spurious. I used the auto.arima function in the “forecast” R library to characterise the autocorrelation parameters of the average acceleration record. Then I made 200 randomly generated records having the same autocorrelation parameters using the sarima.sim function in the R “astsa” library. The average correlation coefficient with SST acceleration was 0.479, which is pretty high, so the correlation between the Southern Hemisphere and SST accelerations could definitely be spurious. I constrained the search for maxima to the region where sea level lags temperature, as I didn’t want to find out that the sun comes up because the rooster crows. But Fig. 5 is pretty, no?
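The surrogate test can be sketched like this (Python for illustration; a simple AR(1) with an assumed coefficient stands in for the fitted ARIMA model, so the numbers are only indicative): generate unrelated records with realistic autocorrelation, scan each against the target, and see how big the “best” correlation gets by chance alone.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi, rng):
    """Simple AR(1) surrogate series (stand-in for sarima.sim)."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

n, phi, max_lag = 780, 0.95, 15          # ~65 yr of monthly points
target = ar1(n, phi, rng)                # plays the role of SST acceleration

best = []
for _ in range(200):                     # 200 surrogate records
    s = ar1(n, phi, rng)
    rs = [np.corrcoef(s[lag:], target[:n - lag])[0, 1]
          for lag in range(max_lag + 1)]
    best.append(max(rs))                 # best lag-scanned correlation

null_mean = float(np.mean(best))         # chance level of the "best" correlation
```

With this much autocorrelation, the chance-level maximum is substantial, which is exactly why the 0.668 figure needs the grain of salt mentioned above.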
So what can we conclude? It seems to me that the tidal gauge dataset suggests that over the last two thirds of the 20th century, apparent sea level acceleration may have oscillated about a mean of zero with an amplitude of roughly 0.4 mm/yr/yr. There’s a hint that sea level acceleration may be related to SST acceleration, with SST leading sea level by about 7 years. This could clearly be called “climate-change-driven”, but there does not appear to me to be evidence that it is anthropogenic, if that means that it is driven by the release of CO2. It’s even possible that the sharp peaks and valleys in Figs. 4 and 5 are due to “helpful” corrections to sea level rise data, the discontinuities hinted at earlier. I don’t know.
I welcome reasonable and constructive suggestions and criticisms. Please treat me as courteously as you treat Willis Eschenbach and follow his standard rules.
Figure 5: Comparison of the area weighted model from just the Southern Hemisphere sites and the SST acceleration record. Both records have been normalized to have zero mean and unit variance. The SST record is shifted to the right by 7 yr. It is possible that sea level acceleration responds to accelerations in SST with about a 7 yr lag, and this is most clearly seen in the Southern Hemisphere sites.
Nerem, R.S., Beckley, B.D., Fasullo, J.T., Hamlington, B.D., Masters, D. and Mitchum, G.T., 2018. Climate-change–driven accelerated sea-level rise detected in the altimeter era. Proceedings of the national academy of sciences, 115(9), pp.2022-2025.
Permanent Service for Mean Sea Level (PSMSL), 2022, “Tide Gauge Data”, Retrieved 09 May 2022 from http://www.psmsl.org/data/obtaining/.
Holgate, S.J., Matthews, A., Woodworth, P.L., Rickards, L.J., Tamisiea, M.E., Bradshaw, E., Foden, P.R., Gordon, K.M., Jevrejeva, S. and Pugh, J., 2013. New Data Systems and Products at the Permanent Service for Mean Sea Level. Journal of Coastal Research, 29(3), pp.493-504. doi:10.2112/JCOASTRES-D-12-00175.1.