When the IPCC outlined its so-called risk management framework in section 2.3 of AR5, Chapter 2, it drew a distinction between a descriptive analysis of decision making and a normative analysis. The former, the subject of Part 3 of this series, makes a great deal of the shortcomings of intuitive thinking when applied to the climate change problem. In contrast, a more deliberative approach was advocated, drawing upon an array of tools designed to assist decision making under risk and uncertainty. These are the supposedly normative techniques, and they are the subject of section 2.5 of AR5, Chapter 2. However, upon reading section 2.5, it becomes apparent that the IPCC’s division between flawed intuition on the one hand and normative deliberation on the other is a misleading one. In practice, most of the intuitive errors can be made within the context of employing one or more of the deliberative, and supposedly normative, tools.

After reading section 2.5 of AR5, Chapter 2, I have to admit that I had difficulty determining the motive behind its writing. It comes across as a student dissertation, and not a terribly good one at that (remember, it was written by the same people who, in section 2.4, were unable to correctly distinguish between risk aversion and uncertainty aversion). Far from a presentation covering cutting-edge developments in decision analysis, it is a rambling, flawed and incomplete account that could have been compiled by anyone with access to Wikipedia. There is nothing particularly groundbreaking in it and there is plenty that is plain wrong. If there is a discernible purpose to the account, it seems to be to introduce each of the deliberative methods available, before highlighting their susceptibility to intuitive reasoning (implying limited application to the climate change problem). It then finally settles upon the IPCC’s deliberative tool of choice, i.e. the precautionary principle. Add to this an appreciative account of climate model ensembles and the virtues of structured expert judgement, and you have a reasonably accurate description of the IPCC’s conception of normative decision making.

Be that as it may, whilst the IPCC may deem the precautionary approach to be the most appropriate for the climate change problem, it is, if anything, the most intuitive and least deliberative of the normative approaches on offer in section 2.5. It also just so happens to be the one that comes closest to vindicating the declaration of a climate change emergency, so one cannot be too surprised to see it promoted as normative.

Confusion Reigns

One of the main difficulties and confusions to be found in section 2.5 stems from its incorrect use of terminology. The section itself is titled ‘Tools and Decision Aids for Analysing Uncertainty and Risk’. A more succinct title might have been ‘Decision Analysis’, since this is the term universally recognised as referring to the application of formal and systematic analysis in support of decision making under uncertainty and risk. Decision analysis covers a wide range of analytical methods, embracing, for example, expected utility (E(U)) theory, decision tree analysis, multi-criteria decision analysis (MCDA), cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA). The IPCC, however, despite covering most of the above, chooses not to use the term ‘decision analysis’ as the collective term, preferring instead to use it in reference to yet another analytical technique which, when detailed, appears to be indistinguishable from the previously described E(U) theory! At the same time, fundamentally important techniques such as decision trees and risk influence diagrams do not get a mention. For a group of experts trying to give the impression that they know all there is to know about the application of deliberative decision-making techniques, they don’t do a particularly good job.
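For readers unfamiliar with the jargon, the E(U) calculation at the heart of classical decision analysis is simple enough to show in a few lines. The following is a minimal sketch; the option names, probabilities and utilities are my own inventions, chosen purely for illustration, and are not figures taken from AR5:

```python
# Minimal sketch of an expected utility (E(U)) comparison.
# All option names, probabilities and utilities are illustrative.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

options = {
    "mitigate":   [(0.7, -2.0), (0.3, -1.0)],
    "adapt":      [(0.7, -3.0), (0.3, -0.5)],
    "do_nothing": [(0.7, -6.0), (0.3,  0.0)],
}

for name, outcomes in options.items():
    print(f"{name:10s} E(U) = {expected_utility(outcomes):+.2f}")
print("E(U)-optimal choice:",
      max(options, key=lambda k: expected_utility(options[k])))
```

The whole approach stands or falls on having defensible numbers to put in that table, which is precisely the difficulty the rest of this section turns upon.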

Deliberation Under Uncertainty

It is difficult to escape the conclusion that this is a subject that the authors of Chapter 2 have read about, and perhaps even studied as academics, but have never earned a living from implementing. Even so, credit should be given for the fact that a number of the widely recognised limitations of the various methods of decision analysis are properly identified by the IPCC document. For example:

“At the same time, the limitations of E(U) must be clearly understood, as the procedures for determining an optimal choice do not capture the full range of information about outcomes and their risks and uncertainties.”  
“In the standard E(U) model, each individual has his / her own subjective probability estimates. When there is uncertainty on the scientific evidence, experts’ probability estimates may diverge from each other, sometimes significantly.”  
“For example, the uncertainty surrounding the potential impacts of climate change, including possible irreversible and catastrophic effects on ecosystems, and their asymmetric distribution around the planet, suggests CBA may be inappropriate for assessing optimal responses to climate change in these circumstances.”  
“A strong and recurrent argument against CBA (Azar and Lindgren, 2003; Tol, 2003; Weitzman, 2009, 2011) relates to its failure in dealing with infinite (negative) expected utilities arising from low-probability catastrophic events often referred to as ‘fat tails’. In these situations, CBA is unable to produce meaningful results, and thus more robust techniques are required.”  

These problems all basically boil down to the fact that any method that evaluates risk in order to identify the correct course of action cannot do so if there is insufficient information to reliably determine the probabilities involved. Furthermore, such decisions are often made under uncertainty rather than risk. This is particularly problematic when dealing with low-probability, high-consequence events, in which avoidance of the worst-case scenario imagined becomes the prime objective. But what does the IPCC have in mind when it says that ‘more robust techniques are required’?
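The Weitzman ‘fat tails’ objection quoted above can be made concrete with a toy simulation. For a fat-tailed loss distribution (here a Pareto with tail index below one, my own illustrative choice) the expected loss is infinite, so the sample mean never settles down and the expectation that CBA relies upon simply does not exist:

```python
# Toy illustration of the 'fat tails' problem: for a Pareto loss with
# tail index alpha <= 1 the expected loss is infinite, so the sample
# mean wanders upward instead of converging, and a CBA built on the
# expectation has nothing to work with. All parameters are illustrative.
import random

def pareto_draw(alpha, x_min=1.0):
    """Inverse-CDF sample from a Pareto(alpha) loss distribution."""
    return x_min / random.random() ** (1.0 / alpha)

random.seed(1)
for alpha in (3.0, 0.9):                 # finite mean vs infinite mean
    n, total = 0, 0.0
    for checkpoint in (10**3, 10**5, 10**6):
        while n < checkpoint:
            total += pareto_draw(alpha)
            n += 1
        print(f"alpha={alpha}: mean of {n:>7} draws = {total / n:,.2f}")
```

With alpha = 3.0 the running mean settles near 1.5; with alpha = 0.9 it just keeps climbing as ever larger losses are drawn.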

The Precautionary Principle to the Rescue

Having recognised that decision analysis depends upon relatively complete information in order that risky alternatives may be evaluated, the IPCC then offers a climate-science-friendly solution to the problem:

“The precautionary principle allows policymakers to ban products or substances in situations where there is the possibility of their causing harm and / or where extensive scientific knowledge on their risks is lacking.”  

A lot has already been said about the precautionary principle and I do not wish to add too much here. The IPCC clearly felt the same way, and so it deemed it sufficient in section 2.5 to simply point out that:

“An influential statement of the precautionary principle with respect to climate change is principle 15 of the 1992 Rio Declaration on Environment and Development: “where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”  

After that bold declaration, all that remained in section 2.5 was to indicate how the precautionary principle could be seen as the ‘more robust technique’ called for in order to address the limitations of CBA, etc. The IPCC therefore makes the following connection:

“Robust decision making (RDM) is a particular set of methods developed over the last decade to address the precautionary principle in a systematic manner.”  

‘Robust’ is such a lovely word, and who wouldn’t want it to apply to their favourite approach? Unfortunately, there is nothing robust about the precautionary principle, and RDM was most certainly not developed for its benefit; the ‘R’ in RDM does not stand for ‘precautionary’. In fact, the RDM approach is precautionary in only one sense: it enables the decision maker to construct a strategy that keeps options open for as long as possible, thereby minimising the regret function. Where standard decision analytics seek to optimise between a number of options, the RDM approach turns this on its head and seeks to satisfice. Nevertheless, RDM is still an approach in which alternatives are considered and compared, albeit in the context of deep uncertainty. The precautionary principle, on the other hand, is the focusing effect writ large.
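The distinction is worth a toy example. Real RDM exercises explore thousands of computer-generated futures, but the underlying regret logic can be shown with an invented three-by-three payoff table:

```python
# Toy minimax-regret calculation, the regret logic that RDM builds upon.
# Strategies, futures and payoffs are illustrative inventions; a real RDM
# exercise explores thousands of generated futures, not a 3x3 table.

payoff = {                       # payoff[strategy][future]
    "hedge":      {"mild": 6, "median": 6, "severe": 5},
    "aggressive": {"mild": 2, "median": 7, "severe": 8},
    "wait":       {"mild": 9, "median": 4, "severe": 0},
}
futures = ["mild", "median", "severe"]
best_in_future = {f: max(payoff[s][f] for s in payoff) for f in futures}

def max_regret(strategy):
    """Worst-case shortfall versus the best choice in each future."""
    return max(best_in_future[f] - payoff[strategy][f] for f in futures)

for s in payoff:
    print(f"{s:10s} max regret = {max_regret(s)}")
print("Minimax-regret (robust) strategy:", min(payoff, key=max_regret))
```

Note that ‘hedge’ wins without being the best choice in any single future; that is the sense in which RDM satisfices rather than optimises, and it is still a comparison of alternatives, not a blanket injunction to ban things.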

RDM was not developed to ‘address the precautionary principle in a systematic manner’ – indeed, the RDM paper cited by the IPCC fails to mention the precautionary principle even once. Instead, RDM approaches decision analysis in a manner that addresses deep uncertainty, thereby providing a plausible alternative to the precautionary principle. What RDM says is that you don’t have to resort to the precautionary principle when confronted with deep uncertainty, and that if you do, it may lead to regret. So unless by ‘to address the precautionary principle in a systematic manner’ the IPCC means ‘to highlight the shortcomings of the precautionary principle’, I have to say that, once again, it has got its basic facts wrong.

From Decision Analysis to Uncertainty Analysis

Having somewhat mauled the subject of decision analysis, section 2.5 of AR5 Chapter 2 then turns its attention to the subject of uncertainty analysis. Once again, I am left wondering what the IPCC’s motives for doing so were, other than to take the opportunity to extol the virtues of structured expert judgement and climate model ensembles. Uncertainty is a difficult concept to tackle, and a comprehensive treatment of the philosophical difficulties involved would have seemed appropriate. But this seems to have been either beyond the authors’ ability or outside their area of interest. Instead, the IPCC appears content with the assertion that uncertainty is not the sceptics’ friend, before leaving it at that. In fact, according to Lewandowsky, uncertainty is ‘actionable knowledge’. So anything that establishes a high level of uncertainty, whilst doing nothing about it, has to be viewed approvingly by the climate activist.

Uncertainty as Opinion

Structured expert judgement has been around for some time (it originated in the nuclear power industry) and, as the IPCC enthuses, it is gaining employment in many sectors:

“As attested by a number of governmental guidelines, structured expert judgment is increasingly accepted as quality science that is applicable when other methods are unavailable (U.S. Environmental Protection Agency, 2005).”

The IPCC then eagerly draws attention to how such ‘quality science’ has found application in climate science:

“Structured expert judgments of climate scientists were recently used to quantify uncertainty in the ice sheet contribution to sea level rise, revealing that experts’ uncertainty regarding the 2100 contribution to sea level rise from ice sheets increased between 2010 and 2012 (Bamber and Aspinall, 2013).”  

This is all looking very good for the climate activist and very bad for the sceptic. Basically, the more experts are thrown at a problem, the worse things seem. But therein lies the key problem with structured expert judgement. Whilst it may be better to place more credence in those experts who have a better track record of estimating uncertainty (and that, when all is said and done, is all there is to it), one cannot escape the fact that structured expert judgement is just the science of consensus measurement. As such, it is more about modelling opinion than it is about modelling a physical system. It isn’t aleatory uncertainty that is being measured but the epistemic uncertainty regarding aleatory uncertainty. This is worth keeping in mind before one gets too excited about contributions to sea level rise.
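For those unfamiliar with the mechanics: structured expert judgement in the Cooke tradition weights each expert by calibration performance on seed questions with known answers, then pools their distributions. The following is a deliberately crude sketch (a simple weighted pool of invented quantile estimates, rather than Cooke’s proper pooling of densities), but it conveys why the output is a model of opinion rather than of ice sheets:

```python
# Crude performance-weighted opinion pool in the spirit of structured
# expert judgement. The calibration weights and the experts' quantile
# estimates (for, say, an ice sheet contribution to sea level rise, in
# cm) are invented for illustration; a real Cooke-style study derives
# weights from seed questions and pools densities, not quantiles.

experts = {
    # name: (calibration_weight, (5th, 50th, 95th) percentiles in cm)
    "A": (0.55, (5.0, 15.0, 40.0)),
    "B": (0.35, (2.0, 10.0, 25.0)),
    "C": (0.10, (10.0, 30.0, 90.0)),
}

total_weight = sum(w for w, _ in experts.values())
pooled = [sum(w * q[i] for w, q in experts.values()) / total_weight
          for i in range(3)]
print("Pooled 5th/50th/95th percentiles: "
      f"{pooled[0]:.1f} / {pooled[1]:.1f} / {pooled[2]:.1f} cm")
```

Change the panel of experts, or the weights, and the ‘uncertainty’ changes with them; the physical system, of course, does not.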

Scenarios and Climate Model Ensembles

The IPCC concludes its survey of normative decision making by outlining the theory behind Representative Concentration Pathways (RCPs) and climate model ensembles. This is pretty standard stuff and so I won’t waste any time going into the detail. However, it would be remiss of me not to point out that this is another of the areas where credit is actually due to the authors. It would have been easy for them to have presented the uncertainty analysis afforded by such scenarios and ensembles as being straightforward and conclusive, but they don’t. Despite having failed to explain the limitations of probabilistic approaches, despite having failed to mention alternatives such as possibility theory, despite having failed to clarify the importance of distinguishing between the epistemic and aleatory components of uncertainty before attempting its propagation, the IPCC is still commendably candid when it comes to the limitations of the scenario and ensemble approach. In its own words:

“On the downside, it is easy to read more into these analyses than is justified. Analysts often forget that scenarios are illustrative possible futures along a continuum. They tend to use one of those scenarios in a deterministic fashion without recognizing that they have a low probability of occurrence and are only one of many possible outcomes. The use of probabilistic language in describing the swaths of scenarios (such as standard deviations in Figure 2.4) may also encourage the misunderstandings that these represent science-based ranges of confidence.”  

Actually, I don’t think I could have put it any better myself. And this limitation is such a shame, given how important a probabilistic interpretation of ensemble output is to extreme weather event attribution. Epistemic uncertainty is key here, and one should never forget the extent to which the validity of extreme weather attribution depends upon the uncertainties contained in the models upon which it is based.
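To see why, consider the standard attribution metric, the fraction of attributable risk, FAR = 1 − P0/P1, where P0 is the modelled probability of the event in a counterfactual world without anthropogenic forcing and P1 is its probability in the factual world. A toy calculation with invented probabilities shows how the epistemic spread across an ensemble translates directly into a spread in the attribution statement:

```python
# Sketch of how epistemic uncertainty propagates into attribution.
# FAR = 1 - P0/P1, with P0 the event probability in a counterfactual
# (no anthropogenic forcing) world and P1 the probability in the
# factual world, both estimated from model ensembles. The per-member
# probabilities below are invented, and pairing the two worlds
# member-by-member is a simplification for illustration only.

p0_members = [0.010, 0.020, 0.005, 0.015]   # counterfactual world
p1_members = [0.030, 0.040, 0.020, 0.060]   # factual world

fars = sorted(1.0 - p0 / p1 for p0, p1 in zip(p0_members, p1_members))
print("FAR by ensemble member:", ", ".join(f"{f:.2f}" for f in fars))
print(f"FAR spread: {fars[0]:.2f} to {fars[-1]:.2f}")
```

If the per-model probabilities carry that much epistemic uncertainty, so does any headline claim that an event was made so-many times more likely by climate change.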

And Finally, the Elephant not in the Room

In surveying the tools and methods available to the deliberative decision maker, the IPCC is guilty of a number of significant omissions, some of which I have already mentioned. It would be unfair, however, to have expected everything to be covered. Even so, there is one glaring omission that is particularly noteworthy. Given how much was made in section 2.4 regarding the intuitive thinker’s flawed evaluation of weather events, one would have thought that the IPCC would take particular care to mention that a deliberative approach to the evaluation of extreme weather risk had already been developed in the guise of Detection and Attribution (D&A). It’s not as if AR5 fails to cover D&A, since a whole chapter is devoted to the subject (Chapter 10). Section 10.6 of that chapter, in particular, describes how D&A techniques have been used to address extreme weather event attribution. Nevertheless, when it mattered, those AR5 authors tasked with discussing risk perception and how it should be managed failed to make that crucial connection in their own section.

Could it be that back in 2014, when AR5 was written, the IPCC was still more concerned with the precautionary principle and the promotion of the plausibility of future disaster scenarios? Was it still more concerned with establishing the collective credibility of the experts that formulated them? Is this an indication that the value of using D&A to create a narrative of emergency based upon present-day impact had not yet taken root? Indeed, had the IPCC produced the manifesto for establishing a climate emergency policy without even fully realising it at the time?

To explore that intriguing possibility one has to turn one’s attention to the final element of the IPCC’s risk management framework, in which the impact of uncertainty on the formulation of climate change policy is discussed. Accordingly, I shall cover the subject in the next, and final, article within this series.
