
From Watts Up With That?
[More crap from ideologically captured social scientists who couldn’t tell you the atomic weight of a Hydrogen atom. ~cr]
Reasoning about climate change
Bence Bago, David G Rand, Gordon Pennycook
PNAS Nexus, Volume 2, Issue 5, May 2023, pgad100,
https://doi.org/10.1093/pnasnexus/pgad100
Published: 02 May 2023
Abstract
Why is disbelief in anthropogenic climate change common despite broad scientific consensus to the contrary? A widely held explanation involves politically motivated (system 2) reasoning: Rather than helping uncover the truth, people use their reasoning abilities to protect their partisan identities and reject beliefs that threaten those identities. Despite the popularity of this account, the evidence supporting it (i) does not account for the fact that partisanship is confounded with prior beliefs about the world and (ii) is entirely correlational with respect to the effect of reasoning. Here, we address these shortcomings by (i) measuring prior beliefs and (ii) experimentally manipulating participants’ extent of reasoning using cognitive load and time pressure while they evaluate arguments for or against anthropogenic global warming. The results provide no support for the politically motivated system 2 reasoning account over other accounts: Engaging in more reasoning led people to have greater coherence between judgments and their prior beliefs about climate change—a process that can be consistent with rational (unbiased) Bayesian reasoning—and did not exacerbate the impact of partisanship once prior beliefs are accounted for.
Issue Section: Psychological and Cognitive Sciences
Editor: Michele Gelfand
Significance Statement
It is commonly argued that reasoning exacerbates political bias via identity-protective cognition. This theoretical account has had a particular influence on the explanation of partisan differences in the context of global warming. According to this account, people exert mental effort to defend their political identities by disputing identity-inconsistent information. However, our results provide no support for this account over other accounts. Beyond raising theoretical questions about how people reason about climate change, our findings suggest a potential alternative pathway for addressing climate skepticism. Instead of focusing on interventions that try to decrease the salience of partisanship when communicating about science, interventions aimed at providing accurate information about climate change may be effective in the long run.
Introduction
Skepticism about climate change and its human origins represents a major impediment to the adoption of climate change mitigation policies (1–3). One of the most commonly cited reasons for climate change denial is political partisanship or ideology (4). In the United States, for example, people on the political right are more likely to believe that climate change is a hoax or that it is not caused by human activities (2, 5–8). What is more, people with greater numerical ability and cognitive sophistication show more pronounced partisan differences in climate change beliefs, rather than greater agreement with the scientific consensus (9–13). That is, stronger cognitive ability appears not to protect against climate misperceptions but instead to bolster views that align with one’s political identity.
The most popular explanation of this result is provided by the motivated system 2 reasoning (MS2R) framework (11, 14–16). Motivated reasoning has been used in connection with a number of processes and motivations, but in this research, we specifically focus on political motivations, as they have been argued to be the primary drivers of climate change disbelief (11). This MS2R framework can be interpreted from the point of view of the dual-process perspective (17–19), which distinguishes between two types of reasoning processes: intuition (system 1) and deliberation (system 2). While intuition is considered a low-effort, quick, automatic response to stimuli, deliberation is a more effortful, time-consuming process. The MS2R framework asserts that cognitive abilities are linked to greater polarization because deliberation facilitates politically motivated reasoning: When faced with new evidence, engaging in deliberation better allows one to discredit the evidence if it is not congenial to one’s identity and partisan commitments (and vice versa when it is congenial). As a result, there are large partisan differences in what evidence is deemed credible, eventually leading to substantial polarization in beliefs. In the language of dual-process theory, deliberative reasoning processes are triggered to rationalize or justify identity-consistent intuitive impulses. In the context of climate change, this would mean that deliberation leads Republicans to reject evidence in favor of climate change (to protect their partisan identity), while deliberation leads Democrats to reject evidence questioning climate change (10, 11, 20–23). If more cognitively sophisticated people engage in more deliberation, they will be better at aligning their judgments of evidence about climate change with their respective political identities.
This theory has enormous practical importance because, if it is true, common strategies such as educating people or making them more reflective will not be effective against climate change denial. In fact, such strategies will only serve to increase partisan differences (10, 23, 24) (although there is evidence questioning this assumption (25–27)). Furthermore, from a theoretical perspective, this “MS2R” account stands in stark contrast to a common dual-process perspective—the “classical reasoning” view—whereby system 2 reasoning is thought to typically facilitate accuracy in a variety of decision-making tasks (18, 28, 29). Put differently, the classical reasoning account posits that when people engage in deliberation, they tend to form more accurate beliefs, regardless of the partisan or identity alignment of the propositions that they are deliberating about (29, 30).
However, there are two serious limitations of the prior empirical research in this area. First, political identity is correlated with—but meaningfully separable from—people’s prior beliefs about climate change (31). In particular, Democrats are much more likely than Republicans to believe that climate change is caused by human activity. Yet, many Republicans do believe in anthropogenic climate change, and some Democrats do not, meaning that partisanship and priors are meaningfully distinct constructs. For example, a recent Pew survey found that 53% of conservative Republicans believe that human activity contributes to global warming to at least some degree, while 8% of moderate Democrats think that it does not (5). Yet most studies claiming to provide evidence of politically motivated reasoning have not measured these prior beliefs, which is highly problematic for making strong claims about politically motivated reasoning (31–33). Although partisanship might influence prior beliefs, many other factors also contribute to beliefs, such as whom people judge to be trustworthy, family environment, and life experiences (12), and prior beliefs may also influence partisanship. Thus, effects driven by prior beliefs do not provide positive evidence in support of politically motivated reasoning.
Indeed, recent correlational work finds that controlling for prior beliefs related to climate change nullifies the correlation between cognitive sophistication and partisan bias; instead, higher cognitive reflection was associated with placing greater emphasis on prior beliefs when evaluating new information (31). While evaluating new evidence in light of prior beliefs is sometimes called “confirmation bias” and can be a vehicle for politically motivated reasoning insofar as political identities influence prior beliefs, it is also possible that such evaluation can be entirely rational and unbiased from a Bayesian perspective when there is uncertainty about the reliability of sources (34–38). When considering evidence that is inconsistent with your prior beliefs, it can be rational to conclude that it is more likely that the information source is unreliable than it would be to take the stance that everything that (or much of what) you know about a topic is wrong. It is therefore essential to account for prior beliefs when attempting to test for politically motivated reasoning. Any relationships with identity that are not robust to controlling for prior beliefs do not provide positive evidence for politically motivated reasoning because they can be consistent with either political or accuracy motivations. Indeed, distinguishing the effects of prior beliefs and partisanship is important and common in the literature, even among proponents of the MS2R account, best described by Kahan (39): “Under [motivated system 2 reasoning], the signature feature of this form of information processing is the opportunistic adjustment of the weight assigned evidence conditional on its conformity to positions associated with membership in identity-defining affinity groups. In Bayesian terms, there is an endogenous relationship between the likelihood ratio and a person’s political predispositions. It is this entanglement that distinguishes politically motivated reasoning from a normative conception of Bayesian information processing, in which the weight (likelihood ratio) assigned evidence is determined on the basis of valid, truth-seeking criteria independent of an individual’s cultural identity. [Motivated system 2 reasoning] also distinguishes politically motivated reasoning from cognitively biased forms of information processing in which the likelihood ratio is endogenous to some non-truth-seeking influence other than identity protection, such as an individual’s priors in the case of confirmation bias,” although the effects of prior beliefs and partisanship have not been sufficiently investigated empirically in the context of the apparent role of deliberation (39, 40).
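To make the Bayesian point concrete, here is a minimal numerical sketch (not from the paper; the priors, the 0.9 reliability figure, and the function name are invented for illustration) of how a reader who is uncertain about a source's reliability can rationally discount a report that contradicts a strong prior:

```python
# Illustrative sketch only -- not the paper's model; all numbers are invented.
# A reader holds a prior on a claim H and is uncertain whether a source is
# reliable. A reliable source reports the truth 90% of the time; an unreliable
# source reports either way at random. After a report of "not H", an unbiased
# joint Bayesian update shifts blame toward the source rather than the prior.

def update_on_disconfirming_report(prior_h=0.9, prior_reliable=0.5,
                                    p_truth_if_reliable=0.9):
    """Joint update over (claim H, source reliability) given a report of 'not H'."""
    # P(report = 'not H' | H is true/false, source reliable/unreliable)
    p_report = {
        (True, True): 1 - p_truth_if_reliable,   # reliable source errs
        (False, True): p_truth_if_reliable,      # reliable source is correct
        (True, False): 0.5,                      # unreliable source: coin flip
        (False, False): 0.5,
    }

    joint = {}
    for h in (True, False):
        for rel in (True, False):
            p_h = prior_h if h else 1 - prior_h
            p_r = prior_reliable if rel else 1 - prior_reliable
            joint[(h, rel)] = p_h * p_r * p_report[(h, rel)]

    evidence = sum(joint.values())
    posterior_h = (joint[(True, True)] + joint[(True, False)]) / evidence
    posterior_reliable = (joint[(True, True)] + joint[(False, True)]) / evidence
    return posterior_h, posterior_reliable

post_h, post_rel = update_on_disconfirming_report()
print(f"P(H) moves from 0.90 to {post_h:.2f}")                 # ~0.79
print(f"P(source reliable) moves from 0.50 to {post_rel:.2f}")  # ~0.26
```

With these made-up numbers, one disconfirming report lowers confidence in the claim only modestly (0.90 to 0.79) while sharply lowering confidence in the source (0.50 to 0.26), which is exactly the kind of prior-driven evaluation that can look like "confirmation bias" yet requires no identity-protective motivation.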
Second, past research on MS2R has relied upon correlating individual differences in cognitive sophistication (e.g. cognitive reflection, numeracy, and education) with the extent of partisan differences on politicized issues (9, 11, 41). Although it is generally thought that people scoring higher on cognitive sophistication scales are better at deliberation than people scoring lower on these scales, they also tend to differ in many other aspects. For example, they tend to generate different intuitions on many reasoning tasks (i.e. people who are more cognitively sophisticated also have different prior beliefs and knowledge than those who score lower (42, 43)). Thus, because this approach is correlational, it does not allow for the direct identification of causal effects of deliberation.
Current research
In the current research, we address both of these limitations. First, we provide a causal test of the role of intuition and deliberation in how people evaluate pro- and contra-climate change arguments by forcing some participants to make judgments under cognitive load and time pressure. Second, we measure prior beliefs about climate change by asking how serious a risk participants believe climate change to be and how much they agree that human activity causes it.
This paradigm allows us to shed new light on competing accounts of the role of deliberation in argument evaluation surrounding climate change: Does deliberation magnify partisan bias, consistent with the MS2R framework (11)? Or does it facilitate accurate assessments, consistent with a more classical perspective on reasoning (30, 31, 37)?
Furthermore, we specify a third alternative. Previous research (e.g. studying blatantly false political news posts (30)) has argued that the classical reasoning approach simply predicts that more deliberation will lead to increased objective accuracy, defined here as holding a position more consistent with the scientific consensus on climate change. However, most people do not actually have direct access to the information needed to know the objectively accurate answer, particularly in the context of complicated technical issues like climate change. Thus, the classical reasoning account would not necessarily predict that deliberation leads to more objectively accurate views. Instead, accuracy-motivated deliberation may lead to improved coherence between one’s existing directly relevant beliefs and the stimuli being presented. That is, deliberation may increase the extent to which one evaluates whether new information makes sense in light of the relevant beliefs/knowledge that one has developed based on previous information that one has encountered (a process that, as discussed above, can be consistent with unbiased, rational Bayesian updating (34–37)). In this case, deliberation should magnify differences based on prior beliefs. As a result, finding that deliberation increases coherence with prior beliefs could be consistent with either a motivated or rational account.
In our experiments, we asked participants to indicate how much they agreed with politically neutral arguments about climate change (meaning that there were no references in them to specific policies or to politics in any way). These arguments were taken from “procon.org,” a website that collects arguments that were made in real life about several different topics. Arguments were content counter-balanced, such that for each statement, we created a pro and contra version, one of which was randomly assigned to a given participant; participants never saw both the pro and contra versions of the same argument. Altogether, they were presented with six arguments (half contra and half pro). Table 1 shows the pro and contra versions of an example item from our experiment (for a complete set of statements, see Table S1).
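As a rough illustration of the counterbalancing logic described above (the item labels and function name are hypothetical stand-ins, not the paper's actual materials; the one-version-per-statement and half-pro/half-contra constraints follow the description in the text):

```python
import random

# Hypothetical sketch of content counterbalancing: each of six argument "stems"
# has a pro and a contra version; every participant sees exactly one version of
# each stem, with three pro and three contra items overall.
ITEMS = [f"argument_{i}" for i in range(1, 7)]  # placeholder stems, not the real statements

def assign_versions(participant_id, seed=None):
    rng = random.Random(seed if seed is not None else participant_id)
    stems = ITEMS[:]
    rng.shuffle(stems)
    # The first three shuffled stems are shown in their pro version, the rest contra.
    return {stem: ("pro" if i < 3 else "contra") for i, stem in enumerate(stems)}

print(assign_versions(participant_id=42))
# e.g. {'argument_5': 'pro', 'argument_2': 'pro', ..., 'argument_1': 'contra'}
```

Under this scheme each participant sees all six statements, never both versions of the same one, with the pro/contra framing varying randomly across participants.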
You can read the rest of the article explaining why and how your reasoning is deficient and that you are an emotionally manipulated troglodyte here.
Have fun discussing this tripe.