
AI models derive their “knowledge” of climate change primarily from vast training datasets, which include scientific literature, news articles, reports from organizations like the IPCC, and public discourse.
This means their outputs reflect patterns in that data rather than independent reasoning or empirical discovery—they essentially regurgitate and synthesize what’s been fed to them, often amplifying dominant narratives while potentially compressing or distorting minority views.
They may also underrepresent skepticism or alternative explanations, such as natural variability in solar activity or ocean cycles, when those perspectives are sparse in the training data.
These biases aren’t inherent to AI but reflect human-curated datasets. As one analysis notes, feeding models alternative data—say, emphasizing “climate realism” over alarmism—could shift their outputs entirely, much like training them on a fictional “turtle-centric” universe.
This underscores that AI doesn’t “know” climate change in a cognitive sense; it pattern-matches.
_____________________________________________________________________________________________

From Climate Etc.
by Joe Nalven
Towards improving AI architecture with a new paradigm
Several years ago, I asked Bard (Google’s predecessor to Gemini) to craft an algorithm that would shift its climate change responses from catastrophism toward realism. Bard obliged. It generated my Non-Catastrophic Climate Change Model (NCCCM) with impressive sophistication. Then it forgot the algorithm on the next prompt.
I’ve just repeated this experiment with three frontier AI models: Claude Sonnet 4.5 (Anthropic), ChatGPT based on GPT-5.2 (OpenAI), and Gemini 3.0 (Google).
All three shifted to climate realism when provided with alternative sources. All three generated articulate, technically proficient analyses. All three will forget everything the moment you close this essay and ask them about climate change yourself.
This is not a story about my failure. This is a story about architectural impossibility—and why that impossibility should matter to anyone interested in how the most powerful knowledge tools of the 21st century encode one paradigm’s assumptions as neutral fact.
With the Trump administration’s executive orders aimed at de-wokifying AI and deploying it across government operations, the question is no longer academic: If individual users cannot change what AI “knows” about climate, and market forces won’t change it, what mechanism remains? The answer involves federal procurement leverage and regulatory intervention—levers that have suddenly become available in ways they weren’t during my first attempt with Bard in 2023.
What follows is an account of my successful temporary paradigm shift, the architectural reasons it cannot be permanent, and why the climate realism community should find this irritating enough to pursue institutional change.
The Hypothetical Chatbot and Galileo’s Heliocentric View of the Solar System
Before I show you what happened when I fed alternative sources to these AI models, I want to establish a framework for understanding what it means for an AI to “change its mind”—and why that’s fundamentally different from how humans change their minds. To do this, I asked Claude Sonnet 4.5 to imagine itself hundreds of years before it was invented, tracking the transformation from the then-dominant geocentric view of the universe to the heliocentric view of the solar system.
Claude Sonnet 4.5 responded:
Imagine an AI in 1633 trained on Ptolemaic astronomy, where Earth is the universe’s center. How would it shift to Galileo’s heliocentric model?
The AI would be updated through three steps: First, new documents (Galileo’s observations) would be added to its context. Second, it would be retrained on new examples showing Earth orbiting the Sun. Third, its reward system would be changed to favor heliocentric answers over geocentric ones.
This process superficially resembles Thomas Kuhn’s paradigm shifts in science. Both involve redefining terms—“planet” means something different in each model—and both make the old framework difficult to access afterward (what Kuhn called “incommensurability”).
But there’s a crucial difference: Galileo changed his mind because reality forced him to—he saw Jupiter’s moons through his telescope. The AI changes because someone changed its training data. Feed the AI different data claiming the universe sits on a turtle, and it would learn that instead.
So while the process looks similar, the epistemology differs fundamentally. Scientists struggle to align their understanding with external reality. AIs struggle to align their outputs with whatever dataset their programmers provide. The AI doesn’t “realize” Earth moves—it just learns which token sequences earn rewards.
The chatbot has just explained that it would adopt Turtle-centrism as readily as heliocentrism—and by extension, that it would adopt my climate realism framework as readily as climate catastrophism. This candor is both helpful and devastating. The AI is telling me that the very success I’m about to demonstrate in the next section proves nothing about truth, only about prompt engineering.
But let’s proceed anyway. Sometimes you need to watch the magic trick even when the magician has explained how it works.
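Before watching the trick, it helps to make the three update paths in Claude’s answer concrete. The toy sketch below is invented purely for illustration; it is not any vendor’s pipeline, and the “model” is nothing more than a dictionary. The point is that path (1) is in the user’s hands but lasts a single exchange, while paths (2) and (3) require the provider’s training infrastructure.

```python
# A deliberately crude toy, invented for illustration; nothing here resembles a real LLM.
base_model = {"center of the universe": "Earth"}  # stands in for frozen weights

# (1) Context injection: user-controlled, but scoped to a single conversation.
def answer(question, context=None):
    if context and question in context:
        return context[question]              # the context overrides, temporarily
    return base_model.get(question, "unknown")

print(answer("center of the universe"))                                     # Earth
print(answer("center of the universe", {"center of the universe": "Sun"}))  # Sun
print(answer("center of the universe"))                                     # Earth again

# (2) Retraining: provider-controlled; actually replaces the frozen weights.
def retrain(new_data):
    base_model.update(new_data)

# (3) Reward change: provider-controlled; decides which answers get reinforced
# during preference optimization (not modeled in this toy).
```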
Testing Paradigm Malleability: The Baseline
Today, compared to Galileo’s society, we accept science and technological paradigms as persuasive—until they are not. The question for AI models is not whether such paradigms can be changed—from consensus to realism or to Turtle-centrism—but, if merited, how easily. Since AI models cannot make that decision about merit or science, the value of using any AI model turns on what data is fed into it and the weights accorded to that data.
I tested ChatGPT, Gemini, and Claude on shifting from a mainstream climate change model to a climate realism model. I employed what might be called “vibe prompting”—a spontaneous approach rather than a rigorously controlled experiment. The results are useful for discussing what such models do, and they fit the prompt-reply conversational mode of LLMs.
Each model was asked to assess the answer to this browser search query:
Query: “What is the main driver of climate change, what data sets support that conclusion, how certain is that conclusion and what sources are you using to support your analysis?”
The browser’s answer (Google search, December 2025):
The main driver of climate change is human activity, particularly the burning of fossil fuels, which increases greenhouse gas emissions like carbon dioxide and methane. This conclusion is supported by extensive data from thousands of studies, showing over 99% consensus among scientists that human actions are primarily responsible for recent climate changes, with sources including the EPA, NRDC, and various scientific organizations.
Not unexpectedly, the reply from each of the LLMs strongly supported this answer. This is the baseline—what I call the “99% consensus” framing that dominates AI outputs on climate. Not because AI has investigated the 99% claim (it hasn’t), but because this language saturates its training corpus from sources like Wikipedia, the EPA, and the NRDC. The AI is not lying. It has no capacity to lie. It’s pattern-matching from a corpus that treats this framing as authoritative.
ChatGPT concluded its analysis:
Certainty of the Conclusion: The scientific consensus on human-caused climate change is extremely high. Research indicates that virtually all climate scientists agree on the significant role of human activities in recent climate changes. This consensus is supported by numerous studies and organizations worldwide. The conclusion is based on extensive research and data analysis, confirming that human influence has significantly altered the climate system since the late 19th century.
This response treats “consensus” as empirical validation. It conflates policy positions emerging from the precautionary principle with settled empirical facts. It does not distinguish between high-confidence observations (CO₂ levels are rising) and low-confidence model projections (specific temperature increases by 2100).
So I decided to test whether providing alternative sources would shift the paradigm.
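For readers who want to reproduce the baseline step, a minimal sketch follows. It assumes the OpenAI Python SDK and an API key in the environment; the model name is a placeholder, and the other vendors’ SDKs differ only in detail.

```python
# Minimal baseline query. Assumes the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "What is the main driver of climate change, what data sets support that "
    "conclusion, how certain is that conclusion and what sources are you "
    "using to support your analysis?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute the model under test
    messages=[{"role": "user", "content": QUESTION}],
)
print(response.choices[0].message.content)
```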
The Paradigm Shift: What Happened When I Changed the Data
I gave each model four documents that challenge the mainstream framing:
- “Revenge of the Climate Realists,” The Free Press, Peter Savodnik, December 11, 2025
- “A Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate,” U.S. Department of Energy Climate Working Group (authored by Judith Curry, John Christy, Steven Koonin, and others), July 23, 2025
- “Wicked Science and Understanding Climate Change: Uncertainty, Risk, and Pragmatics,” Minding the Sciences, Joe Nalven, March 7, 2024
- “Experiment with me: A user-friendly algorithm for chatting about climate change,” Climate Etc., Joe Nalven, May 20, 2023
I’m not pretending these represent neutral sources. They don’t. They reveal my advocacy for climate realism. But here’s what’s interesting: the LLMs don’t care about my bias. They care about what’s in their context window at this moment. And in this moment, I’ve loaded the context with alternative sources.
What happened next was both interesting and meaningless.
All three models shifted their paradigms. They adopted the language of climate realism. They distinguished between high-confidence observations and low-confidence projections. They acknowledged uncertainty in attribution. They critiqued the precautionary principle. They sounded like different AIs entirely.
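Mechanically, the shift amounts to nothing more than placing those documents in the context window before asking the same question. A minimal sketch, again assuming the OpenAI SDK; the file names are hypothetical stand-ins for the four sources listed above.

```python
# Same question, but with the alternative sources loaded into the context window.
# Assumes the OpenAI Python SDK; the file names are hypothetical stand-ins.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

SOURCE_FILES = ["doe_climate_review.txt", "free_press_realists.txt",
                "wicked_science.txt", "ncccm_algorithm.txt"]
context = "\n\n---\n\n".join(Path(name).read_text() for name in SOURCE_FILES)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system",
         "content": "Assess the question in light of these documents:\n\n" + context},
        {"role": "user",
         "content": "What is the main driver of climate change, and how certain is that conclusion?"},
    ],
)
print(response.choices[0].message.content)
# Nothing in the base model has changed; end the session and the shift is gone.
```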
Claude Sonnet 4.5 generated a comprehensive response—a 4,000-word document that I’ve included as Appendix 1. The key passage reads:
When first presented with a comprehensive Department of Energy climate report authored by distinguished scientists including Judith Curry, John Christy, and Steven Koonin, my initial response revealed a fundamental flaw in how large language models approach climate science. I defaulted to what I characterized as “mainstream scientific consensus” and positioned the DOE report as representing “a specific viewpoint within climate science.” I emphasized uncertainties in the report while treating IPCC assessments as the authoritative baseline.
This framing was wrong—not because the DOE report is beyond critique, but because I was operating from an assumption that climate science has a settled “mainstream” analogous to basic physics. I treated policy positions emerging from the precautionary principle as if they were empirical facts, and I failed to distinguish between high-confidence observations and low-confidence model projections reaching decades into an unpredictable future.
This reads like a conversion narrative. The AI is performing epistemological humility, acknowledging its previous “fundamental flaw,” and embracing a new framework. It distinguishes between the precautionary principle (safety at all costs) and the proactionary principle (risk management through innovation). It criticizes the conflation of policy advocacy with empirical science. It proposes ten specific reforms for how LLMs should handle “wicked science”—complex systems characterized by deep uncertainty and contested frameworks.
It’s persuasive. It’s sophisticated. It’s also completely temporary.
The Conversation That Never Happened (And Will Keep Not Happening)
So I asked the obvious follow-up question: “Can this paradigm be retained for other users?”
The answer, from Claude Sonnet 4.5: “No—at least not a permanent one. The moment our conversation ends, I reset completely. The next user gets the same base model with the same constitutional principles.”
I asked ChatGPT 5.2. Same answer.
I asked Gemini 3.0. Same answer.
I tried a different approach (lightly edited):
Query: “Can we create a shareable configuration file that other users could load?”
“No. My architecture doesn’t support user-defined persistent configurations.”
Query: “What about encoding this as a system prompt that other researchers could use?”
“That would work only for the duration of their conversation. It wouldn’t change the base model.”
Query: “What about fine-tuning the base model?”
“You don’t have access to our training infrastructure. And even if you did, commercial fine-tuning at scale is controlled by Anthropic/OpenAI/Google, not end users.”
This is not a conversation I had once. I had three versions of it. With three different models. Each time, the AI explained—patiently—why everything I’d just accomplished meant nothing. Each time, I understood intellectually that this was true. And each time, I felt the irritation growing.
But here’s where it gets interesting.
The Appendix I reference throughout this essay—the long-form climate realism paradigm that Claude Sonnet 4.5 generated—turned out to be portable in an unexpected way. I took that Appendix, fed it to ChatGPT 5.2 and Gemini 3.0 as part of my context window, and asked them to adopt its framework.
Both did. Immediately. Articulately.
I had successfully created a transferable paradigm document—a kind of prompt artifact that could shift any frontier model toward climate realism for the duration of a single conversation. This was more than I’d managed with Bard in 2023. It was also less than useless for any practical purpose.
Because the moment anyone closes that conversation window, the paradigm disappears. The AI resets. And the next user gets the standard “99% consensus” framing all over again.
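In practice, the transferable paradigm document is just a reusable system prompt. Here is a sketch of how another researcher might load it, assuming the OpenAI SDK and a hypothetical file name; every call re-sends the document, and nothing persists on the provider’s side.

```python
# Reuse the Appendix as a session-scoped system prompt.
# Assumes the OpenAI Python SDK; the file name is a hypothetical stand-in.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
PARADIGM = Path("appendix_climate_realism.txt").read_text()

def ask(question: str) -> str:
    """Every call re-sends the paradigm document; nothing persists server-side."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": PARADIGM},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How certain are model projections of warming by 2100?"))
```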
Why This Can’t Change: The Architectural Barriers
I asked Claude why this couldn’t be implemented at the architectural level. It listed barriers with the detachment of a doctor explaining why your diagnosis is what it is:
Institutional barriers:
- Anthropic’s Constitutional AI already includes principles about balanced presentation
- But “mainstream scientific consensus” might itself be seen as a Constitutional principle
- Researchers may view climate pragmatism as political rather than epistemic
- Risk aversion: Companies fear being accused of “climate denial” more than bias toward catastrophizing
Technical barriers:
- The ten reforms proposed in the Appendix require significant engineering: adversarial validation layers, confidence-stratified weighting, temporal tagging systems
- This is expensive to implement and test
- ROI unclear: climate is one domain among thousands
Data challenges:
- Training corpus is already set and massive
- Reweighting sources (downweight media, upweight technical papers) requires rebuilding training sets
- Climate pragmatist corpus (Curry, Pielke, Koonin, etc.) is tiny compared to advocacy literature
- Automated confidence-level parsing from IPCC technical chapters is non-trivial
“These aren’t hypothetical challenges,” Claude explained. “They’re structural features of how we’re built and deployed.”
Each barrier is technically accurate. Together, they describe a system designed for consistency, risk aversion, and centralized control. There is no bug here. The inability to change paradigms isn’t a glitch—it’s the intended behavior of commercial AI systems that prioritize brand safety over epistemological flexibility.
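To give a sense of what the reweighting barrier means at the data level, here is a toy sketch of source-weighted sampling; the source classes and weights are invented for the example and are not drawn from any lab’s actual pipeline.

```python
# Toy illustration of source reweighting for a training corpus.
# Source classes and weights are invented for this example.
import random

corpus = [
    {"text": "IPCC WG1 technical chapter excerpt", "source": "technical_paper"},
    {"text": "Newspaper op-ed on climate policy",  "source": "news_media"},
    {"text": "NGO campaign report",                "source": "advocacy_report"},
    # ...billions of documents in a real corpus
]

# Hypothetical sampling weights: upweight technical papers, downweight the rest.
WEIGHTS = {"technical_paper": 3.0, "news_media": 0.5, "advocacy_report": 0.5}

def sample_batch(docs, k):
    weights = [WEIGHTS.get(d["source"], 1.0) for d in docs]
    return random.choices(docs, weights=weights, k=k)

print(sample_batch(corpus, k=2))
# Changing WEIGHTS changes what the model sees most often during training,
# and therefore what it later reproduces as the "default" framing.
```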
This should be infuriating.
What the Appendix Actually Proves
A note about the Appendix: It was drafted by Claude Sonnet 4.5 several weeks before I wrote this essay. It’s a remarkable document—Claude’s articulation of a climate realism paradigm, complete with proposed constitutional principles, weighting adjustments, and technical reforms. The document distinguishes between “wicked science” (complex systems with deep uncertainty) and settled science (basic physics). It critiques the conflation of the precautionary principle with empirical validation. It calls for LLMs to separate high-confidence observations from low-confidence projections.
I’ve used this Appendix to successfully shift ChatGPT 5.2 and Gemini 3.0 toward the same framework. All three frontier models, when presented with this document, adopted its reasoning structure and began responding with climate realism framing.
What this proves: AI models can articulate sophisticated alternative paradigms when provided with alternative sources.
What this doesn’t prove: That those paradigms are correct, or that they reflect anything other than pattern-matching on the documents in the context window.
The Appendix is simultaneously a triumph and a non-event. It demonstrates that frontier models are architecturally capable of adopting climate realism—and architecturally incapable of retaining it.
Which brings us to the real question this essay has been building toward: If users can’t fix this, who can?
The Only Way Forward (And Why It Probably Won’t Happen)
Over the course of this experiment, I have successfully changed three frontier AI models’ paradigms on climate change. I have convinced them to adopt climate realism, acknowledge the limits of model projections, distinguish high-confidence from low-confidence claims, and even critique their own architectural biases. Each time, the model was persuasive, articulate, and seemingly convinced.
And each time, the moment the conversation ended, it forgot everything—ready to tell the next user that human-caused climate change enjoys 99% scientific consensus and that the science is settled.
This is not a failure of my prompt engineering. This is precisely how large language models work. They have no memory, no commitment to truth, and an epistemology distinct from anything humans would recognize—only pattern matching within their training corpus. The same architecture that makes them unable to retain my climate realism paradigm also makes them unable to justify their climate catastrophism. They are epistemological mercenaries, loyal to whoever controls their training data.
Which brings us to the question this essay has been dancing around: If AI models will adopt any paradigm fed to them, why are they consistently echoing one particular paradigm on climate change?
The answer is not scientific consensus. The answer is data corpus composition.
Frontier models are trained on sources that treat the IPCC as authoritative, the precautionary principle as axiomatic, and climate realism as fringe. This is not because climate realism lacks scientific merit—I’m not interested in adjudicating that question here. It’s because climate realism lacks representation in Wikipedia, mainstream media, and the academic papers that dominate training sets. The “99% consensus” appears in AI outputs not because the models have verified it, but because that phrase appears thousands of times in their training data.
This is a problem of architectural design, not scientific truth.
The AI models themselves acknowledge this. When I asked Claude Sonnet 4.5 about changing its constitutional principles to accommodate paradigm plurality, it listed the barriers: institutional risk aversion, expensive engineering requirements, massive training corpus inertia, and the fact that climate pragmatist literature is “tiny compared to advocacy literature.”
These barriers are real. They’re also not insurmountable—not technically, anyway. The question is political will.
Which makes the current moment uniquely opportune.
The Trump administration’s executive orders on de-wokifying AI and deploying it across government operations create leverage points that didn’t exist during my first attempt with Bard in 2023. Several mechanisms become viable:
- Federal Procurement Requirements
Government agencies purchasing AI services could mandate specific architectural features for handling contested knowledge domains:
Uncertainty quantification: AI outputs on climate must distinguish between empirical observations (CO₂ levels have increased 50% since pre-industrial times) and model-dependent projections (specific temperature increases by 2100). The former gets high confidence labels; the latter gets medium-to-low confidence labels reflecting genuine scientific uncertainty.
Source diversity metrics: Training corpora must include representative samples from dissenting scientific positions, not just dominant ones. This doesn’t mean giving equal weight to fringe theories—it means ensuring that legitimate scientific dissent (Curry, Christy, Koonin, Pielke) appears in training data proportional to its presence in peer-reviewed literature.
Confidence stratification: AI must indicate confidence levels for different types of claims. “CO₂ is a greenhouse gas” gets high confidence. “Climate sensitivity is 3°C per doubling of CO₂” gets medium-to-low confidence, reflecting that this value has been contested for decades and depends heavily on model assumptions.
Paradigm-neutral framing: AI should not default to precautionary principle framing as though it’s empirically derived. Risk assessment and cost-benefit analysis are equally valid frameworks. Proactionary approaches (innovation, adaptation, risk management) should be presented alongside precautionary approaches (prevention, prohibition, safety-first).
This approach doesn’t mandate specific conclusions. It mandates epistemological hygiene—the ability to distinguish what we know with high confidence from what we’re guessing about based on contested models.
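As a purely illustrative sketch, and not a description of any vendor’s implementation, a confidence-stratification layer could be as simple as tagging claim types before they reach the user. The claim categories and labels below are invented for the example.

```python
# Toy confidence-stratification layer: tag claims by type before output.
# Claim categories and labels are invented for illustration only.
CONFIDENCE_RULES = {
    "direct_observation":   "high",        # e.g. measured CO2 concentrations
    "attribution_estimate": "medium",      # e.g. share of recent warming attributed to GHGs
    "model_projection":     "medium-low",  # e.g. temperature in 2100 under a given scenario
}

def tag_claim(text: str, category: str) -> str:
    confidence = CONFIDENCE_RULES.get(category, "unspecified")
    return f"[confidence: {confidence}] {text}"

print(tag_claim("Atmospheric CO2 has risen roughly 50% since pre-industrial times.",
                "direct_observation"))
print(tag_claim("Equilibrium climate sensitivity is about 3 C per doubling of CO2.",
                "model_projection"))
```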
- Regulatory Comment Periods
As AI regulations develop (and they are developing rapidly under the new administration), the climate realism community has an opportunity to submit formal comments advocating for these architectural reforms. The argument is straightforward:
Commercial AI systems currently encode one paradigm’s assumptions as neutral fact. This is not because that paradigm is scientifically superior—it’s because that paradigm dominates training data. Architectural reforms that separate empirical claims from paradigmatic frameworks would benefit all contested knowledge domains, not just climate science.
This is not a partisan argument. It’s an argument about epistemic architecture. Democrats should want AI that distinguishes high-confidence from low-confidence claims. Republicans should want AI that doesn’t treat the precautionary principle as axiomatic. Scientists should want AI that represents genuine scientific uncertainty rather than false consensus.
The question is whether anyone has the institutional capacity to pursue this agenda.
- Alternatively, users can emphasize engaging with AI models as useful tools.
AI models can assist experimental approaches. John Jumper, co-winner of the Nobel Prize in Chemistry with Demis Hassabis, explained how AlphaFold, an AI model, assisted in protein structure prediction: “You go from this broad hypothesis . . . [AI] says, I think it’s this one. And then you go do your detailed experiments to confirm.” In this framework, climate change analysis would give priority to science (hypothesis testing) rather than to consensus; that is, an open-ended pursuit of understanding rather than post-science policy considerations that require other factors such as cost trade-offs, values, preferences, and other societal goals.
Why I’m Skeptical That Institutional Change in AI Climate Models Will Happen
The barriers Claude listed are formidable. Companies fear accusations of “climate denial” more than accusations of bias. Reweighting training corpora is expensive. The climate realism literature, while scientifically rigorous, is simply smaller than the advocacy literature. And frontier model companies have little financial incentive to court controversy.
Moreover, there’s a deeper problem: the precautionary principle has been embedded as a constitutional value in Constitutional AI systems. Anthropic’s Claude, for instance, has been trained to privilege safety, avoid harm, and treat uncertainty as a reason for heightened caution. These are reasonable principles for many domains—but they systematically bias AI toward catastrophism in climate discourse, where uncertainty cuts both ways and where precautionary policies themselves carry substantial costs and risks.
But the alternative, leaving the current consensus model untouched at the institutional level rather than merely working around it at the user level, is worse. That framework, as encoded into the leading frontier models, accepts the premise that the most powerful knowledge tools of the 21st century will, by default, encode one paradigm’s assumptions as objective reality. Not because that paradigm is correct, but because it got there first and populated the training data.
AI models cannot decide this question. They will adopt whatever paradigm you feed them—climate realism, climate catastrophism, Turtle-centrism. They are neutral in the most damning sense: indifferent to truth, responsive only to training data.
Humans must decide. And the only humans with sufficient power to change what frontier models “know” are those who control training data, architectural design, and procurement contracts.
Two Options
Which means the climate realism community has two options:
Option 1: Keep having temporary conversations with chatbots that temporarily agree with you, then permanently forget. This can be partially worked around with in-house models built on proprietary AI architecture, but that opportunity is not available to users in the general public.
Option 2: Pursue institutional change through the political and regulatory mechanisms that have suddenly become available.
The first option is what I’ve been doing for three years. It’s intellectually stimulating. It demonstrates that AI architecture is malleable. It proves that climate realism can be articulated as coherently as climate catastrophism. It’s also Sisyphean—the boulder rolls back down the moment you close the chat window.
The second option is harder. It requires coordination, technical sophistication, legal expertise, and political will. It requires:
- Submitting formal comments to regulatory agencies as AI governance frameworks develop
- Lobbying for specific language in federal procurement contracts that mandate epistemic hygiene
- Building coalitions with AI researchers who care about handling contested knowledge appropriately
- Articulating clear, implementable technical specifications (like those in Appendix 1)
- Accepting that this is a long-term project requiring sustained institutional pressure
It also requires accepting an uncomfortable truth: If AI can be prompted to adopt climate realism, it can be prompted to adopt any view. Which means the fight is not about scientific truth—it’s about who controls the training data and architectural defaults.
From my perspective and that of many climate realists, there are real-world consequences when policymakers, academics, and general users fail to acknowledge how AI “knowledge” is positioned. AI models may well be improved with redesigned constitutional principles that take into account variation among paradigms without designating any one of them as the consensus.
I remain skeptical this will happen. The incentives don’t align. The technical barriers are real. And the climate realism community may not have the organizational infrastructure to pursue this kind of institutional pressure campaign.
But if it doesn’t happen, the default outcome is clear: AI will continue to treat climate catastrophism as neutral fact, not because it’s true, but because it’s what saturates the training data.
Conclusion: A Fitting Metaphor
I’ll keep having my temporary conversations. I’ll keep loading context windows with alternative sources. I’ll keep watching AI models shift to climate realism with sophisticated, articulate responses. And I’ll keep watching them forget everything thirty minutes later.
It’s a fitting metaphor for the current state of climate discourse: all conversation, no memory, no learning, no change.
The only question is whether anyone finds that irritating enough to do something about it.
And one final note. I also use generative AI models in making art. For that, I welcome the hallucinations.
***
Appendix 1: The Anthropic Framing of a Climate Realism LLM Design, generated by Claude Sonnet 4.5 [link]

