Climate models: the limits in the sky

The debate about the role of clouds in climate — whether in isolation, or relative to other possible factors — rumbles on, and on, and adequate data is just not available. A rather large hole in the IPCC-claimed ‘settled science’, it seems.
– – –
Climate modellers hope machine learning can overcome persistent problems that still cloud their results, says E&T Magazine.

The discipline of climate modelling has entered its sixth decade. Large-scale analyses of Earth’s behaviour have evolved considerably but there remain significant gaps, some persistent.

One in particular helps illustrate challenges that are now being tackled by, almost inevitably, using artificial intelligence (AI) and machine learning (ML).

“How important the overall cloud effects are is, however, an extremely difficult question to answer. The cloud distribution is a product of the entire climate system, in which many other feedbacks are involved. Trustworthy answers can be obtained only through comprehensive numerical modelling of the general circulations of the atmosphere and oceans together with validation by comparison of the observed with the model-produced cloud types and amounts. Unfortunately, cloud observations in sufficient detail for accurate validation of models are not available at present.”

This passage comes from one of the cornerstone reports on climate change, the 1979 Woods Hole review by the US government’s Ad Hoc Study Group on Carbon Dioxide and Climate.

It was chaired by pioneering meteorologist Professor Jule Charney of MIT, the man who brought computers into weather forecasting with John von Neumann. Most of what Charney’s group said about clouds then still stands today.

Clouds matter because, more than 40 years later, there is still scientific debate over the extent to which they sometimes warm and sometimes cool the planet, and over what impact the balance has on global temperature.

Put simply, clouds that are higher in the atmosphere and thinner trap heat; those that are lower and thicker reflect the sun’s rays.
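To put rough numbers on that trade-off, here is a minimal sketch of the bookkeeping; the figures below are purely illustrative placeholders, not observational values.

```python
# A minimal sketch of the trade-off described above. All numbers are
# illustrative placeholders, not observational values.

def net_cloud_effect(shortwave_reflected, longwave_trapped):
    """Net radiative effect in W/m^2: positive warms the surface, negative cools it."""
    return longwave_trapped - shortwave_reflected

# A high, thin cirrus-like cloud: reflects little sunlight, traps outgoing heat.
high_thin = net_cloud_effect(shortwave_reflected=5.0, longwave_trapped=25.0)

# A low, thick stratocumulus-like cloud: reflects a lot, traps little extra heat.
low_thick = net_cloud_effect(shortwave_reflected=60.0, longwave_trapped=10.0)

print(f"high/thin net effect:  {high_thin:+.1f} W/m^2 (warming)")
print(f"low/thick net effect: {low_thick:+.1f} W/m^2 (cooling)")
```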

Research published in September from a joint team at the University of Liverpool, Imperial College London and the UK’s National Oceanography Centre highlighted that this lack of clarity is one leading reason why macro-scale models differ over what goals should be set for carbon emissions.

Moreover, the issue is more pressing today because, as the Earth’s climate changes, the proportions (high:low, thick:thin) and locations of the clouds are changing too and, by extension, so is their influence.

Clouds have proved hard to model because they defy the nature of the recognised macro-modelling strategies (as indeed do several other factors these models struggle to embrace, such as eddies in ocean currents).

The workhorse – used by the main contributors to the Assessment Reports released by the Intergovernmental Panel on Climate Change – is the general circulation model (GCM), a technique that draws on the equations of fluid dynamics and thermodynamics, supplemented by parameterisation of processes too small or too complex to be computed directly.
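As a rough illustration of what ‘parameterisation’ means in practice, the toy function below diagnoses a cloud fraction for a grid box from its mean relative humidity, loosely in the spirit of relative-humidity threshold schemes; the critical threshold used here is an arbitrary choice for the example, not a value from any real model.

```python
import math

# A toy cloud-fraction parameterisation of the kind GCMs use to stand in for
# processes far smaller than a grid box. Loosely in the spirit of relative-
# humidity threshold schemes; RH_CRIT is an arbitrary value for illustration.
RH_CRIT = 0.8  # grid-box mean relative humidity at which cloud starts to form

def cloud_fraction(rh):
    """Diagnose fractional cloud cover (0..1) from grid-box mean relative humidity."""
    if rh <= RH_CRIT:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - RH_CRIT))

for rh in (0.70, 0.85, 0.95, 1.00):
    print(f"RH = {rh:.2f} -> cloud fraction ~ {cloud_fraction(rh):.2f}")
```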

GCMs and their extensions are extremely complex, running to millions of lines of code. As an example of the 20 or so GCMs considered world-class, HadCM3, the coupled atmosphere-ocean model (AO-GCM) developed by the Hadley Centre at the UK’s Met Office, can run simulations spanning more than a thousand years.

In other respects, though, GCMs run at very low resolutions. They are based on imposing a 3D grid upon the sphere of the Earth. In earlier implementations, the grid’s boxes were several hundred kilometres on a side, with half a dozen or so vertical layers.
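Sketched as a data structure, and with spacings and layer counts picked only to be indicative of those early grids, the whole globe fits in a surprisingly small array:

```python
import numpy as np

# A rough sketch of the grid data structure described above. The spacing and
# layer count are chosen for illustration, not taken from any specific model.
LAT_STEP_DEG = 2.5   # roughly 275 km of latitude per grid box
LON_STEP_DEG = 3.75
N_LEVELS = 6         # "half a dozen or so" vertical layers

n_lat = int(180 / LAT_STEP_DEG)   # 72 rows
n_lon = int(360 / LON_STEP_DEG)   # 96 columns

# One prognostic field (e.g. temperature) stored on the grid: levels x lat x lon.
temperature = np.zeros((N_LEVELS, n_lat, n_lon))

print(f"grid cells per field: {temperature.size:,}")
print("each cell spans hundreds of kilometres; an individual cumulus cloud spans ~1 km")
```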

Some of the limitations were inherent in the complexity ceilings for the models as they evolved, but another major constraint has always been computational capacity. Climate modelling has tested and reached the limits of just about every generation of supercomputer, with every doubling in spatial resolution said to need a tenfold increase in processing.
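Taking that rule of thumb at face value, a quick back-of-envelope calculation shows how steeply the cost climbs on the way from a coarse grid towards cloud-resolving scales; the figures are illustrative only.

```python
# Back-of-envelope illustration of the scaling quoted above: if every doubling
# of horizontal resolution needs roughly ten times the processing, refining a
# ~250 km grid towards the scale of individual clouds escalates very quickly.
COST_PER_DOUBLING = 10  # the rule of thumb quoted in the article

grid_km = 250.0
relative_cost = 1.0
while grid_km > 2.0:  # stop around the km scale, where clouds start to be resolved
    grid_km /= 2.0
    relative_cost *= COST_PER_DOUBLING
    print(f"grid ~{grid_km:6.1f} km  ->  ~{relative_cost:,.0f}x the compute")
```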

As we move into the era of exascale supercomputers, and as quantum processing potentially moves out of the lab, resolutions are rising – the Met Office notes it is leveraging 256 times more crunching power today – but grid spacings still sit in a range between the low hundreds of kilometres and the upper tens.

Clouds, by contrast, are highly localised and comparatively brief events, requiring finer resolution to be addressed in the detail thought necessary. They still fall through the gaps. There is then a further complication.

We think of climate models in terms of the forecasts they produce. Alongside the high-profile targets such as keeping the rise in temperature below 2°C, every week seems to bring a new, more event-specific observation about sea levels, potential species extinction or migrations in population. And the need for this kind of more granular modelling is widely acknowledged.

Within the modelling community itself, another important task – particularly as models are refined and extended – involves looking into the past: does the model reproduce how the world’s climate has already behaved when it is run over earlier periods? Of little interest to the public, this so-called hindcasting is very important for validation.
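The idea is simple enough to sketch: run the model over a past period, then score its output against the observed record. The series in the example below are invented placeholders, not real data.

```python
# A minimal sketch of the hindcasting idea: run the model over a past period
# and score its output against the observed record. Both series below are
# invented placeholders, not real data.

def rmse(modelled, observed):
    """Root-mean-square error between a hindcast and observations."""
    assert len(modelled) == len(observed)
    return (sum((m - o) ** 2 for m, o in zip(modelled, observed)) / len(modelled)) ** 0.5

# Hypothetical temperature anomalies (deg C) for five past decades.
hindcast     = [0.10, 0.05, 0.20, 0.35, 0.55]
observations = [0.12, 0.02, 0.18, 0.40, 0.50]

print(f"hindcast RMSE: {rmse(hindcast, observations):.3f} deg C")
```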

Again – and very likely because of the brevity and local nature of clouds – there is very little historical data available against which to compare cloud modelling.

The combination of a lack of resolution, knowledge and insight would appear to be fertile territory for machine learning, and a number of research projects are looking to leverage such techniques.
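One common pattern in such work is to train a cheap statistical ‘emulator’ on output from expensive high-resolution simulations, so that it can stand in for a subgrid process such as cloud cover inside a coarse model. The sketch below is a toy version of that idea, with a synthetic ‘truth’ function and a simple polynomial fit standing in for the neural networks more usually used.

```python
import numpy as np

# A toy version of the emulator idea: learn a cheap statistical stand-in for a
# subgrid cloud relationship from example data. The "truth" function and the
# training data are synthetic placeholders, not output from any real model.
rng = np.random.default_rng(0)

def subgrid_truth(rh):
    """Pretend high-resolution result: cloud fraction vs grid-box mean humidity."""
    return np.clip((rh - 0.8) / 0.2, 0.0, 1.0) ** 0.5

# Sample training data, as a coarse model might harvest it from an expensive
# high-resolution reference simulation (with a little noise added).
rh_train = rng.uniform(0.6, 1.0, size=500)
cf_train = subgrid_truth(rh_train) + rng.normal(0.0, 0.02, size=500)

# Fit a simple polynomial regression as the emulator; neural networks are the
# more usual choice in the literature, but a polynomial keeps the sketch small.
# The fit is rough, which is part of the point: emulators trade accuracy for speed.
emulator = np.poly1d(np.polyfit(rh_train, cf_train, deg=3))

for rh in (0.75, 0.85, 0.95):
    print(f"RH={rh:.2f}: truth ~{subgrid_truth(rh):.2f}, emulator ~{emulator(rh):.2f}")
```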

Full article here.

via Tallbloke’s Talkshop

https://ift.tt/3iONDmU

October 13, 2020 at 10:18AM

Author: uwe.roland.gross

Don’t worry, there is no significant man-made global warming. The global warming scare is not driven by science but driven by politics. Al Gore and the UN are dead wrong on climate fears. The IPCC process is a perversion of science.