
These uncertainties, associated with natural variability, boundary conditions and model representation, are not independent of each other. Exploring the full “parameter” space of the models therefore requires integrating these different model runs into a grand ensemble of ensembles!
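The combinatorial structure of such a grand ensemble is easy to sketch. The Python fragment below (with invented ensemble members; none of these names come from the text) shows how the three sources of uncertainty multiply together into a single cross product of model runs:

    from itertools import product

    # Hypothetical members of each ensemble dimension (illustrative names only):
    initial_conditions = ["ic-1", "ic-2", "ic-3"]            # natural variability
    forcing_scenarios = ["low-emissions", "high-emissions"]  # boundary conditions
    parameter_sets = ["params-a", "params-b"]                # model representation

    # The grand ensemble is the cross product of all three dimensions,
    # so its size multiplies: 3 x 2 x 2 = 12 runs even in this toy case.
    grand_ensemble = list(product(initial_conditions, forcing_scenarios, parameter_sets))
    print(f"{len(grand_ensemble)} model runs required")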

Obviously, these climate model ensembles are extraordinarily computationally intensive. A global climate model typically needs to solve its equations at every grid point on a simulated 30-minute time-step. While weather forecasting might require simulated time of the order of a few days, climate forecasting requires simulated time of the order of decades, so the model's equations must be solved hundreds of thousands of times per run: a single simulated decade at a 30-minute step already amounts to about 175,000 time-steps. As a result, accurately modelling atmospheric processes alone can require high-performance computer runs lasting several days. Because of these constraints, until recently complex climate models were run only on supercomputers. With the need for grand ensembles of model runs to explore climate uncertainty more fully, distributed computing approaches have become increasingly appealing.
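A rough back-of-the-envelope calculation, sketched in Python below, illustrates the scale involved; the grid size is a hypothetical figure chosen only for illustration:

    # Back-of-the-envelope cost of a decades-long run on a 30-minute time-step.
    minutes_per_year = 365 * 24 * 60
    steps_per_year = minutes_per_year // 30          # 17,520 steps per simulated year
    simulated_years = 30                             # "decades" of simulated time
    total_steps = steps_per_year * simulated_years   # 525,600 time-steps

    grid_points = 100_000                            # hypothetical global grid size
    updates = total_steps * grid_points              # equation solves in one run
    print(f"{total_steps:,} time-steps; {updates:,} grid-point updates per run")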

A public example of a grand ensemble has been the work of climateprediction.net, which used the distributed computing resources of the general public to run ensembles of a version of the UK Meteorological Office Unified Climate Model, examining the implications of doubling levels of carbon dioxide in the atmosphere. Members of the public donate spare computing capacity on their personal computers to run one or more of the simulations. Public interest has been staggering: over 100,000 people from 150 countries have taken part to date, and more than 70,000 simulations of the climate model have been completed. In contrast, when climateprediction.net started, the largest model ensemble reported in the literature comprised 53 runs.

Outputs from these model ensembles provide a snapshot of the range of uncertainty associated with future climate, whether internal model uncertainty or external uncertainty associated with future global greenhouse gas emissions. Although these outputs have been termed probabilistic predictions, they are not objective probabilities. They are subjective probabilities, based on the quality and availability of information at the present time, and they cannot capture all uncertainties about future climates: different climate models produce different results from the same forcing scenarios. For example, one model might suggest that rainfall increases across the Amazon while another suggests that it decreases. Different models are better at representing different elements of the climate system, so when many model outputs converge we can have more confidence that we are seeing a robust result across the many possible representations of the climate system.
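One simple, commonly used robustness measure is the fraction of ensemble members that agree on the sign of a projected change. The Python sketch below applies this idea to the Amazon rainfall example; the model values are invented purely for illustration:

    import statistics

    # Hypothetical projected changes in Amazon rainfall (mm/year) from ten
    # climate models driven by the same forcing scenario (values invented).
    rainfall_change = [-120.0, -85.0, -40.0, -15.0, 5.0,
                       -60.0, -95.0, 10.0, -30.0, -70.0]

    drier = sum(1 for delta in rainfall_change if delta < 0)
    agreement = drier / len(rainfall_change)

    print(f"median change: {statistics.median(rainfall_change):+.1f} mm/year")
    print(f"{agreement:.0%} of models agree the region becomes drier")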

In this way, “probabilistic” predictions do not reduce uncertainty about future climates, but they do make the range of possible futures more transparent, and should, in theory, make decision making more transparent too. One recent example of such probabilistic predictions is the publication of the UK Climate Impacts Programme (UKCIP) climate scenarios, which provide probabilistic projections for seven overlapping 30-year time slices up to 2099, for 25 km × 25 km land grid squares across the UK.
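The arithmetic of those time slices can be made concrete. The sketch below assumes, as the seven-slice count implies but the text does not state, a ten-year offset between consecutive 30-year slices ending at 2099:

    # Seven overlapping 30-year time slices ending at 2099, assuming a
    # ten-year offset between consecutive slices (not stated in the text).
    slices = [(start, start + 29) for start in range(2010, 2080, 10)]
    assert len(slices) == 7 and slices[-1] == (2070, 2099)
    for first, last in slices:
        print(f"{first}-{last}")   # 2010-2039, 2020-2049, ..., 2070-2099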

Summary: the importance of collaborations and future challenges for climate research

Twenty years ago, it was not uncommon for a doctoral student to be expected to produce a numerical model of some element of the climate system, for example an ice sheet or an ecosystem model, track down the necessary data to test the model, and use the model to examine a science question. Typically, the model details (parameter values; computer code) were not published, and only sparse elements of the model outputs were made available through published papers. To all intents and purposes, such model results could not be replicated, since the model was likely to be discarded or developed over time, with poor notification of model versions and parameter values.

In time, modellers recognised the importance of benchmarking their models against others, to examine the strengths and weaknesses of each particular representation of the climate system. Funding was made available to bring modellers together and to ensure effective data and model management. More recently, community-wide models of different elements of the global climate system have been developed. In the UK, for example, the GLIMMER ice sheet model is available as a resource to the academic community; it captures the learning developed over many years by different modelling groups across the UK. A new doctoral student exploring science questions about ice sheets is likely to use it as a starting point, perhaps aiming to improve the physical processes within the model or to use it to answer new science questions. These community model developments, enabled by distributed access to computer models and more effective model and data management, have changed the way modellers work together: instead of a largely individual approach, collegiate approaches are now the norm.

e-Research methods have had a fundamental impact on the way climate science is undertaken. These methods have changed how individuals and modelling groups work together, and they have changed the very science questions that can be posed. However, the very success of high-performance and distributed computing in producing colourful ensemble model outputs has also disguised critical questions about what models can usefully offer and how their outputs are used by decision makers and politicians. To those outside the modelling community, “probabilistic predictions” might well be assumed to be objective probabilities of future events, rather than subjective assessments based on incomplete information, and that perception will affect the decisions taken about managing future climate impacts. Yet climate models are not truth machines; they are inherently partial. In practice, there is an asymmetry between explanation and prediction of complex systems: satisfactory explanation of the future is possible even when absolute prediction is impossible.

Separately, extensive work by behavioural economists has shown that humans are inherently poor at calculating and managing probabilities when making monetary decisions. Yet the outputs of these ensemble model runs span a wide range of probabilistic outcomes, from futures with little change to futures with catastrophic change. Taking the next step, and enabling more effective decision making on the basis of these model outputs, remains challenging.

While computer-enabled methods of research may not be able to address these problems of human decision making, they have enhanced climate science through the development of models that expand our knowledge of the range of possible future climates that could occur under different assumptions and forcing scenarios.

Source: OpenStax, Research in a connected world. OpenStax CNX. Nov 22, 2009. Download for free at http://cnx.org/content/col10677/1.12