Journal metrics

  • IF: 4.252
  • IF 5-year: 4.890
  • CiteScore: 4.49
  • SNIP: 1.539
  • SJR: 2.404
  • IPP: 4.28
  • h5-index: 40
  • Scimago H index: 51
Volume 10, issue 6
Geosci. Model Dev., 10, 2379-2395, 2017
https://doi.org/10.5194/gmd-10-2379-2017
© Author(s) 2017. This work is distributed under
the Creative Commons Attribution 3.0 License.

Methods for assessment of models | 28 Jun 2017

Skill and independence weighting for multi-model assessments

Benjamin M. Sanderson1, Michael Wehner2, and Reto Knutti3,1
  • 1National Center for Atmospheric Research, Boulder, CO, USA
  • 2Lawrence Berkeley National Laboratory, Berkeley, CA, USA
  • 3Institute for Atmospheric and Climate Science, ETH Zurich, Switzerland

Abstract. We present a weighting strategy for use with the CMIP5 multi-model archive in the fourth National Climate Assessment, which considers both the skill of models' climatological performance over North America and the inter-dependency of models arising from common parameterizations or tuning practices. The method exploits information relating to the climatological mean state of a number of projection-relevant variables, as well as metrics representing long-term statistics of weather extremes. Once computed, the weights can be used to compute weighted means and significance information from an ensemble containing multiple initial-condition members from potentially co-dependent models of varying skill. Two parameters in the algorithm determine the degree to which model climatological skill and model uniqueness are rewarded; these parameters are explored, and final values are defended for the assessment. The influence of model weighting on projected temperature and precipitation changes is found to be moderate, partly due to a compensating effect between model skill and uniqueness. However, more aggressive skill weighting, and weighting by targeted metrics, is found to have a more significant effect on the inferred ensemble confidence in future patterns of change for a given projection.
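For concreteness, the sketch below shows one way weights of this kind can be constructed, using a Gaussian form for both the skill and uniqueness terms of the sort used in the authors' related work on multi-model weighting. The function name, the toy distances, and the parameter labels D_q and D_u are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def skill_independence_weights(dist_to_obs, dist_between_models, D_q, D_u):
    """Return normalized weights combining model skill and uniqueness.

    dist_to_obs         : (n,) distances of each model's climatology to observations
    dist_between_models : (n, n) symmetric pairwise inter-model distances
    D_q : skill radius; smaller values reward skill more aggressively
    D_u : similarity radius; distances below it mark near-duplicate models
    """
    dist_to_obs = np.asarray(dist_to_obs, dtype=float)
    dist_between_models = np.asarray(dist_between_models, dtype=float)

    # Skill term: models whose climatology lies close to observations
    # receive weight near 1; poorly performing models decay toward 0.
    w_skill = np.exp(-(dist_to_obs / D_q) ** 2)

    # Similarity term: for each model, softly count how many other
    # models sit within roughly D_u of it in the metric space.
    sim = np.exp(-(dist_between_models / D_u) ** 2)
    np.fill_diagonal(sim, 0.0)  # exclude self-similarity

    # Uniqueness term: a model with k near-replicates is down-weighted by
    # about 1/(1 + k), so a duplicated model shares a single model's weight.
    w_unique = 1.0 / (1.0 + sim.sum(axis=1))

    w = w_skill * w_unique
    return w / w.sum()

# Toy example (invented numbers): models 0 and 1 are near-duplicates,
# model 2 is independent but slightly less skillful.
d_obs = np.array([0.5, 0.5, 0.7])
d_pair = np.array([[0.0, 0.1, 1.0],
                   [0.1, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])
print(skill_independence_weights(d_obs, d_pair, D_q=1.0, D_u=0.5))
# -> roughly [0.29, 0.29, 0.43]: the duplicated pair shares its weight,
#    while the independent model keeps close to a full share.
```

With weights in hand, a weighted ensemble mean follows as np.average(projections, weights=w, axis=0); the two radii play the role of the abstract's two tunable parameters controlling how strongly skill and uniqueness are rewarded.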

Download & links
Publications Copernicus
Download
Short summary
How should climate model simulations be combined to produce an overall assessment that reflects both their performance and their interdependencies? This paper presents a strategy for weighting climate model output such that models that are replicated, or that perform poorly in a chosen set of metrics, are appropriately weighted. We perform sensitivity tests to show how the method's results depend on the chosen variables and parameter values.