Articles | Volume 10, issue 1
https://doi.org/10.5194/gmd-10-19-2017
Methods for assessment of models | 02 Jan 2017

CPMIP: measurements of real computational performance of Earth system models in CMIP6

Venkatramani Balaji, Eric Maisonnave, Niki Zadeh, Bryan N. Lawrence, Joachim Biercamp, Uwe Fladrich, Giovanni Aloisio, Rusty Benson, Arnaud Caubel, Jeffrey Durachta, Marie-Alice Foujols, Grenville Lister, Silvia Mocavero, Seth Underwood, and Garrett Wright

Abstract. A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O- and/or memory-bound. Such weak-scaling, I/O-, and memory-bound multi-physics codes present particular challenges to computational performance.

Traditional metrics of computational efficiency, such as performance counters and scaling curves, do not tell us enough about the real sustained performance of climate models on different machines. Nor do they provide a satisfactory basis for comparative information across models.

We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure the performance actually attained by Earth system models on different machines, and to identify the most fruitful areas of research and development for performance engineering.
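As a minimal sketch of what measures of this kind look like in practice, the snippet below computes two representative throughput and cost quantities, simulated years per day (SYPD) and core-hours per simulated year (CHSY), from bookkeeping data any modeling center already records. The RunRecord structure and function names are illustrative assumptions for this sketch, not tooling from the paper.

```python
# Sketch of two CPMIP-style measures, computable from plain run bookkeeping
# (no hardware counters or specialized software needed). The data layout and
# names below are illustrative assumptions, not part of the paper's tooling.

from dataclasses import dataclass

@dataclass
class RunRecord:
    simulated_years: float   # length of the simulated period (model years)
    wallclock_hours: float   # elapsed wall-clock time for the run (hours)
    cores: int               # number of cores allocated to the run

def sypd(run: RunRecord) -> float:
    """Simulated years per wall-clock day: a platform-independent throughput measure."""
    return run.simulated_years / (run.wallclock_hours / 24.0)

def chsy(run: RunRecord) -> float:
    """Core-hours per simulated year: a hardware-agnostic cost measure."""
    return run.cores * run.wallclock_hours / run.simulated_years

# Example: a 10-year segment run on 1000 cores in 20 wall-clock hours.
run = RunRecord(simulated_years=10.0, wallclock_hours=20.0, cores=1000)
print(f"SYPD = {sypd(run):.1f}, CHSY = {chsy(run):.0f} core-hours/yr")
```

Because both quantities depend only on elapsed time, simulated time, and allocated resources, they can be collected identically across machines, models, and parallel programming models, which is what makes cross-model comparison possible.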

We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).

Short summary
Climate models are among the most computationally expensive scientific applications in the world. We present a set of measures of computational performance, independent of the underlying hardware and the model formulation, that can be used to compare models. They are easy to collect and reflect performance actually achieved in practice. We are preparing a systematic effort to collect these metrics for the world's climate models during CMIP6, the sixth phase of the Coupled Model Intercomparison Project.