Geosci. Model Dev., 5, 611-618, 2012
www.geosci-model-dev.net/5/611/2012/
doi:10.5194/gmd-5-611-2012
© Author(s) 2012. This work is distributed
under the Creative Commons Attribution 3.0 License.
The ACCENT-protocol: a framework for benchmarking and model evaluation
V. Grewe1, N. Moussiopoulos2, P. Builtjes3,4, C. Borrego5, I. S. A. Isaksen6, and A. Volz-Thomas7
1Deutsches Zentrum für Luft- und Raumfahrt, Institut für Physik der Atmosphäre, Oberpfaffenhofen, Germany
2Department of Mechanical Engineering of the Aristotle University Thessaloniki, Thessaloniki, Greece
3TNO Environment and Geosciences, Utrecht, The Netherlands
4Institut für Meteorologie, Freie Universität Berlin, Germany
5Department of Environment and Planning, University of Aveiro, Portugal
6Center for International Climate and Environmental Research (CICERO), Oslo, Norway
7Institut für Energie- und Klimaforschung: Troposphäre, Forschungszentrum Jülich, Germany

Abstract. We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU Network of Excellence ACCENT, together with results from related activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism that describes how to perform a model evaluation. It comprises eight steps, which are illustrated with examples from global model applications. The first and most important step concerns the purpose of the model application, i.e. the underlying scientific or political question being addressed. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose: model evaluation tests whether a model is fit for its purpose. The subsequent steps are deduced from the purpose and cover model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, and benchmarking and grading. We define "benchmarking" as the process of comparing model output against benchmark data, i.e. either observational data or high-fidelity model data. Special focus is given to uncertainties, e.g. in observational data, which, if not considered carefully, can lead to wrong conclusions in the model evaluation.
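For illustration only, the sketch below shows one way the quality-indicator and benchmarking steps of such a protocol could be realised numerically for co-located model and observational values. The metric choices, variable names, and the treatment of observational uncertainty are assumptions made for this example and are not the indicators prescribed by the paper.

```python
import numpy as np

def quality_indicators(model, obs, obs_uncertainty):
    """Illustrative quality indicators for benchmarking model output
    against observational (benchmark) data.

    model, obs: 1-D arrays of co-located values; obs_uncertainty:
    assumed 1-sigma observational uncertainty. All names are
    hypothetical and chosen for this sketch only."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)                  # mean bias
    rmse = np.sqrt(np.mean((model - obs) ** 2))  # root-mean-square error
    corr = np.corrcoef(model, obs)[0, 1]         # pattern correlation
    # Fraction of points where the model lies within the stated
    # observational uncertainty; an overly large uncertainty inflates
    # this score, which is why uncertainties must be treated carefully.
    within_obs = np.mean(np.abs(model - obs) <= obs_uncertainty)
    return {"bias": bias, "rmse": rmse, "corr": corr,
            "fraction_within_obs_uncertainty": within_obs}

# Toy benchmark comparison with made-up numbers
obs = np.array([30.0, 35.0, 40.0, 45.0])
model = np.array([32.0, 33.0, 41.0, 47.0])
print(quality_indicators(model, obs, obs_uncertainty=2.5))
```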

Citation: Grewe, V., Moussiopoulos, N., Builtjes, P., Borrego, C., Isaksen, I. S. A., and Volz-Thomas, A.: The ACCENT-protocol: a framework for benchmarking and model evaluation, Geosci. Model Dev., 5, 611-618, doi:10.5194/gmd-5-611-2012, 2012.
 