The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the climate change events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 000 years. Here, we present the design of a coordinated Core experiment over the period 21–9 thousand years before present (ka) with time-varying orbital forcing, greenhouse gases, ice sheets and other geographical changes. A choice of two ice sheet reconstructions is given, and we make recommendations for prescribing ice meltwater (or not) in the Core experiment. Additional focussed simulations will also be coordinated on an ad hoc basis by the working group, for example to investigate more thoroughly the effect of ice meltwater on climate system evolution, and to examine the uncertainty in other forcings. Some of these focussed simulations will target shorter durations around specific events in order to understand them in more detail and allow for the more computationally expensive models to take part.
The last deglaciation is a period of major climate change, when Earth
transitioned from its last full glacial state to the current interglacial
climate. The Last Glacial Maximum (LGM) marked the culmination of the last glacial cycle when
vast ice sheets covered large regions of the Northern Hemisphere, stretching
over North America and Eurasia (e.g.
Boulton et al., 2001; Dyke et al., 2002; Peltier et al., 2015; Svendsen et
al., 2004; Tarasov et al., 2012), and the Antarctic Ice Sheet expanded to
the edge of the continental shelf (Argus
et al., 2014; Briggs et al., 2014; Lambeck et al., 2014 and references
therein). Changes in the ice sheets resulted in a total sea-level rise of roughly 120–130 m.
Historically, the EPILOG group defined the LGM as having occurred 23–19 ka (21 ka centre point), when climate was generally cool and ice sheets were more or less at their largest, based on ice-core and sea-level records (Mix et al., 2001). It represents the time of maximum terrestrial ice volume. More recently, the last sea-level lowstand has been found to have occurred either around 26 ka (Peltier and Fairbanks, 2006) or 21 ka (Lambeck et al., 2014) with relatively stable (low) sea level between those dates. Nearly all ice sheets were at or close to their maximum extent between 26 and 19 ka (Clark et al., 2009).
During the LGM, global annual mean surface temperatures are estimated to
have been around 4.0 °C cooler than in the preindustrial era.
The last deglaciation: forcings and events.
Northern Hemisphere ice sheet elevation at 21, 18, 15, 12 and 9 ka
for the ICE-6G_C and GLAC-1D reconstructions.
The last deglaciation is not only an interesting case study for understanding multi-millennial scale processes of deglaciation, but also provides the opportunity to study shorter and more dramatic climate changes. Superimposed over the gradual warming trend (EPICA Community Members, 2004; Jouzel et al., 2007; Petit et al., 1999; Stenni et al., 2011) are several abrupt climate transitions lasting from a few years to a few centuries (examples of which are given below) and it remains a challenge to reconstruct or understand the chain of events surrounding these instances of rapid cooling and warming.
Heinrich Event 1 (approx. 16.8 ka; Hemming, 2004) occurred
during the relatively cool Northern Hemisphere Heinrich Stadial 1.
During the subsequent 14.7–14.2 ka interval, Northern Hemisphere
temperatures are seen to have risen by as much as 14.4 °C (Buizert et al., 2014).
Ice-core records of atmospheric composition and temperature capture these abrupt transitions in detail.
The next event of particular interest is the Younger Dryas cooling, when Northern Hemisphere
temperatures are thought to have dropped by several degrees at 12.8–11.7 ka
and most prominently in high latitudes (Buizert
et al., 2014; Heiri et al., 2007; Lea et al., 2003; Liu et al., 2012;
Simonsen et al., 2011; Steffensen et al., 2008). The event presents a
conceptual paradox; the magnitude of the cooling is difficult to reconcile
with rising atmospheric CO2 concentrations over the same interval.
In this description, we have sought to capture some of the last deglaciation's main climatic events, but there are others that could shape the focus of further study in the working group. For example, early on in the period there is evidence of around 10 m sea-level rise taking place in 500–800 years around 20–19 ka (Clark et al., 2004; Clark and Mix, 2002; De Deckker and Yokoyama, 2009; Yokoyama et al., 2001a, b). Whilst the event itself remains somewhat controversial (Cabioch et al., 2003; Hanebuth et al., 2000, 2009; Peltier and Fairbanks, 2006; Shennan and Milne, 2003), it could be the expression of accelerating deglacial ice melt following the Last Glacial Maximum. More recently, the Barbados record of relative sea-level history indicates that following the Younger Dryas cooling episode, there may have been another meltwater pulse (Fairbanks, 1989; Peltier and Fairbanks, 2006), referred to as Meltwater Pulse 1b. Significant debate surrounds the magnitude and timing of Meltwater Pulse 1b (Bard et al., 1996; Cabioch et al., 2003; Cutler et al., 2003; Edwards et al., 1993; Shennan, 1999; Stanford et al., 2011) and even its existence, because similar to the 19 ka event, it is not seen in all sea-level records spanning the interval (e.g. Bard et al., 1996, 2010; Hanebuth et al., 2000). However, evidence of rapid Antarctic retreat around the time of the event could provide a possible cause for this late deglacial rapid sea-level rise (Argus et al., 2014).
Transient modelling of the last deglaciation is valuable for examining dynamic and threshold behaviours (Braconnot et al., 2012) endemic to the Earth's non-stationary climate system, especially ice–ocean–atmosphere interactions. It is the best tool for reaching a comprehensive understanding of complex and interrelating climate processes with specific regard to chains of events.
Such simulations are useful for examining the effect of temporally varying climate forcings across the globe and in different environmental systems: what geographical patterns arise and how are they connected, how do these vary through time from seasonal to millennial timescales, and how long does it take before a change in forcing is manifested in a climate response? The spatial coherency of specific events can be investigated to identify processes for simultaneous change as well as lead/lag mechanisms. For example, Roche et al. (2011) investigated patterns of spatial variability in the deglaciation as caused by long-term changes in orbital parameters, atmospheric greenhouse gas concentrations, and ice sheet extent/topography. The results indicated a simultaneous onset of hemispheric warming in the north and south, showing that obliquity forcing was the main driver of the early deglacial warming. In the same investigation, it was found that sea-ice covered regions were the first parts of the world to exhibit significant rises in temperature, implying that a better knowledge of sea-ice evolution could be key to fully understanding the trigger for widespread deglaciation and warming feedbacks. A further example of the insights available into lead–lag relationships provided by long, transient climate simulations under glacial boundary conditions is provided by the previously referenced Dansgaard–Oeschger oscillation-related analyses of Peltier and Vettoretti (2014) and Vettoretti and Peltier (2015), which appear to mimic the Heinrich Stadial 1 to Bølling transition.
Through comparison to geological time series data, transient simulations enable the “fingerprinting” of specific climate processes to find out what mechanisms (in the model) can cause recorded climate signals. Comparing complex, global-scale models to combined geological records can provide multiple “fingerprints” in different variables from different archives and in different locations to help narrow down plausible scenarios. For example, Menviel et al. (2011) ran a suite of simulations, varying oceanic meltwater fluxes through the last deglaciation in order to identify which freshwater-forcing scenarios reproduce the Atlantic Ocean circulation state implied by sedimentary records of AMOC strength/depth and ventilation age (Gherardi et al., 2005; McManus et al., 2004 with ages shifted as per Alley, 2000; Thornalley et al., 2011) as well as the Northern Hemisphere surface climate (Alley, 2000; Bard, 2002; Bard et al., 2000; Heiri et al., 2007; Lea et al., 2003; Martrat et al., 2004, 2007). It was argued that such climate simulations could be used to improve constraints on the timing, duration, magnitude and location of meltwater inputs to the global ocean.
Liu et al. (2009) used climate
“fingerprinting” to identify possible mechanisms for the abrupt Bølling
Warming event, finding that in their model, a forced cessation of freshwater
inputs to the North Atlantic (representing ice sheet melt) superimposed on a
steady increase in atmospheric CO2 could reproduce the abrupt warming recorded in Greenland.
Thus, climate proxy fingerprinting can be useful for understanding the spatial coherency of climatic changes and their underlying mechanisms. However, correlation between model and geological data does not guarantee that the correct processes have been simulated; there is always the problem of equifinality, whereby the same end state can be reached by multiple means. In a process sense, this may be particularly uncertain when a model does not reproduce the full chain of events that led to a distinguishable climatic signal. For example, mechanisms for many of the major changes in oceanic freshwater inputs proposed by Liu et al. (2009) and Menviel et al. (2011) have not yet been directly simulated (e.g. by dynamic ice sheet models). In both studies, they are imposed as model boundary conditions. Further simulations with different forcing scenarios and from a range of models would help to address such uncertainties.
Transient simulations of the last deglaciation also provide necessary
boundary conditions for modelling a variety of Earth system components that
may not be interactively coupled to the climate model being used. For
example, Gregoire et al. (2015) drove a dynamic
ice sheet model with climate data produced by a similar set of simulations
to Roche et al. (2011). Using a low-resolution
general circulation model, individual climate forcings – including orbit, greenhouse gases, and
meltwater fluxes – were isolated so that their relative contribution to
melting the modelled North American ice sheets could be examined. The work
concluded that the last deglaciation was primarily driven by changes in
Northern Hemisphere insolation, which caused around 60 % of the North American
Ice Sheet melt, with rising greenhouse gases and meltwater feedbacks contributing the remainder.
A further example is given by Liu et al. (2012), who carried out an asynchronous (or
“offline”) coupling between simulated sea surface temperatures and an
isotope-enabled atmospheric model to investigate the Younger Dryas cooling
event (12.8–11.7 ka).
In addition, there are now transient simulations of the last deglaciation
from climate models that have been interactively coupled with dynamic ice
sheet models (Bonelli
et al., 2009; Heinemann et al., 2014) and isotope systems (Caley et al., 2014). Furthermore, a fast Earth system
model of intermediate complexity (EMIC), which includes an interactive ice
sheet model has been used to look at Earth system dynamics (the role of
orbital cycles, aeolian dust, subglacial regolith properties, the carbon
cycle, and atmospheric trace gases) on much longer, glacial–interglacial
timescales > 120 ka and encompassing the last deglaciation (Bauer
and Ganopolski, 2014; Brovkin et al., 2012; Ganopolski et al., 2010;
Ganopolski and Calov, 2011). However, the older, uncoupled climate–ice sheet
model approach discussed above remains useful because it enables a wider
suite of models to be employed than would otherwise be feasible due to
limited computational efficiency (e.g. of state-of-the-art, high-resolution/complexity models) or software engineering capability. It may
also allow for the same Earth system component model (e.g. of ice sheets or
vegetation) to be driven by the output of several different climate models.
With sufficient computational power to make long simulations of the last deglaciation a feasible undertaking, it is timely to coordinate new efforts to ensure that a framework exists to (i) utilise the cutting edge science in climate modelling and palaeoclimate reconstruction, and (ii) robustly intercompare simulations run with different models by different groups and palaeoclimatic data.
For more than 20 years, the Paleoclimate Modeling Intercomparison Project (PMIP) has been internationally coordinating multi-model simulations with complex climate models in order to evaluate model performance and better understand (past) climate changes (Braconnot et al., 2007, 2012; PMIP website, 2007). Currently entering its fourth phase, PMIP is a growing organisation that continues to contribute towards other coordinated efforts to understand present-day climate change, including the Coupled Model Intercomparison Project (CMIP; e.g. Taylor et al., 2011a, b) and the Intergovernmental Panel on Climate Change (IPCC) Assessment Reports (e.g. the Fifth Assessment Report; Flato et al., 2013; Masson-Delmotte et al., 2013). It encompasses a broad range of models, from very fast, lower resolution EMICs, through a range of coupled general circulation models to the latest generation of higher-resolution and complexity Earth system models. Thus, the main challenges for the fourth phase of PMIP include designing experiments that are suitable for all of its participants, addressing sufficiently fundamental questions to be of interest to the EMIC community, defining adequately focussed scope for the feasible participation of the latest generation of Earth system models, and prescribing flexible model set-ups that can be implemented in this range of models, whilst maintaining the ability to robustly compare results. In addition, a continuing challenge for PMIP is to assemble suitable palaeoclimatic data sets for comparison to model results.
One of the most recent working groups to be established in PMIP is the Last Deglaciation Working Group. With the aim of coordinating transient simulations of the last deglaciation, the challenge of including the full range of PMIP models is at the forefront of our experiment design. The experiment will be partitioned into three phases (Fig. 1b and Sect. 4), which will form milestones for managing its long duration (12 000 years) as well as for scheduling any shorter, alternative simulations to the Core.
The aim of this paper is to outline the model set-up for the transient Core experiment for the last deglaciation, specifically for the sub-period of 21–9 ka. Prescribed boundary conditions include orbital parameters, atmospheric trace gases and ice sheets. In association with the ice sheet reconstructions, we also provide bathymetric, orographic and land-sea mask evolution as well as make recommendations for freshwater forcing (or global ocean salinity changes) through the period.
One of the roles of PMIP has been to systematically study the ability of climate models to retrodict different past climates for which there are “observational” data from geological archives (e.g. Braconnot et al., 2000, 2007, 2012; Haywood et al., 2010; Joussaume et al., 1999; Kageyama et al., 2006; Kohfeld and Harrison, 2000; Masson-Delmotte et al., 2006; Otto-Bliesner et al., 2009; Weber et al., 2007). In this vein, many palaeoclimate model intercomparison projects have been designed to facilitate the robust comparison of results from the same “experiment” (i.e. simulation set) across a range of different models, usually taking a prescriptive approach to model set-up to ensure that any differences observed in the results are attributable to differences in model structure and not to differences in chosen “boundary conditions” and climate forcings. However, as Schmidt et al. (2011) pointed out, the choice of one particular configuration from a range of plausible boundary conditions and forcings is often arbitrary and does not account for uncertainties in the data used for developing the forcings/boundary conditions. Moreover, in designing the PMIP last deglaciation experiment, we have attempted to strike a balance between establishing a framework within which to assess model differences and performance, and taking the opportunity to utilise the full range of PMIP climate models (Earth system, general circulation and intermediate complexity) to examine uncertainties in deglacial forcings, trigger mechanisms and dynamic feedbacks. In short, when we do not precisely know the climate forcing for an event, or the temporal evolution of model boundary conditions, it is more efficient to compare the results from models that use different forcings with geological and palaeoclimatic data than to run one scenario with all models and all scenarios with all models. 
The aim is to use the results of the comparison to narrow down the range of uncertainty in the forcings/boundary conditions and reach a better understanding of underlying climate mechanisms.
Summary of recommended model boundary conditions to spin up the last deglaciation Core experiment version 1 (pre-21 ka); see text for details. Participants are not required to follow the recommendation for these boundary conditions, but must document the method used, including information on the simulation's state of spin-up at the point when the Core is started. Data are available from the PMIP4 last deglaciation wiki (PMIP Last Deglaciation Working Group, 2016). Boundary condition group headings are in bold.
Consequently, forcings/boundary conditions that are relatively well established (atmospheric trace gases and orbital parameters) are tightly constrained in the Core experiment design. Others are given with multiple precisely described possibilities to choose from (ice sheet reconstructions) and the remainder (e.g. freshwater/salinity, aerosols and vegetation) are left to the discretion of individual participants. Recommendations will be made for the latter grouping of forcings/boundary conditions, for example, freshwater/global salinity fluxes that are consistent with the provided ice sheet evolutions, and the use of preindustrial aerosol and/or vegetation values when they are not model prognostics, but a flexible approach is advantageous not only scientifically (i.e. for examining the climatic response to uncertain forcings, see above), but also practically (for accommodating the wide range of participating models). Further to this, it will be left to the expert user to decide how often to make manual updates to those boundary conditions that cannot evolve automatically in the model, such as bathymetry, orography and land-sea mask. This is also necessary because of the specific technical and resource requirements associated with setting up and running each participant model.
In addition to the Core, we will coordinate a series of experiments that are
designed to (i) explore uncertainties in the boundary conditions and climate
forcings; (ii) test specific hypotheses for mechanisms of climate change and to
explain individual events; and (iii) focus on shorter time periods (for example,
abrupt events) and thus include computationally expensive models for which a
12 000-year simulation is unfeasible.
These optional simulations will be referred to as focussed experiments, and
participants are encouraged to contribute towards the design and
coordination of these simulations within the working group
(there is a dedicated wiki page to coordinate these;
PMIP Last Deglaciation Working Group, 2016).
The start date for the experiment has been chosen to be in line with PMIP's historical definition of the LGM: 21 ka (e.g. Abe-Ouchi et al., 2015; Braconnot et al., 2000; Kohfeld and Harrison, 2000). However, we are aware that some groups may prefer to begin their simulations from the earlier date of 26 ka (around the last sea-level lowstand; Clark et al., 2009; Lambeck et al., 2014; Peltier and Fairbanks, 2006), and both orbital and atmospheric trace gas parameters will be provided from this earlier date. Although the working group's focus will, at least initially, be 21–9 ka, boundary conditions for the Core simulations will be provided from 21 ka to the preindustrial (26 ka to the preindustrial for orbital insolation and trace gases).
The following is not meant to be an exhaustive review of climate forcing reconstructions through the last deglaciation. Instead, our intention is to consolidate the current knowledge in a practical experiment design for a range of climate models. Within this coordinated context, the aim is to explore the forcings and underlying feedback mechanisms for the rapid climate events that punctuated the gradual warming and deglaciation of the Earth.
The paper is structured so that Sect. 2 outlines the model boundary conditions and climate forcings for the Core experiment. Section 3 presents how we will ensure the feasible participation of a range of climate models with different complexity and computational efficiency, as well as the plan to run additional, targeted, hypothesis- and sensitivity-led simulations. Section 4 discusses the three phases of the long Core experiment.
Summary of required model boundary conditions for the last deglaciation Core experiment 21–9 ka version 1; optional boundary conditions are labelled as such. Data are available from PMIP4 last deglaciation wiki (PMIP Last Deglaciation Working Group, 2016). See text for details. Boundary condition group headings are in bold.
The Core simulations of the last deglaciation (version 1) will focus on the
period from 21 to 9 ka, although there will also be the option to spin up
the simulation with time-evolving orbital and trace gas parameters from 26 ka and all boundary conditions will be available from 21 ka to the
preindustrial. Recommendations for the initialisation state at 21 ka are
summarised in Table 1 and described below (Sect. 2.1). Prescribed boundary
conditions include insolation via the Earth's astronomical parameters (Sect. 2.2), atmospheric trace gases (Sect. 2.3), ice sheets (Sect. 2.4), meltwater
fluxes (Sect. 2.5), and orography/bathymetry (Sect. 2.6), as summarised in
Table 2. Boundary condition data for version 1 of the Core experiment are
provided on the PMIP4 wiki (PMIP Last Deglaciation
Working Group, 2016).
There is a choice of two possibilities for starting the last deglaciation Core simulations. Either the simulation should be initialised from the end of a spun-up, PMIP-compliant LGM (21 ka) simulation, or a simulation with transient orbital and trace gas forcing should be run from an earlier time period (orbital and trace gas parameters will be provided from 26 ka onwards). Whichever method is applied, we require that it is comprehensively documented along with information on the model's state of spin-up at 21 ka (e.g. time series of surface climates, maximum strength of the North Atlantic Meridional Overturning Circulation stream function, net radiation at the top of the atmosphere).
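For documenting the spin-up state, a typical diagnostic such as the maximum of the North Atlantic overturning stream function can be extracted in a few lines. A minimal sketch, in which the array layout, latitude/depth cut-offs and sample values are illustrative assumptions rather than part of the protocol:

```python
# Sketch: pull the maximum AMOC stream function (Sv) from a latitude x
# depth overturning field, restricted to the North Atlantic and excluding
# the near-surface cells. Layout and thresholds are illustrative only.

def amoc_max(psi, lats, depths, min_lat=20.0, min_depth=500.0):
    """psi[i][j]: overturning stream function (Sv) at lats[i], depths[j]."""
    best = float("-inf")
    for i, lat in enumerate(lats):
        if lat < min_lat:          # skip latitudes south of the box
            continue
        for j, z in enumerate(depths):
            if z < min_depth:      # skip the wind-driven surface cells
                continue
            best = max(best, psi[i][j])
    return best

# Toy field: maximum of 18 Sv at 30° N, 1000 m depth.
lats = [10.0, 30.0, 50.0]
depths = [100.0, 1000.0, 3000.0]
psi = [[2.0, 9.0, 4.0],
       [3.0, 18.0, 7.0],
       [2.0, 15.0, 6.0]]
print(amoc_max(psi, lats, depths))   # 18.0
```

Recording this diagnostic as a time series through the spin-up makes it straightforward to show how close the ocean circulation is to equilibrium at 21 ka.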
Atmospheric trace gases through the last deglaciation from Antarctic
ice cores.
For setting up an equilibrium-type spin-up, please make sure to use the following constraints, which may differ from other PMIP 21 ka simulation protocols:
Insolation should be set so that eccentricity is 0.018994, obliquity is
22.949° and perihelion − 180° is 114.42°, with the solar constant fixed at its preindustrial value (Sect. 2.2).
Prescribed atmospheric trace gases should be as follows: CO2 = 190 ppm, CH4 = 375 ppb and N2O = 200 ppb (Sect. 2.3).
Prescribed ice sheets should use either the GLAC-1D or ICE-6G_C (VM5a; hereafter simply “ICE-6G_C”) reconstruction at 21 ka
(see Sect. 2.4). The associated topography and coastlines should be used as
per the chosen ice sheet reconstruction. Beyond maintaining consistency with
the coastlines, it is optional whether or not to implement the associated
bathymetry, and participants should adapt the bathymetry according to their
model's capabilities (for example, depending on whether the spatial
resolution allows for it or makes this a useful adaptation). These data will
be provided with the ice sheet reconstructions. Whichever ice sheet
reconstruction is chosen for the LGM spin-up should be carried through to the
Core transient simulation.
Global ocean salinity should be increased by approximately 1 psu relative to the present day to account for the freshwater locked in the expanded ice sheets.
Any other boundary conditions should be set to be consistent with the Core
transient simulation to follow (Sect. 2.2–2.7).
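The global salinity adjustment can be related to the eustatic sea-level drop implied by the chosen ice sheet reconstruction via conservation of salt mass. A rough sketch, with illustrative (not protocol) values for ocean volume, area and mean salinity:

```python
# Sketch: estimate the global-mean salinity offset needed at 21 ka to
# conserve salt when freshwater is locked into the expanded ice sheets.
# All numbers here are rough, illustrative assumptions.

OCEAN_VOLUME_PD = 1.335e18   # m^3, approximate present-day ocean volume
OCEAN_AREA = 3.61e14         # m^2, approximate present-day ocean area
S_PD = 34.7                  # psu, approximate present-day mean salinity

def salinity_offset(sea_level_drop_m: float) -> float:
    """Salinity increase (psu) for a given eustatic sea-level drop.

    Salt mass is conserved: S_lgm * V_lgm = S_pd * V_pd, with the LGM
    ocean volume reduced by (sea-level drop x ocean area).
    """
    v_removed = sea_level_drop_m * OCEAN_AREA
    s_lgm = S_PD * OCEAN_VOLUME_PD / (OCEAN_VOLUME_PD - v_removed)
    return s_lgm - S_PD

# A ~120 m lowstand implies a global-mean offset of order +1 psu:
print(round(salinity_offset(120.0), 2))
```

The exact offset applied should of course be consistent with the ice volume of the chosen reconstruction rather than a generic lowstand value.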
On the freshwater budget, PMIP advises groups to “carefully check the fresh
water budget in their LGM experiments in order to avoid unnecessary drifts
of the ocean salinity. It can be necessary to route the snow which has
fallen in excess on the ice sheets to the ocean. Given the change in
coastlines, it is also sometimes necessary to relocate the large river
estuaries on the coast” (PMIP LGM Working Group, 2015). Tarasov
and Peltier (2006) provided a glaciological
example of the possible re-routings for North America. As they become
available, routing maps for the Last Glacial Maximum continents will be
provided on the PMIP4 last deglaciation wiki (PMIP Last
Deglaciation Working Group, 2016).
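The advice to route excess snowfall on prescribed ice sheets back to the ocean is essentially a mass-conserving bookkeeping step. A minimal sketch, in which the grid cells, ice mask and drainage map are illustrative assumptions (a real set-up would use routing maps such as those to be provided on the wiki):

```python
# Sketch: close the freshwater budget by routing snow that accumulates on
# prescribed (non-evolving) ice sheets back to the ocean. Cells, masks and
# the outlet map are illustrative assumptions, not protocol values.

def route_excess_snow(snow_flux, ice_mask, outlets):
    """Return ocean freshwater inputs (kg/s) accumulated per outlet.

    snow_flux: dict cell -> net snow accumulation (kg/s) on that cell
    ice_mask:  set of cells covered by the prescribed ice sheet
    outlets:   dict cell -> ocean outlet location the cell drains to
    """
    ocean_input = {}
    for cell, flux in snow_flux.items():
        if cell in ice_mask and flux > 0.0:
            outlet = outlets[cell]
            ocean_input[outlet] = ocean_input.get(outlet, 0.0) + flux
    return ocean_input

# Two ice-covered cells draining to the same hypothetical outlet; the
# third cell is ice-free, so its snowfall is left to the land scheme.
fluxes = {(0, 0): 5.0, (0, 1): 3.0, (1, 1): 2.0}
ice = {(0, 0), (0, 1)}
drains_to = {(0, 0): "gulf_of_st_lawrence", (0, 1): "gulf_of_st_lawrence"}
print(route_excess_snow(fluxes, ice, drains_to))
# {'gulf_of_st_lawrence': 8.0}
```

With glacial coastlines, the same mechanism can be used to relocate river estuaries: the outlet map simply points each catchment at its LGM-consistent coastal cell.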
The integration time required for spinning up the LGM climate state should
be decided on a case-by-case basis by the user (see
comments by Kageyama et al., 2016, on spin-up and duration of experiments).
Groups may choose to initialise their equilibrium-type simulation from other
PMIP LGM runs. However, caution is advised. Some of the boundary conditions
for previous PMIP LGM simulations are different to the set-up outlined here,
specifically in relation to ice sheets and trace gases concentrations, and
therefore need to be adapted to match these requirements. The protocol for
the PMIP4-CMIP6 LGM experiment (being finalised at the time of writing) is currently
compatible with the LGM spin-up described here. Therefore, provided that
either the ICE-6G_C or GLAC-1D ice sheet reconstruction is
used, a PMIP4-CMIP6 LGM simulation should be suitable for initialising the deglaciation Core.
If this is the preferred option to initialise the Core, it is recommended
that the simulation is set up as per Sect. 2.1.1, but with time-evolving
orbital and trace gas parameters instead of fixed ones. Specifically for
orbit, the eccentricity, obliquity and perihelion − 180° should evolve through time from 26 ka (Sect. 2.2).
In this case, all other boundary conditions should remain fixed in line with the LGM equilibrium-type experiment design until 21 ka, when the fully transient Core simulations begin. This transient spin-up can be initialised from a spun-up previous LGM, cold ocean, preindustrial or observed present-day ocean simulation.
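In practice, time-evolving orbital and trace gas parameters are usually read from tabulated values and interpolated to each model year. A minimal sketch, with a placeholder table rather than the distributed forcing data:

```python
# Sketch: linearly interpolate a tabulated transient forcing (e.g. a trace
# gas concentration or an orbital parameter) to an arbitrary model year.
# The sample table is a placeholder, not the distributed PMIP forcing data.
from bisect import bisect_left

def interp_forcing(table, year_ka):
    """table: list of (age_ka, value) pairs sorted by decreasing age."""
    ages = [-a for a, _ in table]          # negate so bisect sees ascending order
    i = bisect_left(ages, -year_ka)
    if i == 0:                             # older than the table: clamp
        return table[0][1]
    if i == len(table):                    # younger than the table: clamp
        return table[-1][1]
    (a0, v0), (a1, v1) = table[i - 1], table[i]
    w = (a0 - year_ka) / (a0 - a1)         # fraction of the way to the younger point
    return v0 + w * (v1 - v0)

co2 = [(26.0, 190.0), (21.0, 190.0), (16.0, 200.0)]  # placeholder ppm values
print(interp_forcing(co2, 18.5))   # halfway between 190 and 200
```

Whether forcings are updated every model year or in coarser steps is a per-model implementation choice; the interpolation itself should be documented either way.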
As per Sect. 2.1, the solar constant should be fixed to the established
preindustrial conditions (e.g. 1361.0 W m−2).
For the deglaciation, chlorofluorocarbons (CFCs) should be fixed at 0, and O3 should be held at preindustrial concentrations.
The atmospheric CO2, CH4 and N2O concentrations to be prescribed through the deglaciation are derived from Antarctic ice-core records and are provided with the boundary condition data (PMIP Last Deglaciation Working Group, 2016).
It is noted that the N2O record exhibits pronounced millennial-scale variability across the deglaciation.
Southern Hemisphere ice sheet elevation at 21, 12 and 9 ka for the ICE-6G_C and GLAC-1D reconstructions.
For the Core experiment, ice sheet extent and topography should be prescribed from one of two possible reconstructions: ICE-6G_C (Figs. 2a and 4a) and GLAC-1D (Figs. 2b and 4b).
The ICE-6G_C model has been designed such that its total mass and local thickness variations provide excellent fits to most of the data that may be invoked to constrain it. In particular the total mass is constrained by the globally averaged (eustatic) rise of sea level that is well approximated by the coral-based record of relative sea-level history from the island of Barbados in the Caribbean Sea (Peltier and Fairbanks, 2006). On the other hand, the local variations of ice thickness are constrained to fit not only the very large number of radiocarbon dated histories of relative sea level change from the regions that were once ice covered, but also by the voluminous records of present-day vertical motion of the crust that are now available from North America, Eurasia and Antarctica, based upon space-based Global Positioning System measurements. This fit is obtained through iterative space domain refinement, as fully described by Argus et al. (2014) and Peltier et al. (2015). Furthermore, the reconstruction includes a history of Antarctic glaciation that correctly includes the expansion of ice cover in the Ross Sea and Weddell Sea embayments and out to the shelf break at the LGM. Stuhne and Peltier (2015) assess the compatibility of ICE-6G_C with current understanding of ice dynamical processes using data assimilation methods. The model is unique internationally insofar as the range of observational constraints that it has been shown to reconcile. Since the ICE-6G_C reconstruction is fully published (Argus et al., 2014; Peltier et al., 2015), the reader is directed to this literature for further detailed information.
The GLAC-1D reconstruction is combined from different sources (Briggs et al., 2014; Tarasov et al., 2012; Tarasov and Peltier, 2002) and whilst it is mostly published, there are some new components; therefore, a short description follows. The Eurasian and North American components are from Bayesian calibrations of a glaciological model (Tarasov et al., 2012; this study), the Antarctic component is from a scored ensemble of 3344 glaciological model runs (Briggs et al., 2014) and the Greenland component is the hand-tuned glaciological model of Tarasov and Peltier (2002) updated to the GICC05 age chronology (Rasmussen et al., 2006). All four of the GLAC-1D ice sheet components employ dynamical ice sheet models that have been constrained with relative sea-level data. Where available, they have also been constrained by geologically inferred deglacial ice margin chronologies, pro-glacial lake levels, ice-core temperature profiles, present-day vertical velocities, past ice thickness, and present-day ice configuration. Details of exactly how these constraints were derived and applied are given in the relevant references above. The four components (North America, Eurasia, Antarctica and Greenland) were combined under glacial isostatic adjustment (GIA) post-processing for a near-gravitationally self-consistent solution (Tarasov and Peltier, 2004), which was tested against complete GIA solutions. The topography in the global combined solution was adjusted in Patagonia and Iceland following ICE-5G (Peltier, 2004), but the changes in these ice caps are not reflected in the ice mask.
Compared to ICE-6G_C, GLAC-1D is derived with fewer degrees of freedom given the internal constraints of glacial physics and assumptions in the climate forcing (which in part depends on climatologies derived from PMIP2 and PMIP3 results). GLAC-1D incorporates additional constraints that are inapplicable to purely geophysically constrained models. These include inferred pro-glacial lake levels for North America as well as proximity to the present-day observed Antarctic Ice Sheet after a transient, multi-glacial cycle simulation with the underlying ice-earth model. Furthermore, GLAC-1D is subject to the critical physical feedbacks/constraints of ice (and bed) thermodynamics and bed surficial geology on ice streaming, which has a major impact on ice sheet topography. However, all these extra constraints and physics come at the cost of a 10 to 22 m shortfall in GLAC-1D relative to far-field relative sea-level proxies (a 10 to 15 m shortfall compared to ICE-6G_C). This is part of the so-called “missing ice” problem (Clark and Tarasov, 2014), with the upper bound also accounting for the local viscous slab effect of Austermann et al. (2013). Both ICE-6G_C and GLAC-1D are subject to as yet unquantified uncertainties, such as the impact of lateral inhomogeneity in the viscous structure of the Earth.
The two reconstructions incorporate similar constraints for North American ice sheet extent (i.e. Dyke, 2004). For Eurasia, ICE-6G_C follows the ice extent provided by Gyllencreutz et al. (2007), whereas GLAC-1D uses data from Hughes et al. (2015). The reconstructions only differ slightly in their ice extent evolution (Figs. 2 and 4), for example the Barents Sea deglaciates earlier in GLAC-1D than in ICE-6G_C (Fig. 2). The main differences between the reconstructions are in the shape and volume of individual ice sheets. In particular, the North American Ice Sheet reaches an elevation of 4000 m in ICE-6G_C, but is only 3500 m high in GLAC-1D. Similarly, the shape and thickness of the Barents Sea Ice Sheet are not the same in the two reconstructions.
The ICE-6G_C data set is provided at both 1
Ice surface elevation (topography) should be implemented as an anomaly from present-day topography and added to the model's present-day topography after regridding onto the model resolution, following the previous LGM experimental protocol (PMIP LGM Working Group, 2010, 2015). Land surface properties will need to be adjusted for changes in ice extent. Where ice retreats, the land surface should be initialised as bare soil if a dynamic vegetation model is used; otherwise, prescribed vegetation should be used (see Sect. 2.7), with appropriate consideration of soil characteristics. Where ice is replaced by ocean, it is advised to follow the procedure for changing coastlines described in Sect. 2.7. Inland lakes can be prescribed based on the ice sheet and topography reconstructions, but this is not compulsory. It is also optional whether to include changes in river routing basins (i.e. catchments) and outlets, which can either be calculated from the provided topography and land-sea mask data (see Sect. 2.6), or can be manually set to follow routing maps provided on the PMIP4 last deglaciation wiki (PMIP Last Deglaciation Working Group, 2016).
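As a concrete sketch of the anomaly method described above, the fragment below builds the model topography at a given time slice. Here `regrid` stands in for whatever regridding scheme a group uses, and all array names are illustrative assumptions rather than part of the protocol.

```python
import numpy as np

def deglacial_topography(recon_topo_t, recon_topo_pd, model_topo_pd, regrid):
    """Model topography at time t from a reconstruction anomaly (a sketch).

    recon_topo_t  -- reconstruction surface elevation at time t (recon grid)
    recon_topo_pd -- reconstruction present-day surface elevation (recon grid)
    model_topo_pd -- the model's own present-day topography (model grid)
    regrid        -- callable mapping a recon-grid field onto the model grid
    """
    # Take the anomaly on the reconstruction grid, regrid it, then add it
    # to the model's present-day topography, as per the PMIP LGM protocol.
    anomaly = regrid(recon_topo_t - recon_topo_pd)
    return model_topo_pd + anomaly
```

Working with anomalies rather than absolute elevations avoids imprinting the difference between the reconstruction's present-day surface and the model's own present-day topography onto the palaeo-surface.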
Groups are free to choose how often to update ice extent and elevation. This could be done at regular intervals (e.g. the sub-1000-year time slices provided) or at specific times during the deglaciation, as was done in the TraCE-21 ka experiment (Liu et al., 2009). Changes in ice extent can have a large impact on climate through ice albedo changes and feedbacks. We thus recommend that when possible, ice sheets are not updated at times of abrupt regional or global climate change, particularly the events that the working group will focus on, as this could artificially introduce stepped shifts in climate. Groups are also advised to consider that ice sheet associated boundary conditions (ice extent and elevation, land-sea mask, bathymetry) may need to be updated more often at times of rapid ice retreat. The timing and way in which land ice changes are implemented must be documented.
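One simple way to honour the recommendation above is to generate candidate update times at a regular interval and drop any that fall inside a window of known abrupt change. The sketch below does this; the interval and the event windows passed in are illustrative choices, not part of the protocol.

```python
def schedule_updates(start_ka, end_ka, step_ka, event_windows):
    """Pick ice-sheet update times at a regular interval, skipping any time
    that falls inside a window of abrupt climate change.

    All times are in ka BP, decreasing towards present.
    event_windows -- list of (older_ka, younger_ka) intervals to avoid,
                     e.g. [(14.7, 14.3)] around the Bolling Warming
                     (an illustrative window, not a protocol value).
    """
    times = []
    t = start_ka
    while t >= end_ka:
        # Keep this candidate only if it lies outside every event window.
        if not any(older >= t >= younger for older, younger in event_windows):
            times.append(t)
        t -= step_ka
    return times
```

Groups updating boundary conditions more often during rapid ice retreat could simply run this with a smaller `step_ka` over those sub-periods.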
Alternative ice sheet reconstructions or simulations can be used to test the sensitivity of climate to this boundary condition. Simulations with coupled ice sheet–climate models are also welcomed. Although these will not form part of the Core, for which ICE-6G_C or GLAC-1D should be used, they will be coordinated as important supplementary focussed simulations.
For technical notes advising on the implementation of the ice sheet reconstructions in palaeoclimate models, see Kageyama et al. (2016).
The Core experiment protocol is flexible on whether or not to include
prescribed ice melt (i.e. freshwater fluxes) delivered from the ice sheets
to the ocean and how to do it. It is recommended to run at least one version
of the Core experiment with ice melt included, since around 110 m of
ice-volume equivalent sea level is thought to have melted 26–9 ka (e.g. Lambeck et al., 2014) and considering the
historical importance attached to the influence of (de)glacial freshwater
fluxes on climate (e.g.
Broecker et al., 1989; Condron and Winsor, 2012; Ganopolski and Rahmstorf,
2001; Liu et al., 2009; Rahmstorf, 1995, 1996; Teller et al., 2002;
Thornalley et al., 2010; Weaver et al., 2003). However, it is also important
to note the ongoing debate over the extent to which catastrophic freshwater
fluxes brought about abrupt deglacial climate change; several alternative or
complementary mechanisms have been proposed (e.g.
Adkins et al., 2005; Álvarez-Solas et al., 2011; Barker et al., 2010,
2015; Broecker, 2003; Hall et al., 2006; Knorr and Lohmann, 2003, 2007;
Roche et al., 2007; Rogerson et al., 2010; Thiagarajan et al., 2014).
Moreover, a thorough investigation of the extent to which non-freshwater-forced climate evolution matches the geological records has merit in its own right: can abrupt deglacial changes be simulated without ice meltwater? To what extent can “observed” patterns be attributed to better constrained forcings, such as atmospheric CO2?
Freshwater forcing scenarios that are consistent with the ice sheet reconstructions, and which hence conserve salinity throughout the deglacial experiment, are provided in two formats: a globally uniform distribution across the ocean surface, and a river-routed distribution delivered at the coast.
Multiple Core simulations exploring more than one of these scenarios are
welcomed.
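To illustrate what conserving salinity implies in practice, the sketch below converts a reconstructed ice-volume time series into the globally integrated freshwater flux that must enter the ocean over each interval. The densities and year length are standard round numbers, and the function is purely illustrative rather than part of the provided scenarios.

```python
# A sketch: globally integrated freshwater flux implied by ice-volume loss,
# assuming ice volumes (m^3) at times given in years before present.

RHO_ICE = 917.0           # kg m^-3, density of glacial ice (round number)
RHO_FW = 1000.0           # kg m^-3, density of freshwater
SECONDS_PER_YEAR = 3.156e7

def meltwater_flux_sv(ice_volume_m3, times_yr_bp):
    """Freshwater flux (Sv) between successive reconstruction time slices.

    A positive value means net ice loss, i.e. freshwater delivered to the
    ocean. Returns one flux per interval (len(times) - 1 values).
    """
    fluxes = []
    for i in range(len(times_yr_bp) - 1):
        dt_s = (times_yr_bp[i] - times_yr_bp[i + 1]) * SECONDS_PER_YEAR
        dv_ice = ice_volume_m3[i] - ice_volume_m3[i + 1]   # ice lost
        dv_fw = dv_ice * RHO_ICE / RHO_FW                  # freshwater equivalent
        fluxes.append(dv_fw / dt_s / 1e6)                  # m^3 s^-1 -> Sv
    return fluxes
```

Distributing this total either uniformly over the ocean surface or along routed river outlets then gives the two provided formats.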
Data for the
However, participants do not have to use the (recommended) versions of
Regardless of which scenario is employed, it is important that meltwater fluxes are prescribed as time-evolving model boundary conditions, rather than, for example, as stepwise adjustments applied at the same time as the ice sheets are updated. Unless they are intentional features of the scenario, there should be no sudden jumps in the freshwater being applied. Furthermore, we invite participants to upload the boundary condition data for other freshwater flux scenarios, along with appropriate documentation, as and when they become available, and to contribute towards the coordination of focussed experiments (see Sect. 3) that will test specific hypotheses associated with model and climate sensitivity to the location, duration and magnitude of freshwater fluxes.
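A minimal way to avoid stepped forcing is to interpolate the scenario's flux tie points to every model time step. The sketch below assumes tie-point times that increase with elapsed model time; all names are illustrative.

```python
import numpy as np

def meltwater_at(t_model_yr, tie_times_yr, tie_fluxes_sv):
    """Meltwater flux (Sv) at an arbitrary model time, linearly interpolated
    between scenario tie points so that the forcing evolves smoothly rather
    than jumping whenever the ice sheets are updated.

    Outside the tie-point range, np.interp holds the endpoint values."""
    return float(np.interp(t_model_yr, tie_times_yr, tie_fluxes_sv))
```

Calling this every model time step yields a forcing with no discontinuities between tie points, whatever interval the ice sheet itself is updated on.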
Participants are recommended to note the advice set out by Kageyama et al. (2016) for implementing these boundary conditions in PMIP4-CMIP6 Last Glacial Maximum and Pliocene equilibrium-type experiments.
Changes in the ice sheets and their glacial eustatic and isostatic influence affected continental topography and ocean bathymetry, which in turn shifted the coordinates of river mouths and the coastal outline throughout the deglaciation. Hence, time-varying topographic, bathymetric and land-sea mask fields that match the chosen ice sheet from Sect. 2.4 (i.e. ICE-6G_C or GLAC-1D) should be used; matching fields are provided within the ice sheet reconstruction data sets.
Topography should be updated at the same time as the model's ice sheet; this is largely implicit in implementing the ice sheet reconstruction, because the major orographic changes through the deglaciation relate directly to ice sheet evolution. That said, owing to the glacial isostatic adjustment components in the ice sheet reconstructions, some of the evolution in continental topography is not directly the lowering/heightening of the ice surface, and it is left to individual groups whether to incorporate this or to apply only the changes in ice sheet orography.
Ocean bathymetry will be provided; where possible, this boundary condition should be varied through time. Where differences in the land-sea mask require coastal regions to be filled in as land, or land to be converted to ocean as sea level rises (see the next paragraph on coastlines), the model must be changed accordingly: adequately representing the changing land-sea mask is important, for example, for including overlying grounded ice.
Following on from this, coastlines will need to be varied according to changes in global sea level (and each model's horizontal grid resolution). It will be left to the discretion of participants to decide how often to update either boundary condition, and when deciding on their frequency it is recommended that groups consider the implications for opening/closing seaways and their effect on ocean circulation and climate. Furthermore, the frequency need not be regular and may instead focus on key “events” in the marine (gateway) realm. However, whenever possible and foreseeable, groups are encouraged to avoid making stepwise changes to model boundary conditions that would interfere with signals of abrupt climate change; particularly those events that the working group aims to focus on (Heinrich Event 1, the Bølling Warming, MWP1a, the Younger Dryas etc.) unless the forcing (e.g. opening of a gateway) is assumed to be linked with the event.
If groups wish, model river networks can be remapped to be consistent with this and updated on the same time step as the ice sheet reconstruction, either manually or by the model. However, it is appreciated that the technical challenges associated with such a methodology would be impractical for many. Therefore, following the recommendation of the PMIP3 LGM Working Group (2010) and Kageyama et al. (2016), “river pathways and basins should be at least adjusted so that fresh water is conserved at the Earth's surface and care should be taken that rivers reach the ocean” at every time step that the bathymetry is adjusted; for example, when sea levels were lower, some river mouths may need to be displaced towards the (new) coastline to make sure they reach the ocean.
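The displacement step can be as simple as snapping each river mouth to the nearest ocean cell on the updated land-sea mask. The sketch below uses grid-index distance for brevity; a real implementation on a latitude-longitude grid would use great-circle distance, and all names are assumptions.

```python
import numpy as np

def displace_mouth(mouth_ij, ocean_mask):
    """Return the index of the ocean cell nearest to a river mouth.

    mouth_ij   -- (i, j) index of the river mouth on the model grid
    ocean_mask -- 2-D boolean array, True where the cell is ocean

    If the mouth already sits on ocean it is returned unchanged; otherwise
    the nearest ocean cell (by grid-index distance) is returned, so the
    river's freshwater still reaches the ocean after a bathymetry update.
    """
    i0, j0 = mouth_ij
    if ocean_mask[i0, j0]:
        return mouth_ij
    ii, jj = np.nonzero(ocean_mask)
    d2 = (ii - i0) ** 2 + (jj - j0) ** 2
    k = int(np.argmin(d2))
    return (int(ii[k]), int(jj[k]))
```

Running this over every river outlet at each bathymetry update is one way to satisfy the requirement that freshwater is conserved at the Earth's surface.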
In this section, recommendations are made for last deglaciation vegetation, land surface and aerosol (dust) parameters in the model.
There are three recommended options for setting up the Core experiment's vegetation and land surface parameters. They can be (i) computed using a dynamical vegetation model (e.g. coupled to the atmospheric component of the model), (ii) prescribed to match the CMIP5 preindustrial set-up (Taylor et al., 2011a, b) with fixed vegetation types and fixed plant physiology (including leaf area index), or (iii) prescribed to match the CMIP5 preindustrial set-up (Taylor et al., 2011a, b) with fixed vegetation types and interactive plant physiology if running with an enabled carbon cycle. If prescribing vegetation and land surface, i.e. using options (ii) or (iii), groups should be aware that coastal land will be emerged compared to preindustrial because of the increased terrestrial ice volume and associated lower eustatic sea level (with the maximum during the early stages of the Core). Therefore, vegetation/land surface will need to be interpolated onto the emerged land from preindustrial grid cells, for example using a nearest-neighbour method.
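The nearest-neighbour fill for emerged cells might look like the following sketch, again using grid-index distance as a simple stand-in for a proper great-circle search; the array names are assumptions.

```python
import numpy as np

def fill_emerged_land(veg, emerged_mask, source_mask):
    """Assign vegetation types to newly emerged land by nearest neighbour.

    veg          -- 2-D integer array of vegetation types (preindustrial)
    emerged_mask -- True where land exists at the palaeo-time but not at PI
    source_mask  -- True where PI land (valid vegetation) exists

    Returns a copy of veg with each emerged cell filled from the nearest
    PI land cell (grid-index distance).
    """
    out = veg.copy()
    src_i, src_j = np.nonzero(source_mask)
    for i, j in zip(*np.nonzero(emerged_mask)):
        d2 = (src_i - i) ** 2 + (src_j - j) ** 2
        k = int(np.argmin(d2))
        out[i, j] = veg[src_i[k], src_j[k]]
    return out
```

Filling from PI land cells only (rather than from any neighbouring cell) avoids copying ocean or ice values onto the emerged shelf.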
For models with prognostic aerosols, the parameters for dust (forcing) can be computed dynamically. Alternatively, it is recommended that Core simulations fix the associated parameters according to the CMIP5 preindustrial simulation (Taylor et al., 2011a, b), with no temporal variation. Examining the influence of different transient aerosol scenarios (for those models that do not include prognostic dust, for example) could constitute a further suite of sensitivity simulations for comparison with the Core.
There is no last deglaciation protocol for setting up other forcings, whether transient or fixed in time. For all simulations, groups are required to fully document their methods, including the experiment design, especially where it differs from or adds components to the set-up described here.
As already discussed, we are faced with the challenge of designing an experiment that is suitable to be run with a wide range of models, from the more computationally efficient class of intermediate complexity models to state-of-the-art Earth system models. One particular difficulty is enabling the most complex and highest resolution climate models to participate in this 12 000-year-long experiment, when for some, even the integration to reach the LGM spin-up state demands substantial computational resources. There is no easy solution, and our approach will be to augment the Core simulations with shorter focussed simulations that target specific questions, mechanisms and time periods. Whilst the most computationally expensive models (e.g. the latest generation of Earth system models) may not be able to participate in the Core, they will be included in the shorter subset of focussed simulations. Similarly, alternative full-deglaciation simulations can be coordinated for the less computationally expensive models in the working group (e.g. low-resolution general circulation models, and Earth system models of intermediate complexity).
One line of investigation relating to meltwater inputs from ice sheets and icebergs is to carry out a suite of sensitivity simulations examining different injection sites. These simulations would help to address some of the uncertainty in freshwater flux scenarios. For example, geochemical evidence suggests that smaller and more localised discharges of freshwater than have traditionally been considered in climate models may have an important influence on ocean circulation (e.g. Hall et al., 2006), implying that precise freshwater fluxes are needed in the models to examine their effect. Certainly, others have shown that the location of injection is a controlling factor on the impact of freshwater delivery to the ocean, not just laterally (e.g. Condron and Winsor, 2012; Smith and Gregory, 2009), but also in terms of depth (e.g. Roche et al., 2007). A set of coordinated simulations exploring a range of uncertainty in the freshwater forcing (location, depth, duration, magnitude, and physical characteristics such as temperature and density) would be well suited for the focussed experiments, thus building on the Core simulations, which may themselves indicate interesting avenues for investigation; partly the purpose of a flexible meltwater approach.
However, freshwater is not the only issue and other focussed experiments could include the influence of greenhouse gas records, differences in ice sheet reconstructions (e.g. the PMIP3 merged ice sheet from Abe-Ouchi et al., 2015; ICE-6G_C; GLAC-1D) or simulations with (coupled) ice-sheet models, the relative importance of different forcings (e.g. insolation vs. trace gases vs. ice sheet evolution), sensitivity to dust-forcing scenarios, the influence of changes in tidal energy dissipation (Schmittner et al., 2015), event-specific hypothesis testing and shorter-term variability within the climate system.
Based on ongoing discussions, it is likely that the first sets of focussed simulations will be (i) sensitivity and hypothesis-driven simulations that compare results from uniformly distributed meltwater fluxes to results from river-routed meltwater fluxes, to examine the impact of the regional specificity of freshwater forcing upon climate system evolution; (ii) sensitivity simulations that are free from ice meltwater fluxes, to provide information on what climate evolution was caused by processes other than freshwater fluxes to the ocean; (iii) a hypothesis-driven investigation of the possible mechanisms for preconditioning the glacial ocean for the relatively cool Heinrich Stadial 1 and ensuing catastrophic iceberg discharge (Barker et al., 2015); and (iv) sensitivity experiments examining the role of trace gas forcing resolution on climate evolution, for example, smoothing the record provided by Bereiter et al. (2015).
We have described the plans for focussed simulations to highlight the depth of the
working group's aims and to properly contextualise the Core simulations, but
the purpose of this manuscript is to outline the model set-up for the Core
experiment. The design for subsequent focussed simulations will be described at a
later date on the PMIP4 last deglaciation wiki (PMIP Last
Deglaciation Working Group, 2016) and we welcome contributions to the
discussion of what further simulations to coordinate there.
The experiment will be split into three phases that are designed to run seamlessly into each other (Fig. 1a). Phase 1 begins at the LGM (21 ka) and finishes at the onset of the abrupt Bølling Warming, where phase 2 picks up, encompassing the Bølling Warming. Phase 3 begins at the start of the Younger Dryas cooling and is currently planned to continue through to the end of the Core experiment at 9 ka.
Perhaps most importantly, this affords near-future milestones for managing the ultimate completion of the long full deglacial simulation across all participant groups. It will provide a timetabled framework for beginning and continuing the longer simulations; for scheduling shorter, event- or challenge-specific transient simulations by more computationally expensive models (see discussion in Sect. 3); and for the analysis and publication of results as the milestones are reached. Another motivation is to ensure that the experiment design for later periods of the last deglaciation is updated according to knowledge gained from simulations of the preceding time period; for example, changes in ocean and climate states, which have previously been shown to have a strong influence on climate trajectories (e.g. Kageyama et al., 2010; Timm and Timmermann, 2007). This is particularly important for setting up shorter, event-specific focussed simulations, but it is not planned to be explicitly used to influence the Core. Splitting the period into phases also provides the opportunity to update model boundary conditions and climate forcing data with cutting edge palaeoclimate reconstructions, as they emerge during the lifespan of the multi-model experiment. However, care will be taken to ensure that these are physically consistent between phases, and these updates will not compromise the Core simulations described in this manuscript. This is so as not to disadvantage more computationally efficient models that may have already completed simulating the full 21–9 ka (or beyond) period. Instead, the information will be incorporated into focussed versions of the last deglaciation simulations; possibly spun-off sub-periods that do not have to start again at the LGM.
Each phase will encompass at least one distinguishable climate event; Heinrich Stadial 1 and Heinrich Event 1 in phase 1 following on from the LGM; MWP1a, the Bølling Warming and the Antarctic Cold Reversal in phase 2; and the Younger Dryas cooling in phase 3 (Fig. 1b). As outlined in Sect. 3, simulations of these shorter events can be coordinated in the focussed simulations. This is to engage the higher complexity/resolution models, which are unable to run longer simulations, but can use the wider framework of the working group to provide valuable knowledge on rapid climate changes known to have taken place in the last 21 ka.
The last deglaciation presents a host of exciting opportunities to study the Earth system and in particular, to try to understand a range of abrupt climate changes that occurred over just a few years to centuries within the context of more gradual trends. Numerical climate models provide useful tools to investigate the mechanisms that underpin the events of this well-studied time period, especially now that technological and scientific advances make it possible to run multi-millennium simulations with some of the most complex models. Several recent modelling studies have begun this task, but many questions and untested hypotheses remain. Therefore, under the auspices of the Paleoclimate Modelling Intercomparison Project (PMIP), we have set up an initiative to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 000 years.
The first step has been to design a Core experiment suitable for a range of PMIP models: from relatively fast and coarse resolution Earth system models of intermediate complexity, to new generations of the more complex and higher-resolution general circulation and Earth system models. The set-up for this Core experiment is based on an approach that tries to combine a traditional Model Intercomparison Project method of strictly prescribing boundary conditions across all models, and the philosophy of utilising the breadth of participants to address outstanding uncertainty in the climate forcings, model structure and palaeoclimate reconstructions. Accordingly, we have made recommendations for the initialisation conditions for the simulation and have stated our minimum requirements for the transient experiment design, as summarised in Tables 1 and 2, respectively.
However, there are some uncertainties that the Core is not designed to deal with directly or exhaustively; two examples discussed in this manuscript are the effect of trace gas record resolution and the influence of ice melt on the oceans and climate. We know that the Core simulations will not tackle all of our questions, and they are likely to give rise to others. Therefore, additional focussed simulations will also be coordinated on an ad hoc basis by the working group. Many of these will build on and be centred around the Core, often taking shorter snapshots in time, thus including the most computationally expensive models in the experiment, or presenting 12 000-year alternatives to the Core for faster models to contribute. Not all simulations will be suitable for all models, but the aim is that, taken as a whole, the experiment can utilise the wide range of PMIP model strengths and hence minimise individual weaknesses.
Essentially, the Core experiment has been designed to be inclusive, taking into account the best compromise between uncertainties in the geological data and model limitations. The hypothesis-driven focussed experiments will go further than the Core to target the questions that remain. It is hoped that this exciting initiative will improve our individual efforts, providing new opportunities to drive the science forwards towards understanding this fascinating time period, specific mechanisms of rapid climate warming, cooling and sea-level change and Earth's climate system more broadly.
All boundary condition data required for running the last deglaciation Core
experiment version 1 (summarised by Tables 1 and 2) can be downloaded
from the PMIP4 last deglaciation wiki (PMIP Last
Deglaciation Working Group, 2016).
Ruza F. Ivanovic and Lauren J. Gregoire lead the PMIP Last Deglaciation Working Group, for which Andrea Burke, Masa Kageyama, Didier M. Roche and Paul J. Valdes act as the advisory group. Ruza F. Ivanovic, Lauren J. Gregoire, Masa Kageyama, Didier M. Roche, Paul J. Valdes and Andrea Burke collaboratively designed the working group's aims, structure, Core experiment and additional experiments in consultation with the wider community. Rosemarie Drummond, W. Richard Peltier and Lev Tarasov provided the ice sheet reconstructions, plus associated boundary conditions. Ruza F. Ivanovic and Lauren J. Gregoire collated these and all other boundary condition data for the simulations. Ruza F. Ivanovic and Lauren J. Gregoire wrote the manuscript and produced the figures with contributions from all authors.
Ruza F. Ivanovic is funded by a NERC Independent Research Fellowship (no. NE/K008536/1). Data processing for boundary condition preparation was carried out using the computational facilities of the Palaeo@Leeds modelling group, University of Leeds, UK. All authors would like to thank everyone who has taken the time to discuss the Working Group's aims and experiments with us. We are especially grateful to Jean-Yves Peterschmitt (Laboratoire des Sciences du Climat et de l'Environnement, France) for managing and archiving the boundary conditions, as well as setting up and maintaining the PMIP wiki pages; to Emilie Capron (British Antarctic Survey, UK) for help with the ice-core data and to Bette Otto-Bliesner (National Center for Atmospheric Research, USA) for useful comments on an earlier version of this manuscript. Specific thanks also go to Anders Carlson, Eric Wolff, Andreas Schmittner, Shawn Marshall and an anonymous reviewer for valuable comments on the manuscript, as well as to Jeremy Fyke for editing.
Edited by: J. Fyke
Reviewed by: S. J. Marshall and one anonymous referee