Modelling marine ecosystems requires insight and judgement when it comes to
deciding upon appropriate model structure, equations and parameterisation.
Many processes are relatively poorly understood and tough decisions must be
made as to how to mathematically simplify the real world. Here, we present
an efficient plankton modelling testbed, EMPOWER-1.0 (Efficient Model of Planktonic ecOsystems WrittEn in R), coded in the freely
available language R. The testbed uses simple two-layer “slab” physics
whereby a seasonally varying mixed layer which contains the planktonic
marine ecosystem is positioned above a deep layer that contains only
nutrient. As such, EMPOWER-1.0 provides a readily available and easy-to-use
tool for evaluating model structure, formulations and parameterisation. The
code is transparent and modular such that modifications and changes to model
formulation are easily implemented allowing users to investigate and
familiarise themselves with the inner workings of their models. It can be
used either for preliminary model testing to set the stage for further work,
e.g. coupling the ecosystem model to 1-D or 3-D physics, or for undertaking
front-line research in its own right. EMPOWER-1.0 also serves as an ideal
teaching tool. In order to demonstrate the utility of EMPOWER-1.0, we
implemented a simple nutrient–phytoplankton–zooplankton–detritus (NPZD)
ecosystem model and carried out both a parameter tuning exercise and
structural sensitivity analysis. Parameter tuning was demonstrated for four
contrasting ocean sites, focusing on station BIOTRANS in the North Atlantic
(47

Ecosystem models are ubiquitous in marine science today; they are used to study a range of compelling topics including ocean biogeochemistry and its response to changing climate, end-to-end links from physics to fish and associated trophic cascades, the impact of pollution on the formation of harmful algal blooms, etc. (e.g. Steele, 2012; Gilbert et al., 2014; Holt et al., 2014; Kwiatkowski et al., 2014). Models have become progressively more elaborate in recent years, a consequence of both superior computing power and an expanding knowledge base from field studies and laboratory experiments. All manner of models have appeared in the published literature, varying in structure, equations and parameterisation. Anderson et al. (2014), for example, commented on the "enormous" diversity seen in chosen formulations for dissolved organic matter (DOM) in the current generation of marine ecosystem models and asked whether reliable simulations can be expected given this diversity. This question applies not just to modelling DOM, but also to most processes and components considered in modern marine ecosystem modelling (Fulton et al., 2003a; Anderson et al., 2010, 2013).

A certain amount of variability among models is to be expected because of differing objectives among modelling studies. A distinction can, for example, be made between models designed primarily for improving understanding of system dynamics, as opposed to those for out-and-out prediction (Anderson, 2010). Ultimately, however, much of the variability seen in model structure and equations is an outcome of personal choice on the part of the practitioner. Indeed, the art of modelling is in making decisions regarding model structure, parameters, design of simulations, types of output analysis, etc. The underlying root of this diversity and seeming subjectivity is that, despite a wealth of available data, many processes in marine ecosystems are not easy to characterise mathematically. Modellers therefore need to consider how this uncertainty affects their results and use it to inform how best to construct and parameterise their models for chosen applications. Sensitivity analysis, model validation and model intercomparison studies are the obvious means to address model uncertainty. There is, however, an additional problem, namely that ocean biology is inextricably linked to physics and both incur modelling error. An appropriate physical framework must be selected that adequately represents mixing, advection and the seasonal changes in the depth of the upper mixed layer. Understandably, 1- or 3-D physical frameworks are the usual choice, given the realism thus provided. But this increased dimensionality (or spatial resolution) comes at a price. Such frameworks require expertise and time to set up, sufficient computational resources for running and storage of output and, last but not least, effort to distil the frequently copious output into coherent results. These constraints serve to limit the extent to which modellers can and do carry out extensive diagnosis and testing of their models, including sensitivity analysis and validation.

In the early days of marine ecosystem modelling, it was necessary to resort to simple empirical approaches to deal with physics, given the limited power of computers at the time. The so-called zero-dimensional "slab" models that came to the fore were a cornerstone of the discipline in the mid 20th century. Slab models have a simple physical structure consisting of two vertical layers. The depth of the upper (mixed) layer, which can vary seasonally, is determined empirically from observations of vertical profiles of temperature or density. The upper layer, which contains the pelagic marine ecosystem, is positioned above an essentially implicit (in that it is unchanging) bottom layer that contains a (typically fixed) nutrient concentration. Such slab models can be run quickly and straightforwardly, enabling both a multitude of runs and ease of analysing results.

Despite the simplicity of the two-layer slab physics, these models are sufficiently well formulated to permit realistic and insightful simulations of marine ecosystems (e.g. Evans and Parslow, 1985; Fasham et al., 1990). Indeed, looking back at the history of marine ecosystem modelling, it is remarkable how simple models allowed so much progress to be made, notably by pioneers such as Gordon Riley, John Steele and Mike Fasham (Gentleman, 2002; Anderson and Gentleman, 2012). We admire the skill of these individuals in encapsulating the complexity of the real world with mathematical equations. They necessarily had to think deeply about their models because they had to build them from scratch as, in most instances, established relationships for processes such as photosynthesis, grazing and mortality could not be borrowed from elsewhere. A key aspect of their success, we submit, is that they experimented extensively with their models, trying out different formulations and parameterisations in order to see the effect on model predictions (e.g. Anderson and Gentleman, 2012). It is this preparation that served them so well, allowing them to set up meaningful simulations from which they could so effectively draw conclusions and make progress in their field of study.

The need for preparation in terms of exploring sensitivity to ecosystem model formulations and parameterisation is no less in the modern era, indeed it is arguably greater given our deeper knowledge of the marine biota and a correspondingly larger multitude of mathematical formulations to choose from. We propose that modellers can benefit from extensively “playing with” and testing their models and that the use of simple slab physics is an obvious choice in this regard, at least for ocean locations where the bulk of the biological activity occurs in the surface mixed layer. Experimentation of this kind may then be used to set the stage for the “serious” model runs that may follow, e.g. in 1-D or 3-D, although it is also entirely possible to undertake successful studies using only slab physics models. In addition, because they are straightforward to understand and do not require powerful computing resources to run, models that incorporate simple slab physics are ideal for use in teaching future generations of marine scientists about ecological structure and function.

Here, we present a slab (i.e. zero-dimensional), and hence computationally efficient, plankton ecosystem testbed coded in the freely available R environment: EMPOWER-1.0 (Efficient Model of Planktonic ecOsystems WrittEn in R). Our aim is to provide EMPOWER-1.0 for general use and to demonstrate how it can readily and easily be used both to study ecosystem dynamics at a range of ocean sites and to assess the pros and cons of different model choices for best representing and analysing the ecosystems in question. EMPOWER's code is structured in a modular way to ensure maximum ease of adjusting parameters and formulations and, indeed, the inclusion of entirely new marine ecosystem compartments, processes and associated outputs as required. Here, we demonstrate the use of EMPOWER-1.0 in combination with a simple illustrative nutrient–phytoplankton–zooplankton–detritus (NPZD) model. It should be noted, however, that EMPOWER-1.0 can be used to test and examine the performance of simple and complex models alike. Our choice of a simple ecosystem model is motivated by the fact that simple models are conceptually straightforward as well as being easy to set up and analyse. This study is structured as follows. First, a brief history of slab models in marine science is presented to illustrate the origin and utility of these models as research tools in marine science. The NPZD model is then described and implemented within EMPOWER. The utility of EMPOWER as a testbed for undertaking model parameterisation is next demonstrated by a parameter adjustment exercise, specifically the fitting of the NPZD model to observed seasonal cycles of chlorophyll and nutrients at each of four stations in diverse regions of the world ocean. The sensitivity analysis is then extended to model equations with a comparison of the performance of different equations for calculating, first, daily depth-integrated photosynthesis and, second, phytoplankton and zooplankton mortality.
Finally, the utility of slab models is discussed in the context of contemporary marine ecosystem modelling research.

In this section, we provide a history of slab modelling which serves as an
introduction to how these models are constructed, as well as to demonstrate
that, despite their simplicity, the simulations these models generate can be
meaningful and realistic. Models provide the theoretical basis for our
understanding of the dynamics of marine ecosystems. One of the first
applications of theory in biological oceanography occurred around 80 years
ago when scientists were interested in the mechanisms driving the spring
phytoplankton bloom that is characteristic of many marine systems. The basic
theory as we know it today, whereby bloom initiation occurs as the water
column stratifies, was proposed in the early 1930s by Haaken H. Gran, a
Norwegian botanist (Gran, 1932; Gran and Braarud, 1935). Mathematical testing
of this proposal was essential in order to establish quantitative merit,
given the dynamic interplay between bottom-up controls on phytoplankton via
light and nutrients versus top-down control by grazing. Following on from
initial work by Fleming (1939), it was Gordon Riley, a biological
oceanographer based at the Bingham Oceanographic Laboratory in the
northeastern USA, who constructed a model of seasonal phytoplankton dynamics
for Georges Bank, a raised plateau off the coast of New England, northeast
USA (Riley, 1946), a remarkable achievement at the time (Anderson and
Gentleman, 2012). The model had a single differential equation for the rate
of change of phytoplankton biomass, expressed with terms for photosynthesis,
respiration and grazing. Using a photosynthesis–irradiance (

Forcing used by Riley (1946) in his model of Georges Bank:

Although Riley's model considered depth-averaged photosynthesis over the
mixed layer, it could not be described as a slab model per se because it did
not account for fluxes of material across the pycnocline. It was John
Steele, a mathematical marine biologist from Scotland, who took the next
step by experimenting with a dynamic ecosystem embedded within multi-layer
models (e.g. Steele, 1956), arguably a coarser version of what is done
today in the more complex 1-D models. Steele's experience with this model
led him to realise that much of the net effect of vertical gradients could
be captured with just a few layers, and he further simplified the physics to
a two-layer sea in his study of the plankton in the North Sea (Steele,
1958). The resulting NPZ ecosystem was confined to the upper layer with a
lower layer that contained only nutrient, in fixed concentration. Inputs of
nutrients to the surface layer occurred due to mixing, balanced by export
via phytoplankton sinking and mixing (Fig. 2). Steele had thus constructed
the first slab model of its kind although with this, as well as his later
models including those in his seminal work

Two-layer slab physics framework (adapted from Steele, 1974).

It was Geoff Evans and John Parslow who would make the next major advance in
the development of slab models with their “model of annual plankton
cycles” (Evans and Parslow, 1985). Following Steele, they opted for an NPZ
ecosystem embedded within the same two-layer framework with the marine
ecosystem restricted to the upper layer and a fixed nutrient concentration
in the lower. Evans and Parslow provided a more complete representation of
the interaction of the marine ecosystem with its physical environment by
allowing the depth of the mixed layer to vary seasonally with direct impacts
on the model state variables. As the mixed layer deepens, nutrients are
entrained from below while phytoplankton density is diluted because their
surface layer biomass is spread over a greater depth. Conversely, as the
mixed layer shallows, the concentrations of nutrients and phytoplankton are
unchanged although losses occur on a per unit area (m

Evans and Parslow (1985) also took seasonal and daily irradiance forcing
into consideration, in combination with depth integration of a non-linear

In common with their predecessors, Evans and Parslow were interested in the factors controlling the initiation of the spring phytoplankton bloom, focussing on the role of vertical mixing. Bloom initiation, they concluded, required a low rate of primary production over winter, which is to be expected in the North Atlantic due to deep mixed layers at that time, and is also linked to coupling between phytoplankton and grazers. The simplicity of the slab model was key to their conclusions as articulated in their own words: "It is worth emphasising the advantages of analysing simple models, and simplifying models until they can be analysed". The controls on phytoplankton dynamics in high-nutrient low-chlorophyll (HNLC) areas such as the subarctic Pacific have remained a topical issue ever since, in large part because limitation by iron is also indicated (Martin et al., 1994; Coale et al., 1996), but the role of grazing and the link between phytoplankton–zooplankton coupling and mixed layer depth remains firmly established as a key mechanism in these systems (Frost, 1987; Fasham, 1995; Chai et al., 2000; Smith Jr. and Lancelot, 2004).

Perhaps the most famous slab modelling paper, published 5 years after
Evans and Parslow (1985), is the study of nitrogen cycling in the Sargasso
Sea by Fasham et al. (1990; henceforth FDM90). It is by far the most highly
cited marine ecosystem model (Arhonditsis et al., 2006, noted that it had
accumulated 405 ISI cites by November 2005; this number has increased to 758
as of May 2015). In terms of physical structure, Fasham's model used the
same basic slab construct as in Evans and Parslow (1985), with seasonally
varying mixed layer depth and irradiance forcing. The novel aspects of FDM90
were instead related to additional complexity of the ecosystem, expanding
from a simple NPZ to explicitly separate new and regenerated production by
including state variables for nitrate and ammonium (critical for calculating
the

Characteristics of published slab models.

MLD: clim. (climatological from data); hypothet. (hypothetical);

The description of the marine ecosystem provided by FDM90 has largely served as the foundation for marine ecosystem modelling ever since. With the advent of increasing computer power, as well as increasing interest in the spatio-temporal behaviour of plankton systems, most modelling studies are now undertaken in 1-D or 3-D physical frameworks. Nevertheless, many slab modelling studies have been published since FDM90 which follow the basic design described above, or slight modifications thereof (Table 1). A range of ecosystem models of varying complexity have been incorporated within slab physics and applied to contrasting sites throughout the world ocean. The basic physical construction is similar in most cases consisting of a classic slab structure with a seasonal cycle of mixed layer depth specified from data and seasonal irradiance from standard trigonometric equations. Remarkably, Evans and Parslow's (1985) equations for calculating daily depth-integrated photosynthesis have prevailed and been used in most studies. A more sophisticated calculation method was developed by Morel (1988, 1991) and a simplified form of this (Anderson, 1993) is examined in Sect. 4.3. The models in Table 1 have been used for a diverse range of applications including studies of parameter optimisation (Spitz et al., 1998; Fennel et al., 2001; Schartau et al., 2001; Hemmings et al., 2004), parameter sensitivity analysis (Mitra, 2009; Mitra et al., 2007, 2014), phytoplankton bloom dynamics (Findlay et al., 2006), nutrient cycling via organic and inorganic pathways (Llebot et al., 2010), primary production in HNLC systems (Kidston et al., 2013) and primary production and export flux in contrasting regions (Fasham, 1995; Onitsuka and Yanagi, 2005).

We demonstrate the use of EMPOWER-1.0 using a simple NPZD ecosystem model and forcing for four time series stations in the ocean. The code is readily adapted to incorporate other ecosystem models, including the relatively complex models of the modern era, and/or forcing for other ocean sites.

The model uses slab physics as per Evans and Parslow (1985), namely a
seasonally varying surface mixed layer that contains the ecosystem
positioned above a deep homogeneous layer containing unchanging nutrient and
no plankton (Fig. 2). We have also included temperature dependencies for the
physiological rates in the ecosystem model (see below). Our model was set up
for four stations, two in the North Atlantic (stations BIOTRANS,
47

The bottom layer in most slab models is assumed to have a fixed
concentration of nutrient,

The regression coefficients were fitted from WOA
data (Garcia et al., 2010) for subthermocline NO

Model forcing for stations India (60

Structure of the NPZD model.

The NPZD ecosystem model we have implemented in EMPOWER is presented in
Fig. 4 with dissolved inorganic nitrogen (

Note that

With the assumption of balanced growth,

The usual way NPZD-type models characterise nutrient limitation of
phytoplankton growth rate by nutrients,
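In NPZD-type models this limitation factor commonly takes a Michaelis–Menten (Monod) form. A minimal Python sketch for illustration (the EMPOWER code itself is in R; symbol names here are ours):

```python
def nutrient_limitation(N, k_N):
    """Michaelis-Menten (Monod) limitation factor (dimensionless, 0-1):
    growth is half-maximal when the nutrient concentration N equals the
    half-saturation constant k_N."""
    return N / (k_N + N)

# e.g. with half-saturation k_N = 0.5 mmol N m^-3:
half = nutrient_limitation(0.5, 0.5)   # growth at half its maximum rate
```

The factor approaches 1 as N becomes large relative to k_N, so nutrient limitation only bites when concentrations fall towards the half-saturation constant.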

Photosynthesis–irradiance curves with parameter settings

The calculation of

Triangular versus sinusoidal patterns of diel irradiance
illustrated for a 12 h day and noon irradiance of 200 W m

Integration with depth (inner integral of Eq. 4) can be calculated
analytically for either of the two

Analytic depth integrals require a Beer's law attenuation of light within
the water column characterised by a single attenuation coefficient,

The assumption of a single mixed layer value of

This approach to light attenuation is provided as the default option for use
in EMPOWER. The values of the polynomial coefficients (

Coefficients for use in the Anderson (1993) calculation of light attenuation (Eq. 10).

The diurnal variation in light at the ocean surface over the course of a day
may be reasonably approximated by a sinusoidal function that is symmetric
about noon irradiance (Platt et al., 1990). Further simplification is possible by
use of a linear model, i.e. use of a triangular model centred at noon (e.g. Steele, 1962;
Evans and Parslow, 1985) because this simplifies the time integration. It
should be noted here that despite Evans and Parslow's (1985) claim that
differences between the triangular and sinusoidal approximations are minimal
if the area under the curve is the same, they did not make the “equivalent
area” adjustment to their formula, nor is their statement generically true
(i.e. it depends on the peak light intensity, the attenuation of light with
depth and the non-linear
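The difference between the two shapes is easily checked numerically. The following Python sketch (illustrative only; independent of the EMPOWER R code) integrates each over a 12 h day with a noon irradiance of 200 W m^-2:

```python
import math

def daily_integral(shape, I_noon, daylength, n=10000):
    """Numerically integrate surface irradiance over the daylight period
    (midpoint rule), for a triangular or sinusoidal diel pattern."""
    dt = daylength / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt                                   # hours since dawn
        if shape == "triangular":
            I = I_noon * (1.0 - abs(2.0 * t / daylength - 1.0))
        else:                                                # sinusoidal
            I = I_noon * math.sin(math.pi * t / daylength)
        total += I * dt
    return total

tri = daily_integral("triangular", 200.0, 12.0)   # I_noon * DL / 2 = 1200
sin_ = daily_integral("sinusoidal", 200.0, 12.0)  # I_noon * DL * 2/pi ~ 1528
ratio = sin_ / tri                                # 4/pi ~ 1.273
```

At the same noon irradiance, the sinusoid delivers a daily light dose 4/pi (about 27 %) greater than the triangle, which is why an "equivalent area" adjustment (rescaling the triangle's noon value) is needed before the two approximations can be compared fairly.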

In EMPOWER, the default method of handling the diurnal variation in
irradiance at the ocean surface is to do a numeric integration. Undertaking
a numerical time integral involves computational cost and two empirical
methods (Evans and Parslow, 1985; Anderson, 1993) have been published that
provide analytic calculations (i.e. pre-determined formulae) for daily
depth-integrated photosynthesis in a water column. Both are provided as
options for use in EMPOWER and have the advantage of faster run time. The
first of the two EMPOWER options is the depth-averaged light-dependent
calculation of growth of Evans and Parslow (1985) which assumes a triangular
pattern of daily irradiance, Beer's law for light attenuation (Eq. 9) and a
Smith function as the
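As an illustration of what the numerical (default) option computes, the following Python sketch (not the EMPOWER R code; symbol names are ours) approximates daily depth-integrated photosynthesis by brute force, combining a Smith photosynthesis–irradiance curve, Beer's law attenuation and a sinusoidal diel light pattern:

```python
import math

def daily_depth_int_photosynthesis(Vp, alpha, I_noon, daylength, k, M,
                                   nt=100, nz=100):
    """Midpoint-rule approximation of the mixed-layer-averaged, daily
    depth-integrated photosynthesis: a double integral over time (hours
    since dawn) and depth (0 to mixed layer depth M)."""
    dt = daylength / nt
    dz = M / nz
    total = 0.0
    for i in range(nt):
        t = (i + 0.5) * dt
        I0 = I_noon * math.sin(math.pi * t / daylength)      # surface PAR
        for j in range(nz):
            z = (j + 0.5) * dz
            I = I0 * math.exp(-k * z)                        # Beer's law
            # Smith photosynthesis-irradiance function:
            P = Vp * alpha * I / math.sqrt(Vp**2 + (alpha * I)**2)
            total += P * dz * dt
    return total / M    # average over the mixed layer

# e.g. daily_depth_int_photosynthesis(1.0, 0.03, 200.0, 12.0, 0.1, 50.0)
```

The analytic alternatives of Evans and Parslow (1985) and Anderson (1993) replace this double loop with closed-form expressions, trading a little generality for speed.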

Grazing by zooplankton is assumed to be on both phytoplankton and detritus.
This choice was made in part to illustrate how to implement ingestion on
multiple prey types, as such functions are used for more complex models
(e.g. when there are multiple phytoplankton size classes or functional types
and/or omnivory by zooplankton). Many multiple-grazing formulations,
however, comprise questionable assumptions about zooplankton feeding
behaviour (Gentleman et al., 2003). For example, the multiple-prey grazing
formula used in FDM90 is classified as an active switching response
(Gentleman et al., 2003) which can display anomalous behaviour such as
suboptimal feeding (i.e. ingestion rates decreasing when prey availability
increases). We have therefore opted to improve upon Fasham's choice by using
a different multiple-prey response, but one that is nevertheless commonplace
in the literature. Specifically, we have adopted a passive switching
response where density dependence of the prey preferences arises due to
inherent differences in the single-prey responses (see Gentleman et al.,
2003). This Sigmoidal (or Holling Type 3) response is characterised as
(Fig. 7)

Contours of the zooplankton specific ingestion rates (

The Sigmoidal response assumes an interference effect of alternative prey, in that as detritus increases, ingestion of phytoplankton decreases (and likewise for phytoplankton and ingestion of detritus). This interference effect is not so great as to remove the benefit of generalism, i.e. total ingestion always increases for an increase in total prey density. The non-equal preferences reduce the interference effect for phytoplankton, i.e. the contours in the first panel of Fig. 7 are more vertical than for equal preferences. The corollary is that the gain in ingestion from consuming both phytoplankton and detritus, versus phytoplankton alone, is reduced compared to when prey have equal preferences.
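A common form of this sigmoidal (Holling Type 3) multi-prey response, following the classification of Gentleman et al. (2003), can be sketched in Python as follows; the parameterisation is illustrative and not necessarily the exact EMPOWER formula:

```python
def grazing(P, D, g_max, k_z, phi_P, phi_D):
    """Sigmoidal (Holling Type 3) multi-prey ingestion: preference-weighted
    squared prey densities compete in a shared denominator, giving passive
    switching between phytoplankton (P) and detritus (D)."""
    food = phi_P * P**2 + phi_D * D**2
    denom = k_z**2 + food
    I_P = g_max * phi_P * P**2 / denom   # specific ingestion of phytoplankton
    I_D = g_max * phi_D * D**2 / denom   # specific ingestion of detritus
    return I_P, I_D
```

With this form, adding detritus reduces ingestion of phytoplankton (the interference effect), yet total ingestion still increases with total prey density, consistent with the behaviour described above.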

Regarding phytoplankton non-grazing mortality, FDM90 made the usual choice of a linear term although non-linear approaches are also possible, e.g. the use of a Michaelis–Menten saturating function by Fasham (1993). We opted for the more flexible approach of using both linear and non-linear terms (Yool et al., 2011, 2013a). The former may account for metabolic losses or natural mortality. The use of an additional non-linear term represents density-dependent loss processes, notably mortality due to infection by viruses. The abundance of viruses is highly dependent on the density of potential host cells (e.g. Weinbauer, 2004) and, as reviewed by Danovaro et al. (2011), there is "compelling" evidence that, at least in some instances, viruses are responsible for the demise of phytoplankton blooms based on observations of high proportions (10–50 %) of infected cells (e.g. Bratbak et al., 1993, 1996). A quadratic form was used for the non-linear mortality term (e.g. Kawamiya et al., 1995; Oschlies and Schartau, 2005) and all phytoplankton non-grazing mortality losses were allocated to detritus.
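The combined loss term is then simply the sum of the two contributions; a one-line Python sketch (coefficient names are ours, for illustration):

```python
def phyto_mortality(P, m_P, m_P2):
    """Non-grazing phytoplankton mortality: a linear term (metabolic losses,
    natural mortality) plus a quadratic, density-dependent term (e.g. viral
    lysis). In the model all of this loss is routed to detritus."""
    return m_P * P + m_P2 * P**2
```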

The equation for rate of change of zooplankton density is

A variety of formulations exist in ecosystem models to describe zooplankton mortality and the appropriate functional form has been and continues to be a hotly debated topic (Steele and Henderson, 1992; Edwards and Yool, 2000; Mitra et al., 2014). Most common are the linear and quadratic terms, although some authors have chosen to employ other non-linear functions (e.g. Fasham, 1993 used a Michaelis–Menten relationship). As with phytoplankton, we used both linear and quadratic terms (Yool et al., 2011). The linear term represents density-independent natural mortality, whereas the quadratic term is considered to be due to predation by carnivores (whose population tracks that of the zooplankton). The different sources of mortality result in different fates for these terms. Loss from natural mortality is allocated to modelled detritus, which implies a broader size class of modelled particulates (and therefore higher sinking rates) than when just phytoplankton death contributes to this variable.

The fate of the predation-related mortality is less obvious because the metabolic activity of higher predators results in ingested material being converted into dissolved nutrients as well as larger particulates (e.g. fecal pellets and death). Moreover, the higher predators may export material from the local region through migration. FDM90, along with a suite of follow-on models, therefore chose to allocate predation-related zooplankton mortality between nutrients (ammonium and DON, attributed to excretion by higher predators) and material that is immediately exported from the system (e.g. attributed to fast-sinking detritus generated by higher predators). Similarly, Steele and Henderson (1992) also allocated zooplankton mortality to export. Nevertheless, many past and recently published marine ecosystem modelling studies allocate all of zooplankton mortality to detritus (Oschlies and Schartau, 2005; Salihoglu et al., 2008; Hinckley et al., 2009; Ye et al., 2012). We argue, however, that this is not necessarily realistic given that detrital particles related to higher predators are larger and therefore even faster-sinking than those produced by the modelled plankton. We have therefore opted to follow the approach of the model pioneers and assume that the predation-related mortality represented by our quadratic term is instantly exported and thereby entirely lost from the surface mixed layer of the model. As with phytoplankton, zooplankton are subject to changes in concentration via mixing and changes in MLD.
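The resulting partitioning of the two mortality terms between fates can be sketched as follows (Python illustration; coefficient names are ours):

```python
def zoo_mortality_fluxes(Z, m_Z, m_Z2):
    """Zooplankton mortality and its fate: the linear (natural mortality)
    term is routed to detritus, whereas the quadratic (higher-predator)
    term is treated as instantly exported from the mixed layer."""
    to_detritus = m_Z * Z         # linear: natural mortality -> detritus
    exported = m_Z2 * Z**2        # quadratic: predation -> instant export
    return to_detritus, exported
```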

The equation for the rate of change of dissolved inorganic nitrogen (DIN)
density is

DIN is taken up by phytoplankton (first term) and, via the food web, regenerated with the second and third terms in Eq. (14) representing excretion by zooplankton and remineralisation of detritus respectively. The fourth term represents the net transport due to mixing (i.e. supply by the deep water and loss from the surface layer). The last term represents the net effect of volume changes, i.e. increases in DIN density due to supply of deep water nutrients through entrainment and decreases in DIN density due to volume increases associated with entrainment.
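The physical source/sink terms just described can be sketched as follows (a Python illustration consistent with the slab formulation of Evans and Parslow (1985); symbol names are ours):

```python
def din_physical_terms(N, N0, M, dMdt, m_mix):
    """Physical terms for mixed-layer DIN in a slab model: turbulent
    exchange across the thermocline plus entrainment dilution. Only a
    deepening mixed layer (dMdt > 0) entrains deep water; shoaling
    detrains water without changing concentrations."""
    h_plus = max(dMdt, 0.0)                  # entrainment rate (m per time)
    mixing = (m_mix / M) * (N0 - N)          # exchange with deep nutrient N0
    entrainment = (h_plus / M) * (N0 - N)    # volume change during deepening
    return mixing + entrainment
```

Because surface DIN is usually lower than the deep concentration N0, both terms act as a nutrient source during mixed layer deepening, while during shoaling only the background mixing term operates.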

Finally, the detritus equation is

Detritus is produced by phytoplankton mortality, zooplankton natural
mortality (linear term) and as zooplankton egestion (faecal pellet
production). It is lost by zooplankton grazing and is also remineralised at
a constant rate,

The first results Sections (4.1, 4.2) are devoted to parameterising the model, in the first instance, for station BIOTRANS and a detailed description of values assigned to model parameters is provided therein.

We have chosen to code our model in the R programming language which can be
readily downloaded for free over the Internet. Input and output files are in
ASCII text (.txt) format, avoiding the use of proprietary software. The
structure of the code is designed to be transparent, where possible using
conventional syntax common to different programming languages such as the
use of loops and block IF statements. Throughout, we have endeavoured to follow
what we consider to be best practice in developing the code, which includes the following.

Creation of a fixed segment of core code that handles the numerical integration, as well as writing to output files. Being fixed, this segment does not require alteration in the event of changes to the ecosystem model formulation, nor indeed if an entirely new ecosystem model is implemented.

The ecosystem model formulation, i.e. the specification of the terms in the differential equations and calculation of their rates of change, is handled by a function (FNget_flux) that is external to the core code.

The specification of parameter values and run characteristics (e.g. time step, run duration, as well as flags for choices between different formats for export to output files, choice of ocean location and for different parameterisations of key processes) is via text files that are read in at the start of each simulation. Thus, there is no need to enter or alter the model code when changing parameter values or other model settings.

When a model run finishes, the summed annual fluxes associated with each term in the differential equations are displayed on the computer screen, along with a report as to whether mass balance is achieved for each state variable (over the last year of simulation). Basic checking of mass balance is useful for ensuring that the model equations are error-free.

Regimented layout for clarity with extensive commenting throughout.
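The text-file approach to parameter specification can be sketched in a few lines; the following Python illustration (file name and format are hypothetical, not the actual EMPOWER input format) reads simple "name value" pairs:

```python
def read_params(path):
    """Read 'name value' pairs from a plain-text parameter file,
    ignoring blank lines and '#' comments (format illustrative)."""
    params = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()   # strip comments
            if not line:
                continue
            name, value = line.split()[:2]
            params[name] = float(value)
    return params

# Write a tiny demo parameter file, then read it back.
with open("NPZD_parms_demo.txt", "w") as f:
    f.write("Vp 2.5    # maximum photosynthetic rate\n")
    f.write("kN 0.5\n")

params = read_params("NPZD_parms_demo.txt")
```

Keeping parameters in plain text in this way means runs can be reconfigured, and input files archived alongside output, without touching the model code.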

The R programming language is supported by various libraries that can be
accessed via the Internet. One such library is for solving ordinary
differential equations (Soetaert et al., 2010). Using this library has the
advantage of minimising the length of the code and offers flexibility in
terms of a range of numerical methods. On the other hand, its implementation
requires that various conventions are adhered to and these can be
restrictive when it comes to producing ancillary code, e.g. the formatting
and export of output files. As such, we opted to code the numerical solution
of the ordinary differential equations (ODEs) manually within the core code of the model for the following reasons.

It offers full transparency for the interested user who wishes to see the method of integration.

The use of manual code makes it considerably easier to export chosen variables and fluxes to output files in desired formats and frequencies.

In our case, the user is given the choice between two integration methods: Euler and fourth-order Runge–Kutta (RK4). These methods, particularly the latter, are entirely sufficient for the numerical task at hand and the coding of them is straightforward.

By using elementary syntax, the code can be easily altered or converted to other programming languages.

The code is stand-alone and not subject to reformulation in the event of future changes in subroutine libraries.
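Both integration schemes are elementary; for illustration, generic single-step versions in Python (the EMPOWER implementation itself is in R):

```python
def euler_step(dydt, t, y, dt):
    """One forward-Euler step for a system of ODEs dy/dt = f(t, y)."""
    k1 = dydt(t, y)
    return [yi + dt * ki for yi, ki in zip(y, k1)]

def rk4_step(dydt, t, y, dt):
    """One classical fourth-order Runge-Kutta step: four slope
    evaluations combined with weights 1, 2, 2, 1."""
    k1 = dydt(t, y)
    k2 = dydt(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = dydt(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = dydt(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

Here dydt takes the current time and a list of state values and returns the corresponding derivatives; for a fixed time step, RK4 costs four derivative evaluations per step but is far more accurate than Euler.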

Structure of the model code.

The structure of the code is shown in Fig. 8. The functions come first,
appearing prior to the core code in R. The key function call is
FNget_flux which contains the ecosystem model specification
(Sect. 3.2). The rate of change is calculated for each term in the
differential equations and allocated to a 2-D array (flux no., state
variable no.) which is then passed back to the core (permanent) code for
processing. Other functions are FNdaylcalc (calculates length of day; Eq. A7), FNnoonparcalc (noon irradiance, PAR; Eq. A5), FNLIcalcNum (undertakes
numerical (over time) calculation of daily depth-integrated photosynthesis),
FNLIcalcEP85 (calculates

Model setup comes next. Parameter values are read in from file
NPZD_parms.txt. Simulation characteristics are then read in
from file NPZDextra.txt. These include

initial values for state variables (

run duration (years) and time step;

choice of station: BIOTRANS, India, Papa, KERFIX;

choice of photosynthesis calculation: numeric (default), Evans and Parslow (1985) or Anderson (1993);

choice of integration method: Euler or RK4;

choice of output characteristics: none, last year only or whole simulation, and a frequency of once per day or every time step.

Model forcing for the chosen station of interest is then assigned. Monthly
values of MLD and sea surface temperature are read in and subject to linear interpolation in
order to derive daily forcing. Other forcing variables are also set:
latitude, deep nitrate (
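The monthly-to-daily interpolation of forcing can be sketched with base R's approx() function (the values below are hypothetical; the actual station data are read from file):

```r
# Illustrative sketch (hypothetical values) of deriving daily forcing
# from monthly climatological values by linear interpolation, as done
# for MLD and SST. Monthly values are placed at mid-month days and
# wrapped so the interpolation is continuous across year boundaries.
mld_monthly <- c(300, 280, 200, 100, 50, 30, 25, 30, 60, 120, 200, 280)  # MLD (m), hypothetical
midmonth    <- c(15, 46, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349)

# Wrap December before January and January after December for continuity
days_wrapped <- c(midmonth[12] - 365, midmonth, midmonth[1] + 365)
mld_wrapped  <- c(mld_monthly[12], mld_monthly, mld_monthly[1])

mld_daily <- approx(days_wrapped, mld_wrapped, xout = 1:365)$y
```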

An advantage of this structure is that an initial section of customisable code is followed by a section of permanent code that does not require adjustment in the event of changes to the equations that describe the ecosystem model, or indeed if a completely new ecosystem model is to be used. This code sets up a series of matrices to store fluxes and outputs and then integrates the model equations over time. State variables are updated and results exported to three output files: out_statevars.txt (state variables), out_aux.txt (chosen auxiliary variables) and out_fluxes.txt (all the terms in the differential equations). These text files are readily imported to, for example, Microsoft Excel.

Results are plotted graphically on the computer screen at the completion of each simulation run. The graph plotting code is necessarily model specific and needs to be updated by the user as required. R is a user-friendly programming language in this regard and the code provided should be sufficient for the user to incorporate extra variables with ease.

Finally, a user guide is provided in Appendix D, outlining how to set up R and run the code, summarising the input and output files, and offering guidance on considerations when altering the ecosystem code and/or forcing.

Model results are presented in four sections. First, a simulation is shown for station BIOTRANS using parameters taken from the literature (Sect. 4.1). This station is chosen as our primary focus, inspired by the North Atlantic Bloom Experiment in 1989 as part of JGOFS (the Joint Global Ocean Flux Study; e.g. Ducklow and Harris, 1993; Lochte et al., 1993). It exhibits the characteristic spring blooming of phytoplankton of temperate latitudes, followed by relatively oligotrophic conditions over summer, and has been the subject of previous work using slab models (Fasham and Evans, 1995). Parameter tuning is then undertaken to fit all four ocean time series stations, BIOTRANS, India, Papa and KERFIX, to data for chlorophyll and nitrate at each site (Sect. 4.2). Moving on from the calibration of parameters, structural sensitivity analysis is then carried out by examining model sensitivity to equations for the calculation of daily depth-integrated photosynthesis (Sect. 4.3) and mortality terms for phytoplankton and zooplankton (Sect. 4.4).

The model is compared to seasonal data for chlorophyll and nitrate within the mixed layer, for each station. Nitrate data are climatological, from World Ocean Atlas 2009 (Garcia et al., 2010), as is the model forcing in terms of mixed layer depths and irradiance. Regarding chlorophyll, data are SeaWiFS (Sea-Viewing Wide Field-of-View Sensor) 8-day averages (O'Reilly et al., 1998), for which we had access to years 1998–2013. Averaging data across years to provide a climatological seasonal cycle of chlorophyll is not meaningful as key features, such as the spring phytoplankton bloom, are smoothed out because the bloom timing is variable between years. A characteristic year was therefore chosen for each station by firstly converting the data to log(chlorophyll), then calculating mean log(chlorophyll) for each year and finally selecting the median year (an odd number of years is required, so we used 1998–2012). The resulting year selections were 2002, 1998, 2007 and 2006 for stations BIOTRANS, India, Papa and KERFIX respectively. The entire data sets are shown with the multiple years overlaid in Fig. 9, with data for the selected median year highlighted.
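The median-year selection can be sketched as follows, using synthetic data in place of the SeaWiFS time series:

```r
# Sketch of the median-year selection procedure: for each year, mean
# log(chlorophyll) is computed and the year whose mean is the median of
# those means is selected. Synthetic 8-day chlorophyll values stand in
# for the real SeaWiFS data (1998-2012, an odd number of years).
set.seed(1)
years <- 1998:2012
chl <- lapply(years, function(y) exp(rnorm(46, mean = -0.5, sd = 0.3)))

mean_log_chl <- sapply(chl, function(x) mean(log(x)))
median_year  <- years[which(mean_log_chl == median(mean_log_chl))]
```

With an odd number of years, median() returns one of the computed means exactly, so the equality test identifies a single characteristic year.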

SeaWiFS chlorophyll data (mg m

It is not our objective here to provide a thorough quantitative assessment of model–data misfit for the different simulations but, rather, to demonstrate the utility of EMPOWER as a testbed for model evaluation. Different ecosystem models and associated data sets will necessarily require different skill metrics and so a lengthy description and use of quantitative metrics is not appropriate here. In any case, visual inspection of model–data misfit is often sufficient, as it is here, to determine the best options for model formulation/parameterisation. If quantitative methods are required, these are readily accessed from the literature (e.g. Lewis et al., 2006; Lewis and Allen, 2009).

Adjustment of parameters is a perennial problem for modellers. Parameters can be set from the literature, sometimes directly on the basis of observation and experiment, but the usual starting point is to take values from previously published modelling studies. Almost inevitably, however, the resulting simulations will show mismatch with data and parameters are usually selected for adjustment (tuning) to improve the agreement with data. One option is to use objective tuning methods, such as the genetic algorithm or adjoint method in which many or all of the model parameters are varied simultaneously in order to try and find a best fit solution to data (e.g. Friedrichs et al., 2007; Record et al., 2010; Ward et al., 2010; Xiao and Friedrichs, 2014). The advantage is objectivity, but difficulties include sloppy parameter sensitivities (parameters compensate for each other), different values of model parameters may be similarly consistent with the data (the problem of identifiability), exploration of a huge parameter space may be required and local minima in misfit parameter space can make it difficult to find the true global minimum (Slezak et al., 2010). It is usually the case that models are underdetermined by data anyway (Ward et al., 2010), i.e. there are insufficient data (in terms of absolute amount and/or different types of data) to adequately constrain parameter values. And of course, objective methods require expertise, time and computing resources.

Modellers more often than not carry out parameter adjustment by varying values of chosen parameters one at a time until satisfactory convergence with data is achieved. The skill is in deciding which parameters to vary. In principle, sensitivity analysis can be of help in this regard in that sensitive parameters can be identified and selected for adjustment if they can be justifiably altered (i.e. there is uncertainty regarding their value). Here, we will demonstrate the use of EMPOWER for model calibration. Parameter sets will be derived for the four stations, BIOTRANS and India in the North Atlantic and the HNLC stations Papa (subarctic North Pacific) and KERFIX (Southern Ocean). The ecosystem model we have presented uses the NPZD structure in combination with up-to-date formulations for key processes such as photosynthesis, grazing and mortality. As such, it has not been previously published and so there is no readily available complete set of parameter values to draw upon. Using our experience, we chose appropriate parameter values from the literature and adjusted others to give a good fit with the data for station BIOTRANS. This result is presented below along with a discussion of how we went about achieving this parameter set. Working from this parameter set, tuning of parameters is then undertaken to fit the other stations to the data.

Model parameters. Fitted model solutions for stations
BIOTRANS, India, Papa and KERFIX. The initial (unfitted) parameter guesses
for BIOTRANS were as for the fitted solution, except that parameters

Source:

Station BIOTRANS was previously modelled by Fasham and Evans (1995) and we used this publication as a starting point for the assignment of some of the parameter values (note that we opted for the second of two optimisation solutions in this reference). Other parameters were otherwise assigned values from the literature where possible and/or selected as a best guess. The resulting parameter set, along with adjusted (tuned) values (see below), is shown in Table 3.

Photosynthetic parameters,

Zooplankton parameters

Detritus is composed of a range of sinking material including faecal pellets
and marine snow, with sinking rates of between 5 and several hundred m d⁻¹.

Choices have to be made regarding the settings for calculating daily
depth-integrated photosynthesis. A sinusoidal pattern of daily irradiance
was set as default for this purpose, with a numeric integration over time of
day. A Smith function was chosen as the

The model was run for 5 years, by which time it had settled into a repeating
annual cycle of plankton dynamics. The last year of simulation for station
BIOTRANS, with initial parameter settings as described above, is compared to
data for chlorophyll and nitrate in Fig. 10. Nitrate (model DIN) is
predicted remarkably well using these default parameter settings, whereas
the predicted seasonal cycle of chlorophyll shows a poorer match with
data. The peak of the spring bloom is more than double that observed and
post-bloom chlorophyll is also consistently elevated (by approximately 0.2 mg m⁻³).

Simulation for station BIOTRANS using first-guess parameters
compared to data (year 2002) for

Many modellers go about parameter adjustment on a trial-and-error basis,
making ad hoc changes to parameters and observing the outcome. A more structured
way of going about this is to undertake a systematic sensitivity analysis of
parameters and then, informed by this analysis, choose which parameters to
vary. We use EMPOWER to demonstrate this practice here. Three variables were
selected as simple measures of model mismatch with data: minimum DIN
encountered during the seasonal cycle,
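A one-at-a-time sensitivity scan of this kind can be sketched as follows (the model and parameter names are hypothetical stand-ins; in practice run_model would wrap a full annual simulation and return a scalar misfit measure):

```r
# Schematic one-at-a-time parameter sensitivity analysis: each parameter
# is perturbed by +/-10% in turn and the change in a scalar misfit
# measure is recorded. The quadratic 'run_model' below is a stand-in for
# a full model run; 'gmax' is deliberately inert here.
run_model <- function(p) (p[["mu"]] - 1)^2 + 2 * (p[["kN"]] - 0.5)^2

p0   <- c(mu = 1.2, kN = 0.8, gmax = 1.0)   # hypothetical baseline values
base <- run_model(p0)

sensitivity <- sapply(names(p0), function(nm) {
  up <- p0; up[nm] <- up[nm] * 1.1
  dn <- p0; dn[nm] <- dn[nm] * 0.9
  c(plus10 = run_model(up) - base, minus10 = run_model(dn) - base)
})
```

The resulting table (rows: perturbation direction; columns: parameters) highlights which parameters the misfit responds to, guiding the choice of which to tune.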

Model sensitivity analysis: station BIOTRANS. Variables
are chl

The requirement for improving the model fit is to decrease chl

Simulation for station BIOTRANS after parameter tuning (see
text):

The associated seasonal cycles of

Predicted state variables and fluxes for the station BIOTRANS
simulation:

It might be expected that station India is simulated accurately with the
same parameter values as those of station BIOTRANS because of their
relatively close proximity in the northern North Atlantic Ocean. In fact,
the predicted spring bloom is rather high, approximately double the maximum
in the observations for year 1998 (Fig. 13), although not outwith what is
seen in the multi-year data (Fig. 9). An improved fit is easily achieved by
setting

Simulations for station India:

The two HNLC stations can be expected to require alternative
parameterisations to the two North Atlantic stations because of their
different food web structure. In contrast to the diatom spring bloom in the
northern North Atlantic, iron-limited HNLC systems favour small
phytoplankton which are tightly coupled to microzooplankton grazers (Landry
et al., 1997, 2011), “grazer controlled phytoplankton populations in an
iron-limited ecosystem” (Price et al., 1994). Low growth rate of
phytoplankton may be expected relative to the North Atlantic because of iron
limitation. Parameters

Simulations for station Papa before and after parameter tuning:

Structural sensitivity analysis is performed to assess model sensitivity to
the different assumptions for calculating daily depth-integrated
photosynthesis. The best-fit simulation for station BIOTRANS presented above
(Fig. 11) is used as the baseline for comparison, although we will comment
on sensitivity for other stations also. Default settings in the baseline
simulation were a numerical time integration (over the day), a Smith
function for the

Simulations for station KERFIX before and after parameter tuning
(see text for details):

Simulations for station BIOTRANS showing sensitivity to choice of

Simulations for station BIOTRANS showing sensitivity to choice of
diel variation in irradiance:

The first sensitivity test involved changing the

Reverting to the Smith function as the chosen

Model sensitivity of predicted primary production to the equations
describing light attenuation in the water column was previously highlighted
by Anderson (1993), although without extending to analysis using full
ecosystem models. Model predictions for the two choices for light
attenuation (simple Beer's law, Eq. 9, versus piecewise Beer's, Eq. 10) are
shown in Fig. 18, for all four stations. Whereas chlorophyll shows little
change when switching between the two routines, predicted NO
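The contrast between the two attenuation schemes can be illustrated as follows (the coefficient values are hypothetical placeholders, not those of Eqs. 9 and 10):

```r
# Illustrative contrast between simple Beer's law attenuation and a
# piecewise scheme in which the attenuation coefficient differs between
# depth layers. Both return irradiance at depth z (m) for surface
# irradiance I0; coefficients here are hypothetical.
beer <- function(I0, z, k = 0.1) I0 * exp(-k * z)

beer_piecewise <- function(I0, z, k = c(0.14, 0.10, 0.08),
                           bounds = c(5, 23)) {
  # thickness of the water column above z falling in each of three layers
  h1 <- min(z, bounds[1])
  h2 <- min(max(z - bounds[1], 0), bounds[2] - bounds[1])
  h3 <- max(z - bounds[2], 0)
  I0 * exp(-(k[1] * h1 + k[2] * h2 + k[3] * h3))
}
```

The piecewise form reduces to simple Beer's law within each layer but allows near-surface attenuation to differ from that at depth.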

Model simulations for all four stations showing sensitivity to
choice of method for calculating light attenuation in the water column:

Finally, there is the option to use the routines of Evans and Parslow (1985)
and Anderson (1993) to calculate daily depth-integrated photosynthesis,
without recourse to using numerical integration over time. Evans and Parslow
used a Smith function for photosynthesis in combination with a triangular
pattern of daily irradiance. This corresponds exactly to the simulation in
Fig. 17 for triangular irradiance. Thus, running the model using the
Evans and Parslow equations (Appendix C) produces a result indistinguishable
from the numerical simulation. Matters are not so simple when using the
Anderson (1993) equations to calculate daily depth-integrated
photosynthesis. The assumptions here are an exponential

Light attenuation as predicted by Evans and Parslow
(1985; EP85) and for the three layers (0–5, 5–23,

Simulations for all four stations comparing methods for
calculating daily depth-integrated photosynthesis, standard run (numeric
integration) and the algorithm of Anderson (1993) which is an empirical
approximation of a full spectral model:

Simulations for all four stations showing model sensitivity to
phytoplankton mortality. Parameters

The model includes two mortality terms, linear and quadratic, for each of
phytoplankton and zooplankton. This approach has previously been used in
other models (e.g. Yool et al., 2011, 2013a), giving maximum flexibility.
The obvious question is whether all four terms are actually needed. As a
simple structural sensitivity analysis, we removed each of the four
mortality terms in turn and show the impact on the predicted seasonal cycles
of chlorophyll and nitrate for all four stations. The model is relatively
insensitive to the phytoplankton mortality terms although setting
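The combined mortality formulation can be sketched as follows (parameter names and values are hypothetical):

```r
# Sketch of the combined linear plus quadratic mortality term. Setting
# either coefficient to zero removes that term, which is how the
# structural sensitivity tests described above were performed.
mortality <- function(B, m_lin, m_quad) m_lin * B + m_quad * B^2

P <- 0.5                                        # biomass (mmol N m-3), hypothetical
loss_full    <- mortality(P, m_lin = 0.05, m_quad = 0.1)
loss_no_quad <- mortality(P, m_lin = 0.05, m_quad = 0)
```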

Simulations for all four stations showing model sensitivity for
zooplankton mortality. Parameters

In contrast to the phytoplankton results, removing the linear zooplankton
mortality term had relatively little impact on model predictions, whereas
removal of the quadratic term did, for all four stations (Fig. 22). Removal
of quadratic mortality resulted in phytoplankton levels decreasing by as
much as 50 %, which is unsurprising since more zooplankton means more
grazing. Perhaps less obvious is the result that removal of quadratic
closure resulted in similarly large changes in predicted post-bloom nitrate
levels. Predation-related losses, the quadratic term, were assumed to be
instantly exported and thereby lost from the surface mixed layer of the
model. Thus, when these losses are set to zero (parameter

Marine ecosystem modelling is something of a black art regarding decisions about what state variables to include and how to mathematically represent key processes such as photosynthesis, grazing and mortality, as well as allocating suitable parameter values. The proliferation of complexity in models has only added to the plethora of formulations and parameterisations available to choose from. The complex ecosystem models that have come to the fore in recent years include, for example, any number of plankton functional types, multiple nutrients, dissolved organic matter and bacteria, etc. (e.g. Blackford et al., 2004; Moore et al., 2004; Le Quéré et al., 2005). Simulations are often carried out within computationally demanding 3-D general circulation models (GCMs) and, of course, the realism in ocean physics thus gained is to be welcomed. The caveat is, however, that improvements in prediction can only be achieved if the biological processes of interest can be realistically characterised (Anderson, 2005). The key is, as described above, to undertake extensive analysis of ecosystem model performance and we propose that the use of a simple slab physical framework of the type used in EMPOWER is ideal in this regard. The pioneers of the field such as Riley, Steele and Fasham employed slab physics to test their models, trying out different formulations and parameterisations, just to see what would happen (Anderson and Gentleman, 2012). The simplicity afforded by using a zero-dimensional slab physics framework provides an ideal playground for familiarisation with ecosystem models, allowing for a multiplicity of runs and ease of analysis. It is by following this approach that the user develops an intuitive understanding of the complex non-linear interdependencies of the model equations, a precursor to making predictions with confidence.

Here, we have presented an efficient plankton modelling testbed, EMPOWER-1.0, coded in the freely available language R. It provides a readily available and easy-to-use tool for thoroughly evaluating ecosystem model structure, formulations and parameterisations by coupling the ecosystem dynamics to a simplified representation of the physical environment. EMPOWER has several advantages in that it is fast, easy to run, its results are straightforward to analyse and, last but by no means least, the code is transparent and easily adapted to incorporate new formulations and parameterisations. As such, the main purpose of EMPOWER is to provide an ecosystem model testbed that allows users to fully familiarise themselves with their models, which can then be incorporated with greater confidence into 1-D or 3-D models, as required. It may be that some amount of reparameterisation is required when transferring the model ecosystem between physical codes (from slab to 1-D or 3-D), but this ought usually to be minimal in extent and will itself be greatly informed by the previous slab modelling work. This approach is far better than starting from scratch with computationally expensive and time-consuming 1-D or 3-D codes to undertake ecosystem model parameterisation.

Bearing in mind Steele's two-layer sea, the first slab model of its kind (Sect. 2), it is worth noting that simple ocean box models are akin to slab models in terms of physical structure but, whereas slab models are usually set up for point locations in the ocean, box models represent spatial areas (e.g. ocean basins or the global ocean). A mixed layer or euphotic zone is positioned above a deep ocean layer, with mixing between the two but usually without a seasonally changing mixed layer depth. Tyrrell (1999), for example, used a global ocean box model to study the relative influences of nitrogen and phosphorus on oceanic primary production. Box models were likewise used by Chuck et al. (2005) to study the ocean response to atmospheric carbon emissions over the 21st century. Slab models, including EMPOWER, effectively convert to simple box models if the seasonality of mixed layer depth is switched off. Without a seasonally varying MLD, box models have limited capacity to capture seasonal plankton dynamics because of the role played by MLD in mediating the light and nutrient environment experienced by phytoplankton. Our results (Figs. 18–20) demonstrate sensitivity to accurate representation of the submarine light field (i.e. equations describing light attenuation in the water column).

In order to demonstrate the utility of EMPOWER, we carried out both a
parameter tuning exercise and a structural sensitivity analysis, the latter
examining the equations for calculating daily depth-integrated
photosynthesis and mortality terms for both phytoplankton and zooplankton.
In the parameter tuning exercise, a simple NPZD model, broadly based on the
ecosystem model of Fasham and Evans (1995), was fitted to data (seasonal
cycles) for chlorophyll and nitrate at four stations: BIOTRANS
(47

Our parameterisation of the different stations highlighted the somewhat ad hoc process that most modellers go through when assigning parameter values. Some parameters were set directly from the results of observation and experiment. More often than not, however, we followed the “path of least resistance” when assigning parameters, namely to simply select values from previously published modelling studies. Equations for processes such as photosynthesis, grazing and mortality were likewise selected “off the shelf” from the published literature. Previous publication does not, of course, guarantee that equations or parameter values are necessarily best suited for a particular modelling application. Moreover, it is all too easy for less than ideal, even dysfunctional, formulations to become entrenched within the discipline and used in common practice (Anderson and Mitra, 2010). Parameter tuning is almost inevitable in order to ensure satisfactory agreement with data and we have shown how rigorous sensitivity analysis can help in this regard. Of course, even with a table of parameter sensitivities, there is still a considerable subjective element to choosing which parameters to adjust. The most sensitive parameters should be selected, but the degree of uncertainty in parameter values is an additional consideration. It is no good tuning a sensitive parameter if its value is already well known from observation and experiment.

A necessary complement to ensuring that models show acceptable agreement
with data is to remember that the theories and
assumptions underlying the conceptual description of models must be correct or,
at least, not

When it comes to biogeochemical modelling studies in GCMs, it is possible
that all manner of different methods are used to calculate light attenuation
in the water column and resulting photosynthesis. Methodologies are often
not reported in full within published texts, the assumption being that they
are in some way routine and straightforward and that, perhaps, the models
are insensitive to this choice. Consider, for example, the MEDUSA-2.0 (Model of Ecosystem Dynamics, nutrient Utilisation, Sequestration and Acidification) model
(Yool et al., 2013a), published within

As a point of interest, we ran our model for all four stations again, this
time using the MEDUSA-2.0 method of light attenuation and a Smith function
for the

We also used EMPOWER to undertake an analysis of model sensitivity to the presence/absence of linear and non-linear mortality terms for phytoplankton and zooplankton. Whereas the use of linear phytoplankton mortality terms is commonplace in models (e.g. Anderson and Williams, 1998; Oschlies and Schartau, 2005; Salihoglu et al., 2008; Llebot et al., 2010), we investigated the performance of an additional quadratic phytoplankton mortality term. This term is intended to represent loss processes that scale with phytoplankton biomass that are not already accounted for in the model. Given that both self-shading and grazing are explicitly modelled, we considered the quadratic term to represent mortality due to viruses. Model results were however relatively insensitive to this parameterisation, although the potential importance of viruses in marine systems should not be underestimated (Bratbak, 1993, 1996; Danovaro et al., 2011).

It has long been recognised that the parameterisation and functional form of zooplankton mortality, the model closure term, can have a pronounced effect on modelled ecosystem dynamics (e.g. Steele and Henderson, 1981, 1992, 1995; Murray and Parslow, 1999; Edwards and Yool, 2000; Fulton et al., 2003a, b; Neubert et al., 2004). Quadratic closure is a common choice, although other non-linear functional forms are also in use. While it is commonly stated that quadratic closure is dynamically stabilising, i.e. it prevents both blooms and extinction of prey, there is a limit to this influence (Edwards and Yool, 2000) since other processes can come into play. In our case, it is obvious that quadratic closure had a stabilising effect on the model. Its removal caused the bloom peak to be higher and also post-bloom phytoplankton levels to decline to near zero.

In contrast to the community's broad recognition of the potential
sensitivity to choice of closure scheme, far less attention has been paid to
model sensitivity regarding the fate of zooplankton mortality. In reality,
there are likely various types of zooplankton mortality including grazing by
higher predators, starvation and disease. As a mathematical closure term,
one can consider the grazing loss to be partitioned between an infinite
series of higher predators (e.g. Fasham et al., 1990), with partitioning
between detritus and dissolved nutrients in both organic and inorganic form.
The fate of these losses will occur with time delays and potentially also
with spatial separation due to migration of predators. Moreover, any
detrital production by higher predators would comprise significantly larger
“particles” than those due to plankton death and would therefore be
associated with much higher sinking rates. Non-grazing mortality might lead
to production of detritus in situ. There is no consensus on best practice, even
though approaches to the partitioning of zooplankton losses
between detritus, nutrient and DOM differ markedly between models and can
have a significant effect on modelled ecosystem function (Anderson et al.,
2013). Future structural sensitivity studies should be conducted to explore
how the

Model sensitivity to choice of functional forms and parameterisation, often manifested as surprising and unforeseen emergent predictions, is classic complexity science (Bar-Yam, 1997). Understanding emergence and the consequences for accuracy of prediction is a key component of modelling complex systems (Anderson, 2005). Results here, as discussed above, showed varying sensitivities to different formulations and assumptions and demonstrated the utility of EMPOWER in tackling this important topic. High sensitivities have previously been documented in marine ecosystem models, e.g. to the exact form of the zooplankton functional response (Anderson, 2010; Wollrab and Diehl, 2015) and choice of zooplankton trophic transfer formulation (Anderson et al., 2013). Other studies have also shown "alarming" sensitivity to apparently small changes in the specification of biological models (e.g. Wood and Thomas, 1999; Fussmann and Blasius, 2005). Anderson (2005) described this insidious problem, namely sensitivity of emergent outcomes to interacting non-linear differential equations, as "all in the interactions". Dealing with it poses an ongoing challenge for the modelling community.

EMPOWER-1.0 is provided as a testbed which is suitable for examining the performance of any chosen marine ecosystem model, simple or complex. We chose to demonstrate its use by incorporating a simple NPZD ecosystem model. Simple marine ecosystem models are, however, all too often brushed aside in marine science today. While our objective here is not to delve deeply into the ongoing debate about complexity in models (e.g. Fulton et al., 2004; Anderson, 2005; Friedrichs et al., 2007; Ward et al., 2010), we would nevertheless like to comment on the worth of simple ecosystem models. Complex ecosystem models are often favoured today (e.g. Blackford et al., 2004; Moore et al., 2004; Le Quéré et al., 2005) with a similar trend in ocean physics toward large, computationally demanding models. Many publications in recent years have involved the use of 3-D models (e.g. Le Quéré et al., 2005; Wiggert et al., 2006; Follows et al., 2007; Hashioka et al., 2013; Yool et al., 2013b; Vallina et al., 2014), although 1-D models are also well represented (e.g. Vallina et al., 2008; Kearney et al., 2012; Ward et al., 2013). The caveat is that improvements in prediction can only be achieved if the processes of interest can be adequately parameterised (Anderson, 2005). That is a big caveat and one made harder to achieve because it is often difficult and/or time consuming to thoroughly test the formulations and parameterisations involved. Simple NPZD-type models have a useful role in this regard. Albeit with tuning (but the complex models are tuned also), our NPZD model was successfully used to describe the seasonal cycles of phytoplankton and nutrients at four contrasting sites in the world ocean. It was readily applied to test different parameterisations for photosynthesis and mortality. At least in terms of basic bulk properties, simple models produce realistic predictions and are easy to thoroughly investigate and assess.
The whole issue of model complexity ought in any case to be question dependent (Anderson, 2010), e.g. simple models may be useful to address questions on biogeochemical cycles whereas more complex models may be necessary to answer more ecologically relevant questions such as the effect of biodiversity on ecosystem function. The use of the EMPOWER testbed allows the user to investigate and determine whether a particular ecosystem model is sufficiently complex, or indeed too complex, to address the question of interest.

We have described the utility of slab models as a testbed underpinning marine ecosystem modelling research. This is however by no means their only use. Slab models are ideal for teaching ecological modelling. They embrace the complex interplay between primary production and the physico-chemical environment, combined with top-down control by zooplankton. Students often have difficulty grasping the relative significance of causal effects in ecosystems (Grotzer and Basca, 2003), e.g. the relative roles of bottom-up versus top-down processes in structuring food webs. A certain amount of lecture material is of course needed, but there is no substitute for hands-on modelling, providing an interactive approach whereby students can actively investigate ideas and interact with each other and with a teacher (Knapp and D'Avanzo, 2010). Insight can be gained by getting students to try simple things like switching grazing off, doubling phytoplankton growth rates, etc. The slab modelling framework provided herein is ideal for this purpose. The code is transparent, modular and readily adjusted to include alternate parameterisations, it is easily set up for alternate ocean sites, the model runs fast with graphs of results appearing on the screen on completion, results are readily written to output files for more in-depth analysis and, by coding in R, the models can be accessed and run without the need to purchase proprietary software.

Finally, the great advances in marine ecology that the pioneers of plankton modelling achieved using slab models should not be forgotten. Riley, Steele and Fasham laid the foundations of today's marine ecosystem modelling using plankton models embedded within simple physics. Even in the modern arena, this use of simple physics cannot be dismissed as being too simple for practical application and there is no reason why further scientific advances cannot be made using slab models. Models are, fundamentally, all about simplifying reality.

Both the Evans and Parslow (1985) and Anderson (1993) subroutines for
calculating daily photosynthesis require noon irradiance and day length as
inputs. Where such data are available, they can be used to force the model,
as is done for temperature. More typically, however, light data are
unavailable and a light submodel must be used to prescribe the necessary
forcing. A climatological approach is then often taken, whereby these
inputs are specified using trigonometric/astronomical equations. This task
is not as straightforward as it might first appear. The basic equations are
presented in texts such as Brock (1981) and Iqbal (1983). Some adjustments
were provided by Shine (1984) and we use the equation for short-wave
irradiance at the ocean surface on a clear day published therein:

The cos θ term in this equation is the cosine of the solar zenith angle, which depends on latitude, the solar declination (time of year) and the hour angle (time of day).
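For reference, the standard astronomical relation for the solar zenith angle can be written as follows (our notation, which may differ from the paper's: φ is latitude, δ solar declination and h the hour angle, with h = 0 at local noon):

```latex
\cos\theta \;=\; \sin\phi\,\sin\delta \;+\; \cos\phi\,\cos\delta\,\cos h ,
\qquad
\cos\theta_{\mathrm{noon}} \;=\; \cos(\phi - \delta)
```

The second expression follows from the first by setting h = 0.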

The flux of photosynthetically active solar radiation just below the ocean
surface at noon is then derived from the clear-sky short-wave irradiance by applying corrections for cloud cover, for the fraction of short-wave radiation that is photosynthetically active, and for losses at the air–sea interface.

Day length is calculated as a function of latitude and time of year (via the solar declination) using standard astronomical equations.
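A commonly used form of this calculation is sketched below in our own notation (φ is latitude, δ solar declination, and ωs the sunset hour angle in radians):

```latex
\omega_s \;=\; \arccos\!\left(-\tan\phi\,\tan\delta\right),
\qquad
\tau \;=\; \frac{24}{\pi}\,\omega_s \;\;\text{hours}
```

Note that at high latitudes |tan φ tan δ| can exceed 1, corresponding to polar day (τ = 24 h) or polar night (τ = 0 h), and these cases must be trapped before taking the arccosine.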

The average photosynthesis within a layer of depth H is obtained by integrating the photosynthesis–irradiance response over depth and over the daylight period, and dividing by H. In the Evans and Parslow (1985) approach, this integral is evaluated analytically: after a change of variables, it is solved using a trigonometric transformation followed by integration by parts. In order to integrate Equation (B1) over depth, an exponential decline of irradiance with depth is assumed, with the integration over depth then following Platt et al. (1990). For practical purposes, we used a maximum permitted value for the irradiance term in this calculation.
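In generic notation (our symbols, not necessarily those of the original appendix), the quantity being evaluated is the daily, depth-averaged photosynthesis:

```latex
\bar{P} \;=\; \frac{1}{H}\int_{0}^{\tau}\!\!\int_{0}^{H} p\big(I(z,t)\big)\,\mathrm{d}z\,\mathrm{d}t ,
\qquad
I(z,t) \;=\; I_0(t)\,e^{-kz}
```

where p(I) is the photosynthesis–irradiance function, τ is day length, I0(t) is irradiance just below the surface and k is the attenuation coefficient for downwelling irradiance.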

Evans and Parslow (1985) provide an algorithm for calculating daily
depth-integrated photosynthesis under the assumptions of a Smith function for the photosynthesis–irradiance relationship and exponential attenuation of light with depth.
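As an illustrative sketch (the parameter names below are ours, not necessarily those used in the EMPOWER-1.0 code), the Smith function can be written in R as:

```r
# Smith function for the photosynthesis-irradiance relationship:
# photosynthesis rises linearly at low light (slope alpha) and
# saturates smoothly at Vp as irradiance I increases.
#   I:     irradiance (e.g. W m-2)
#   Vp:    maximum photosynthetic rate
#   alpha: initial slope of the P-I curve
smith_PI <- function(I, Vp, alpha) {
  alpha * I * Vp / sqrt(Vp^2 + (alpha * I)^2)
}

smith_PI(c(0, 10, 100, 1000), Vp = 1.4, alpha = 0.025)
# rises from zero and approaches Vp = 1.4 at high irradiance
```

The smooth saturation of this function is what permits the analytical depth and time integration exploited by Evans and Parslow (1985).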

The subroutine of Anderson (1993) was developed as an empirical
approximation to the spectrally resolved model of light attenuation and
photosynthesis of Morel (1988), used in combination with the polynomial
method of integrating daily photosynthesis of Platt et al. (1990). It is
based on an exponential decline of irradiance with depth, with the attenuation coefficient calculated as a polynomial function of chlorophyll concentration, evaluated separately over three depth ranges.

The subroutine of Anderson (1993) also takes account of the fact that, in
reality, the spectral composition of underwater light changes with depth, which affects the photosynthetic response of phytoplankton.

Ordinarily (e.g. Table 2), the initial slope of the photosynthesis–irradiance curve is specified as a single constant parameter. In the Anderson (1993) scheme, its effective value varies with depth to account for the changing spectral quality of light.

Coefficients for use in the Anderson (1993) calculation of photosynthesis.

Again using the piecewise three-layer scheme described above for light attenuation, depth-dependent coefficients are employed in the photosynthesis calculation. The coefficients are listed in the accompanying table.

Installation and setup. The R programming language is
free, open-source software and is readily downloaded from the Internet for use on personal
computers, for example from the Comprehensive R Archive Network (CRAN).

Running R. Open the R console. From the toolbar, select “File” and “Change dir ...” and select the directory in which the model code and input files have been placed. To run the model, type: source("EMPOWER1.R")

Preparation of input files. The model reads in three input files, each as ASCII text files.

File NPZD_parms.txt. This file includes a single line header and then lists the value of each model parameter in turn, followed by a text string for the purpose of annotation. When changing the parameter list in the model, the corresponding section in the R code must be altered accordingly.
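As a sketch of how a file of this layout might be parsed in R (the parameter values below are invented for illustration, and setting the annotation off with a '#' character is our assumption, not necessarily the convention of the distributed file):

```r
# Write a toy parameter file in the style described above: a one-line
# header, then one value per line followed by an annotation string.
parmfile <- tempfile()
writeLines(c(
  "parameter values for NPZD model (header line)",
  "0.05  # phytoplankton mortality rate (d-1)",
  "1.00  # maximum zooplankton grazing rate (d-1)"
), parmfile)

# Skip the header; comment.char strips the annotation, leaving the values
parms <- read.table(parmfile, skip = 1, comment.char = "#")[, 1]
parms
```

If the parameter list is changed, both the file and the corresponding read statements in the R code must be kept in step, as noted above.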

File NPZD_extra.txt. This file holds initial values for
state variables, additional parameters, and various flags: choice of
station, choices for the photosynthesis and light attenuation calculations, and switches controlling output.

File stations_forcing.txt. This file has a header line
for information and then holds monthly values of the forcing variables, in our case
mixed layer depth and temperature, for each station. There are 13
entries in each case, the first and last being the same and corresponding to
the beginning and end of the year. A 366-unit array is set up in the model
code for each forcing variable, with unit 1 corresponding to the start of the year; daily values are obtained by interpolating between the monthly entries.
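The interpolation onto a 366-unit array can be sketched as follows (the mixed layer depths are invented, and the even spacing of the 13 nodes across the year is our assumption):

```r
# 13 monthly values: Jan..Dec plus a repeat of the first value, so that
# the forcing is continuous across the year boundary
mld_monthly <- c(150, 120, 80, 40, 20, 15, 15, 20, 40, 80, 120, 150, 150)

# Place the 13 nodes across a 366-unit year and interpolate to daily values
nodes     <- seq(1, 366, length.out = 13)
mld_daily <- approx(nodes, mld_monthly, xout = 1:366)$y

length(mld_daily)   # 366: one value per unit, unit 1 = start of year
```

Because the first and last entries are identical, the interpolated forcing wraps smoothly from the end of one year to the start of the next.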

Output files. These are generated automatically by the model, on completion of each model simulation. The type of output generated is controlled by flags (above). The output files are ASCII, comma separated and do not have headers. They are readily imported into various software packages, e.g. Microsoft Excel, for further analysis. The files are the following.

File out_statevars.txt. Outputs the state variables, ordered as they are in array X in the code.

File out_fluxes.txt. Outputs the model fluxes, ordered
as they are in matrix flux in the code.

File out_aux.txt. This file stores the values of auxiliary variables, as defined by the user in array Y (final section of function FNget_flux). The maximum size of this array is set by variable nDvar.

Altering the model structure. If the user wants to change
the number of state variables, nDvar or nfluxmax (above), adjustments
should first be made to the short section of code “Variables specific to
model: adjust accordingly”. Alter nSvar, the initialisation of array X
(which holds the state variables) and the text arrays Svarname and Svarnames
(which are used for output). Then go to function FNget_flux
and rewrite the line of code unpacking the state variables. Finally, specify
the terms associated with the new state variable(s) in matrix flux.

Altering model equations. The model equations are handled in function FNget_flux and can be adjusted as desired by the user, calling additional functions as necessary.

Graphical output. The model automatically generates graphical output on the computer screen on completion of each simulation. An advantage of R is that the syntax for generating plots is straightforward and the user should have no problem, working from the plots provided, in generating extra graphs as desired.
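For example, a time series plot of the kind the model produces can be generated with base R graphics (the seasonal cycle below is invented purely for illustration):

```r
# Illustrative seasonal cycle of a state variable (invented data)
days <- 1:366
P    <- 0.5 + 0.4 * sin(2 * pi * (days - 80) / 366)

plot(days, P, type = "l",
     xlab = "Day of year",
     ylab = "Phytoplankton (mmol N m-3)")
```

The same few lines, pointed at the model's state variable arrays, suffice for most additional plots a user is likely to want.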

Light attenuation in the water column in the MEDUSA model (Yool et al.,
2011, 2013a) is calculated assuming that PAR at the ocean surface can be
divided equally into two wavebands, nominally red and green. The attenuation
of each is calculated through the water column using Beer's law. The average
light in a model layer is then calculated by summing the two wavebands; this average is used in combination with the photosynthesis–irradiance response to calculate primary production.
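A minimal sketch of this two-waveband scheme is given below; the attenuation coefficients and the function name are illustrative placeholders, not MEDUSA's actual values or code:

```r
# Depth-averaged PAR in a layer of thickness H (m), with surface PAR I0
# split equally into two wavebands that attenuate at different rates
avg_par_layer <- function(I0, H, k_red = 0.2, k_green = 0.05) {
  # Analytical depth average of Beer's law over the layer:
  # (1/H) * integral of Ib * exp(-k z) dz = Ib * (1 - exp(-k H)) / (k H)
  mean_band <- function(Ib, k) Ib * (1 - exp(-k * H)) / (k * H)
  mean_band(0.5 * I0, k_red) + mean_band(0.5 * I0, k_green)
}
```

Because the red band attenuates faster, the green band increasingly dominates the average with depth, mimicking the spectral shift discussed above.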

T. R. Anderson and A. Yool acknowledge support from the Natural Environment Research Council, UK, as part of the Integrated Marine Biogeochemical Modelling Network to Support UK Earth System Research (i-MarNet) project (grant ref. NE/K001345/1). W. C. Gentleman acknowledges support from the Natural Sciences and Engineering Research Council of Canada. We wish to thank two anonymous referees for their critique of the manuscript.

Edited by: A. Ridgwell