This document is intended as an informal source of additional information for the B-grid dynamical core. The primary documentation is still the "Atmospheric Dynamical Core User's Guide", the web-based module documentation, and the technical description on "Finite-differencing used by the B-grid dynamical core". Users who want to simply run the model "as is" should refer to the "User's Guide" or the "quickstart guide" which can be found on the FMS homepage.
When the source code is "checked out" of the CVS repository, a directory called atmos_bgrid is created. The atmos_bgrid directory contains the following sub-directories.
atmos_bgrid/
  |
  +---- documentation/   Contains this document and the technical description on
  |                      "Finite-differencing used by the B-grid dynamical core".
  |
  +---- driver/          Contains sub-directories with drivers for various types
  |                      of atmospheric models. There are currently directories
  |                      for "coupled" models (this includes AMIP-type models) and
  |                      "solo" models (for standalone dynamical cores running
  |                      the Held-Suarez benchmark).
  |
  +---- model/           Contains the dynamical core modules.
  |                      The modules in this directory can be considered a package
  |                      and are not intended for individual use.
  |
  +---- tools/           Contains tools specific to the dynamical core.
                         The modules in this directory may be useful in other
                         B-grid core applications such as pre- or post-processing.
This is a complete list of all B-grid core modules. Which driver modules are present depends on which type of model ("coupled" versus "solo") was checked out.
The B-grid core uses a split time differencing scheme with three time steps. The shortest time step, used for gravity waves, is called the adjustment time step. Advection and horizontal mixing terms are solved on the advection time step. The physical forcing, diagnostics, and the calculation of updated prognostic variables are done on the atmospheric time step. This figure depicts the relationship between these time steps.
The advection and adjustment time steps are specified as integer subdivisions of the atmospheric time step. The following namelist variables are used to set the time steps.
 &main_nml
     dt_atmos = 1800 /

 &bgrid_core_driver_nml
     num_adjust_dt = 3,
     num_advec_dt  = 3 /
For this example the atmospheric, advection, and adjustment time steps would be 1800, 600, and 200 seconds, respectively: the advection step is dt_atmos/num_advec_dt = 1800/3 = 600 s, and the adjustment step is the advection step divided by num_adjust_dt = 600/3 = 200 s. These are typical time steps used with a resolution of 144 longitudes by 90 latitudes. A model with this resolution is also referred to as an N45 model since there are 45 latitude rows between the equator and pole.
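As a cross-check, the relationship between the namelist counts and the resulting time steps can be computed directly. The short program below is an illustration only (not part of the B-grid core) and assumes the nesting described above: num_advec_dt advection steps per atmospheric step, and num_adjust_dt adjustment steps per advection step.

 program timestep_demo
   implicit none
   integer :: dt_atmos      = 1800   ! atmospheric time step (s), from main_nml
   integer :: num_advec_dt  = 3      ! advection steps per atmospheric step
   integer :: num_adjust_dt = 3      ! adjustment steps per advection step
   integer :: dt_advec, dt_adjust

   dt_advec  = dt_atmos / num_advec_dt     ! 1800/3 = 600 s
   dt_adjust = dt_advec / num_adjust_dt    !  600/3 = 200 s
   print '(a,i5,a,i5,a,i5)', 'dt_atmos =', dt_atmos, &
         '  dt_advec =', dt_advec, '  dt_adjust =', dt_adjust
 end program timestep_demo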
Here is a flow chart for the B-grid dynamical core.
The center boxes represent terms that are solved on the adjustment time step. Terms in the right column are applied on the advection time step. The left column is computed on the atmospheric time step.
The B-grid core makes extensive use of data structures (Fortran 90 derived types). The data structures group related data and greatly reduce the number of arguments passed between Fortran subroutines. Unless specifically stated, the data defined within a data structure is public (it may be directly accessed). The names of the data structures defined in the B-grid core all end with _type. All data structures are initialized by calling a constructor routine and may be terminated by calling a destructor routine (if one exists).
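To illustrate the pattern, here is a minimal sketch of a derived type with a constructor and destructor. The module and type names (demo_var_mod, demo_var_type) and their components are hypothetical, not actual B-grid core definitions.

 module demo_var_mod
   implicit none

   type demo_var_type
      integer :: nlon, nlat, nlev
      real, pointer :: ps(:,:)  => null()   ! a 2-d field (e.g., surface pressure)
      real, pointer :: t(:,:,:) => null()   ! a 3-d field (e.g., temperature)
   end type demo_var_type

 contains

   subroutine demo_var_init (Var, nlon, nlat, nlev)   ! constructor
      type(demo_var_type), intent(out) :: Var
      integer, intent(in) :: nlon, nlat, nlev
      Var%nlon = nlon;  Var%nlat = nlat;  Var%nlev = nlev
      allocate (Var%ps(nlon,nlat), Var%t(nlon,nlat,nlev))
   end subroutine demo_var_init

   subroutine demo_var_end (Var)                      ! destructor
      type(demo_var_type), intent(inout) :: Var
      deallocate (Var%ps, Var%t)
   end subroutine demo_var_end

 end module demo_var_mod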
Here is a summary of important data structures defined in the B-grid core.
Memory for the B-grid dynamical core is dynamically allocated at run time (as opposed to fixing the memory requirements prior to compilation). The memory required is determined from the model resolution in the initial condition restart file. When the restart file does not exist, the model will try to create a simple initial condition (a flow at rest). In this case, the model resolution is read from the cold start namelist (bgrid_cold_start_nml). The model fails if neither the B-grid restart file nor the cold start resolution has been provided.
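For example, a cold start at the resolution used earlier in this document might be requested with something like the following. The variable names nlon, nlat, and nlev are assumptions here; consult the bgrid_cold_start_nml namelist documentation for the exact names.

 &bgrid_cold_start_nml
     nlon = 144,  nlat = 90,  nlev = 20 /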
A limited number of input files are required when running the solo dynamical core. The following (input) files and directories must be present in the directory where you are running the model.
If you are running the full atmospheric model (with physics), a large number of input files will be required. These files will reside in the INPUT sub-directory.
Restart files may be written as NetCDF files or as NATIVE files (using unformatted Fortran writes). A namelist switch controls which output format is used; the default is NATIVE format. The switch is set with the following variable:
 &bgrid_core_driver_nml
     restart_output_format = 'netcdf' /

 &bgrid_core_driver_nml
     restart_output_format = 'native' /
When the NetCDF output option is used, it is recommended that a single fileset be used to write the restart files. This option is set with the following namelist variables:
 &fms_io_nml
     threading_write = 'single',
     fileset_write   = 'single' /
For input restart files, the model can read either the NetCDF or NATIVE format. The model will look for both file types; there is no need to set a namelist variable.
NetCDF restart files are written using simplified metadata. For example, indices are used for axis data and there are no meaningful attributes. All variables are defined with 4 dimensions (dim_1,dim_2,dim_3,time). A one-dimensional field only uses the first dimension (dim_1) and sets the size of the other dimensions to one. All variables are written as double (64-bit) precision.
For a model resolution of 144 longitudes x 90 latitudes x 20 levels, the following axes and variables might be defined in the restart file called RESTART/bgrid_prog_var.res.nc.
 dimensions:
         xaxis_1 = 21 ;
         xaxis_2 = 144 ;
         yaxis_1 = 1 ;
         yaxis_2 = 90 ;
         yaxis_3 = 89 ;
         zaxis_1 = 1 ;
         zaxis_2 = 20 ;
         Time = UNLIMITED ; // (1 currently)
 variables:
         double eta(Time, zaxis_1, yaxis_1, xaxis_1) ;
                 eta:long_name = "eta" ;
                 eta:units = "none" ;
         double peta(Time, zaxis_1, yaxis_1, xaxis_1) ;
                 peta:long_name = "peta" ;
                 peta:units = "none" ;
         double ps(Time, zaxis_1, yaxis_2, xaxis_2) ;
                 ps:long_name = "ps" ;
                 ps:units = "none" ;
         double pssl(Time, zaxis_1, yaxis_2, xaxis_2) ;
                 pssl:long_name = "pssl" ;
                 pssl:units = "none" ;
         double res(Time, zaxis_1, yaxis_2, xaxis_2) ;
                 res:long_name = "res" ;
                 res:units = "none" ;
         double fis(Time, zaxis_1, yaxis_2, xaxis_2) ;
                 fis:long_name = "fis" ;
                 fis:units = "none" ;
         double u(Time, zaxis_2, yaxis_3, xaxis_2) ;
                 u:long_name = "u" ;
                 u:units = "none" ;
         double v(Time, zaxis_2, yaxis_3, xaxis_2) ;
                 v:long_name = "v" ;
                 v:units = "none" ;
         double t(Time, zaxis_2, yaxis_2, xaxis_2) ;
                 t:long_name = "t" ;
                 t:units = "none" ;
Here is a brief definition of the variables, units, and array sizes for a model resolution of nlon by nlat by nlev.
For all model grids, array indexing increases from west to east, south to north, and top to bottom.
The global temperature grid is aligned so that grid box edges line up with the equator and the poles. For a global grid with 144 x 90 points, the west and south edges of the first grid box start at 0° (longitude) and 90°S. The last grid box has east and north edges at 0° (longitude) and 90°N. The following figure depicts key features of a global temperature grid with this resolution.
The global velocity grid is shifted by one-half of a grid box to the north and east of a temperature grid box. The global velocity grid has one less latitude row than the global temperature grid. For a global grid with 144 x 89 points, the west and south edges of the first grid box start at 1.25°E and 89°S. The last grid box has east and north edges at 1.25°E and 89°N. The latitude rows that would be centered at the poles are not considered part of the computation domain. The following figure depicts key features of a global velocity grid at this resolution.
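The staggering can be summarized with a little arithmetic. The sketch below (illustrative only, not B-grid core source) computes the grid-box centers implied by the edge locations given above for the 144 x 90 example.

 program grid_demo
   implicit none
   integer, parameter :: nlon = 144, nlat = 90
   real :: dx, dy
   real :: tlon(nlon), tlat(nlat)       ! temperature-cell centers
   real :: vlon(nlon), vlat(nlat-1)     ! velocity-cell centers (one less latitude row)
   integer :: i, j

   dx = 360.0/nlon                      ! 2.5 degrees
   dy = 180.0/nlat                      ! 2.0 degrees
   do i = 1, nlon
      tlon(i) = (i-0.5)*dx              ! first T center at 1.25E (west edge at 0)
      vlon(i) = i*dx                    ! shifted half a box east: first v center at 2.5E
   end do
   do j = 1, nlat
      tlat(j) = -90.0 + (j-0.5)*dy      ! T centers from 89S to 89N
   end do
   do j = 1, nlat-1
      vlat(j) = -90.0 + j*dy            ! v centers from 88S to 88N (south edge at 89S)
   end do
   print *, 'first T center: ', tlon(1), tlat(1), '  first v center: ', vlon(1), vlat(1)
 end program grid_demo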
Domain decomposition (or layout) for multiple-processor jobs is done in the X-Y (longitude-latitude) plane. The following namelist variable specifies that the global grid be decomposed into 3 domains along the X-axis and 2 domains along the Y-axis (6 processors must be used).
 &bgrid_core_driver_nml
     layout = 3,2 /

More information about how to specify the domain layout can be found in the online namelist documentation.
The following figure depicts the domain decomposition and displays some of the terminology.
The compute domain and data domain are defined locally on each processor. The compute domain is the computational grid where valid model results are computed. The data domain is the compute domain plus additional rows and columns to assist in horizontal differencing. The B-grid core uses only one halo row. Higher order differencing is accomplished by doing additional halo updates.
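The sketch below (with hypothetical index names, not the actual B-grid core variables) shows why one halo row is sufficient for second-order centered differencing: values at i-1 and i+1 are available locally once the halo has been filled, which in the model is done by a halo update between neighboring processors (e.g., the FMS mpp_update_domains routine).

 program halo_demo
   implicit none
   ! compute domain for one processor of a 144 x 90 grid with layout = 3,2
   integer, parameter :: is = 1, ie = 48, js = 1, je = 45
   integer, parameter :: halo = 1                 ! the B-grid core uses one halo row
   real :: t(is-halo:ie+halo, js-halo:je+halo)    ! data domain = compute domain + halo
   real :: dtdx(is:ie, js:je)
   integer :: i, j

   t = 0.0   ! in the model, halo points are filled by a halo update from neighboring PEs
   do j = js, je
      do i = is, ie
         dtdx(i,j) = 0.5 * (t(i+1,j) - t(i-1,j))  ! centered difference reaches into the halo
      end do
   end do
 end program halo_demo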
As previously stated, the global velocity grid has one less latitude row than the temperature grid. To simplify array structures, the B-grid core allocates arrays of the same size for both temperature and velocity fields. As a result, the northernmost latitude row in the velocity grid is not used.
The alignment of the temperature and velocity grids in the vicinity of the south pole is relatively straightforward. The velocity grid cell with the same i,j indices is located to the northeast of a temperature cell. The halo row for the velocity grid is located on the pole with the velocity set to zero.
Here is the same figure for the north pole. Note that the northernmost latitude row in the velocity grid is not used.