Updates for CCPP version 7 release #73

Merged Sep 9, 2024 (17 commits; diff shown from 6 commits)
16 changes: 8 additions & 8 deletions CCPPtechnical/source/AddingNewSchemes.rst
@@ -1,5 +1,5 @@
.. _AddNewSchemes:

****************************************
Connecting a scheme to CCPP
****************************************
@@ -20,7 +20,7 @@ There are a few steps that can be taken to prepare a scheme for addition to CCPP
Implementing a scheme in CCPP
=============================

There are, broadly speaking, two approaches for connecting an existing physics scheme to the CCPP Framework:

1. Refactor the existing scheme to CCPP format standards, using ``pre_`` and ``post_`` :term:`interstitial schemes <interstitial scheme>` to interface to and from the existing scheme if necessary.
2. Create a driver scheme as an interface from the existing scheme's Fortran module to the CCPP Framework.
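
To make the target of method 1 concrete, the following is a minimal sketch of the shape a refactored scheme takes. It is not taken from the CCPP repositories; the module, subroutine, and variable names are hypothetical, and real schemes typically declare reals with a working-precision kind such as ``kind_phys``:

.. code-block:: fortran

   module new_scheme

      implicit none
      private
      public :: new_scheme_run

   contains

      ! The framework matches this entry point by name (<scheme>_run) and by
      ! the argument table in the accompanying new_scheme.meta file.
      subroutine new_scheme_run (ncol, dt, temperature, errmsg, errflg)

         integer,          intent(in)    :: ncol            ! horizontal loop extent
         real,             intent(in)    :: dt              ! physics timestep (s)
         real,             intent(inout) :: temperature(:)  ! state variable updated in place
         character(len=*), intent(out)   :: errmsg          ! CCPP error message
         integer,          intent(out)   :: errflg          ! CCPP error flag (0 = success)

         ! Initialize the mandatory CCPP error-handling variables
         errmsg = ''
         errflg = 0

         ! ... scheme physics here, operating only on the dummy arguments ...

      end subroutine new_scheme_run

   end module new_scheme

Each entry point (``_init``, ``_timestep_init``, ``_run``, etc.) must be described to the framework by a matching ``.meta`` file (see :numref:`Chapter %s <CompliantPhysParams>`).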
@@ -33,11 +33,11 @@ Method 1 is the preferred method of adapting a scheme to CCPP. This involves mak

While method 1 is preferred, there are cases where method 1 may not be possible: for example, in schemes that are shared with other, non-CCPP hosts, and so require specialized, model-specific drivers, and might be beholden to different coding standards required by another model. In cases such as this, method 2 may be employed.

Method 2 involves fewer changes to the original scheme's Fortran module: A CCPP-compliant driver module (see :numref:`Chapter %s <CompliantPhysParams>`) handles defining the inputs to and outputs from the scheme module in terms of state variables, constants, and tendencies provided by the model as defined in the scheme's .meta file. The calculation of variables that are not available directly from the model, and conversion of scheme output back into the variables expected by CCPP, should be handled by interstitial schemes (``schemename_pre`` and ``schemename_post``). While this method puts most CCPP-required features in the driver and interstitial subroutines, the original scheme must still be updated to remove STOP statements, common blocks, or any other disallowed features as listed in :numref:`Section %s <CodingRules>`.
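
As an illustration of method 2, a driver can be as thin as the following sketch. The names (``legacy_scheme_mod``, ``legacy_scheme``, and the driver itself) are hypothetical:

.. code-block:: fortran

   module legacy_scheme_driver

      use legacy_scheme_mod, only: legacy_scheme   ! existing, non-CCPP routine (unchanged)

      implicit none
      private
      public :: legacy_scheme_driver_run

   contains

      subroutine legacy_scheme_driver_run (ncol, u, v, errmsg, errflg)

         integer,          intent(in)    :: ncol
         real,             intent(inout) :: u(:), v(:)   ! state variables provided by the host
         character(len=*), intent(out)   :: errmsg
         integer,          intent(out)   :: errflg

         errmsg = ''
         errflg = 0

         ! Map the CCPP-provided variables onto the legacy argument list
         call legacy_scheme (ncol, u, v)

      end subroutine legacy_scheme_driver_run

   end module legacy_scheme_driver

Any variable transformations that do not fit naturally in the driver belong in the ``_pre`` and ``_post`` interstitial schemes described below.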

For both methods, optional interstitial schemes can be used for code that can not be handled within the scheme itself. For example, if different code needs to be run for coupling with other schemes or in different orders (e.g. because of dependencies on other schemes and/or the order the scheme is run in the :term:`SDF`), or if variables needed by the scheme must be derived from variables provided by the host. See :numref:`Chapter %s <CompliantPhysParams>` for more details on primary and interstitial schemes.

.. note:: Depending on the complexity of the scheme and how it works together with other schemes, multiple interstitial schemes may be necessary.

------------------------------
Adding new variables to CCPP
@@ -61,7 +61,7 @@ For variables that can be set via namelist, the ``GFS_control_type`` Derived Dat

If information from the previous timestep is needed, it is important to identify if the host model provides this information, or if it needs to be stored as a special variable. For example, in the Model for Prediction Across Scales (MPAS), variables containing the values of several quantities in the preceding timesteps are available. When that is not the case, as in the :term:`UFS Atmosphere`, interstitial schemes are needed to access these quantities.

.. note:: As an example, the reader is referred to the `Grell-Freitas convective scheme <https://dtcenter.ucar.edu/GMTB/v7.0.0p/sci_doc/_c_u__g_f.html>`_, which makes use of interstitials to obtain the previous timestep information.
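
Schematically, such an interstitial only needs to copy the quantity into a persistent host array at the end of each timestep. The sketch below is hypothetical (the names and the choice of specific humidity are illustrative), not the actual Grell-Freitas interstitial code:

.. code-block:: fortran

   module new_scheme_post

      implicit none

   contains

      subroutine new_scheme_post_run (ncol, qv, qv_prev, errmsg, errflg)

         integer,          intent(in)    :: ncol
         real,             intent(in)    :: qv(:)        ! current-timestep specific humidity
         real,             intent(inout) :: qv_prev(:)   ! persistent host array (last timestep's value)
         character(len=*), intent(out)   :: errmsg
         integer,          intent(out)   :: errflg

         errmsg = ''
         errflg = 0

         ! Store this timestep's value so it is available at the next call
         qv_prev(1:ncol) = qv(1:ncol)

      end subroutine new_scheme_post_run

   end module new_scheme_post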

Consider allocating the new variable only when needed (i.e. when the new scheme is used and/or when a certain control flag is set). If this is a viable option, follow the existing examples in ``CCPP_typedefs.F90`` and ``GFS_typedefs.meta`` for allocating the variable and setting the ``active`` attribute in the metadata correctly.
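
A metadata entry for such a conditionally allocated variable might look like the sketch below; the local name, standard name, and controlling flag are hypothetical (``GFS_typedefs.meta`` contains real examples). The ``active`` attribute tells the framework under which condition the variable is allocated:

.. code-block::

   [qv_prev]
     standard_name = specific_humidity_on_previous_timestep
     long_name = specific humidity from the previous timestep
     units = kg kg-1
     dimensions = (horizontal_dimension)
     type = real
     kind = kind_phys
     active = (flag_for_new_scheme)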

@@ -70,14 +70,14 @@ Incorporating a scheme into CCPP
----------------------------------
The new scheme and any interstitials will need to be added to the CCPP prebuild configuration file. Add the new scheme to the Python dictionary in ``ccpp-scm/ccpp/config/ccpp_prebuild_config.py`` using the same path as the existing schemes:

.. code-block::

SCHEME_FILES = [ ...
'../some_relative_path/existing_scheme.F90',
'../some_relative_path/new_scheme.F90',
...]

.. note:: Different host models will have different prebuild config files. For example, the :term:`UFS Atmosphere's <UFS Atmosphere>` config file is located at ``ufs-weather-model/FV3/ccpp/config/ccpp_prebuild_config.py``.

The source code and ``.meta`` files for the new scheme should be placed in the same location as existing schemes in the CCPP: in the ccpp-physics repository under the ``physics/`` directory.

@@ -113,7 +113,7 @@ Some tips for debugging problems:
* Make sure to use an uppercase suffix ``.F90`` to enable C preprocessing.
* A scheme called GFS_debug (GFS_debug.F90) may be added to the SDF where needed to print state variables and interstitial variables. If needed, edit the scheme beforehand to add new variables that need to be printed.
* Check the ``ccpp_prebuild.py`` script for success/failure and associated messages; run the prebuild script with the ``--debug`` and ``--verbose`` flags (an example invocation is given after this list). See :numref:`Chapter %s <ConstructingSuite>` for more details.
* Compile code in DEBUG mode (see section 4.3 of the `SCM User's Guide <https://ccpp-scm.readthedocs.io/en/latest/chap_quick.html#compiling-scm-with-ccpp>`_); run through a debugger if necessary (gdb, Allinea DDT, totalview, …).
Collaborator review comment (marked resolved): Note that this will require a change before the release, to refer to v7.
* Use memory check utilities such as ``valgrind``.
* Double-check the metadata file associated with your scheme to make sure that all information, including standard names and units, correspond to the correct local variables.
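
As an example of the prebuild invocation mentioned in the tips above, a debug run for the SCM might look like the following sketch; the script and config file paths are illustrative and differ between host models:

.. code-block:: console

   cd ccpp-scm
   ./ccpp/framework/scripts/ccpp_prebuild.py \
       --config=ccpp/config/ccpp_prebuild_config.py \
       --debug --verbose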

18 changes: 9 additions & 9 deletions CCPPtechnical/source/AutoGenPhysCaps.rst
@@ -4,23 +4,23 @@
Suite and Group *Caps*
****************************************

The connection between the :term:`host model` and the physics :term:`schemes<scheme>` through the :term:`CCPP Framework`
is realized with :term:`caps<Physics cap>` on both sides as illustrated in :numref:`Figure %s <ccpp_arch_host>`.
The CCPP *prebuild* script discussed in :numref:`Chapter %s <ConfigBuildOptions>`
generates the :term:`caps <physics cap>` that connect the physics schemes to the CCPP Framework.
This chapter describes the :term:`suite<Physics Suite cap>` and :term:`group caps<Group cap>`,
while the host model *caps* are described in :numref:`Chapter %s <Host-side Coding>`.
These *caps*, autogenerated by ``ccpp_prebuild.py``, reside in the directory
defined by the ``CAPS_DIR`` variable (see example in :ref:`Listing 8.1 <ccpp_prebuild_example>`).

Overview
========

When CCPP is built, the CCPP Framework and physics are statically linked to the executable. This allows the best
performance and efficient memory use. This build requires metadata provided
by the host model and variables requested from the physics scheme. Only the variables required for
the specified suites are kept, requiring one or more :term:`SDF`\ s (see left side of :numref:`Figure %s <ccpp_static_build>`)
as arguments to the ``ccpp_prebuild.py`` script.
The CCPP *prebuild* step performs the tasks below.

* Check requested vs provided variables by ``standard_name``.
@@ -83,9 +83,9 @@ The *prebuild* step will produce the following files for any host model. Note th
ccpp_static_api.F90

``ccpp_static_api.F90`` is an interface, which contains subroutines ``ccpp_physics_init``,
``ccpp_physics_timestep_init``, ``ccpp_physics_run``, ``ccpp_physics_timestep_finalize``, and ``ccpp_physics_finalize``.
Each subroutine uses a ``suite_name`` and an optional argument, ``group_name``, to call the groups
of a specified suite (e.g. ``fast_physics``, ``physics``, ``time_vary``, ``radiation``, ``stochastic``, etc.),
or to call the entire suite. For example, ``ccpp_static_api.F90`` would contain module ``ccpp_static_api``
with subroutines ``ccpp_physics_{init, timestep_init, run, timestep_finalize, finalize}``. Interested users
should run ``ccpp_prebuild.py`` as appropriate for their model and inspect these auto-generated files.
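
For example, a host-side call through the generated API might look like the following sketch. The exact argument list, in particular the CCPP data structure ``cdata`` and the suite name, depends on the host model and suite, so treat this as illustrative rather than definitive:

.. code-block:: fortran

   use ccpp_static_api, only: ccpp_physics_run

   ! Run only the "physics" group of the chosen suite for this CCPP instance
   call ccpp_physics_run (cdata, suite_name=trim(ccpp_suite), group_name='physics', ierr=ierr)
   if (ierr /= 0) then
      write(0,'(a)') 'ccpp_physics_run failed: ' // trim(cdata%errmsg)
   end if

Omitting the optional ``group_name`` argument runs the entire suite in one call.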
14 changes: 7 additions & 7 deletions CCPPtechnical/source/CCPPDebug.rst
@@ -8,7 +8,7 @@ Debugging with CCPP
Introduction
================================

In order to debug code efficiently with :term:`CCPP`, it is important to remember the conceptual differences between traditional, physics-driver-based approaches and the CCPP approach.

Traditional, physics-driver based approaches rely on hand-written physics drivers that connect the different physical :term:`parameterizations <parameterization>` together and often contain a large amount of "glue code" required between the parameterizations. As such, the physics drivers usually have access to all variables that are used by the physical parameterizations, while individual parameterizations only have access to the variables that are passed in. Debugging either happens on the level of the physics driver or inside physical parameterizations. In both cases, print statements are inserted in one or more places (e.g. in the driver before/after parameterizations to debug). In the CCPP, there are no hand-written physics drivers. Instead, the physical parameterizations are glued together by an :term:`SDF` that lists the :term:`primary physical parameterizations <primary scheme>` and so-called :term:`interstitial parameterizations <interstitial scheme>` or interstitial schemes (containing the glue code, broken up into logical units) in the order of execution.

@@ -39,7 +39,7 @@ For the UFS models, dedicated debugging schemes have been created by the CCPP de
----------------------------------------------------------------
Descriptions of the CCPP-compliant debugging schemes for the UFS
----------------------------------------------------------------
* ``GFS_diagtoscreen``
This scheme loops over all blocks for all GFS types that are persistent from one time step to the next (except ``GFS_control``) and prints data for almost all constituents. The call signature and rough outline for this scheme is:

.. code-block:: console
@@ -83,9 +83,9 @@ Descriptions of the CCPP-compliant debugging schemes for the UFS

* ``GFS_interstitialtoscreen``
This scheme is identical to ``GFS_diagtoscreen``, except that it prints data for all constituents of the ``GFS_interstitial`` derived data type only. As for ``GFS_diagtoscreen``, the amount of information printed to screen can be customized using preprocessor statements, and all output to ``stdout/stderr`` from this routine is prefixed with **'XXX: '** so that it can be easily removed from the log files using "grep -ve 'XXX: ' ..." if needed.



* ``GFS_abort``
This scheme can be used to terminate a model run at some point in the call to the physics. It can be customized to meet the developer's requirements.

@@ -115,7 +115,7 @@ Descriptions of the CCPP-compliant debugging schemes for the UFS

* ``GFS_checkland``
This routine is an example of a user-provided debugging scheme that is useful for solving issues with the fractional grid when using the Rapid Update Cycle Land Surface Model (RUC LSM). All output to ``stdout/stderr`` from this routine is prefixed with **'YYY: '** (instead of **'XXX: '**), which can be easily removed from the log files using "grep -ve 'YYY: ' ..." if needed.

.. code-block:: console

subroutine GFS_checkland_run (me, master, blkno, im, kdt, iter, flag_iter, flag_guess, &
Expand All @@ -141,7 +141,7 @@ Descriptions of the CCPP-compliant debugging schemes for the UFS
write(0,'(a,2i5,1x,e16.7)')'YYY: i, blk, oceanfrac(i)  :', i, blkno, oceanfrac(i)
write(0,'(a,2i5,1x,e16.7)')'YYY: i, blk, landfrac(i)   :', i, blkno, landfrac(i)
write(0,'(a,2i5,1x,e16.7)')'YYY: i, blk, lakefrac(i)   :', i, blkno, lakefrac(i)
write(0,'(a,2i5,1x,e16.7)')'YYY: i, blk, slmsk(i)      :', i, blkno, slmsk(i)
write(0,'(a,2i5,1x,i5)')   'YYY: i, blk, islmsk(i)     :', i, blkno, islmsk(i)
!end if
end do