diff --git a/previews/PR1153/.documenter-siteinfo.json b/previews/PR1153/.documenter-siteinfo.json
index 4d06ae085..af3f1a3fd 100644
--- a/previews/PR1153/.documenter-siteinfo.json
+++ b/previews/PR1153/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-11-27T08:06:19","documenter_version":"1.5.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-11-27T09:57:55","documenter_version":"1.5.0"}}
\ No newline at end of file
diff --git a/previews/PR1153/Boundaries/BoundaryFilePreparation/index.html b/previews/PR1153/Boundaries/BoundaryFilePreparation/index.html
index f07db6991..c50cce8c9 100644
--- a/previews/PR1153/Boundaries/BoundaryFilePreparation/index.html
+++ b/previews/PR1153/Boundaries/BoundaryFilePreparation/index.html
@@ -57,4 +57,4 @@ /

From src/arpifs/module/yommcc.F90:

! LMCC01_MSE = .T.   ===> THE CLIM.FIELD(S) ARE READ IN LBC FILE AND USED IN SURFEX
  :
 ! LMCCECSST =.T. ===> SST FROM ECMWF (SST-ANA COMB with surf temp over seaice)
-!           =.F. ===> SST FROM SURFTEMPERATURE
+! =.F. ===> SST FROM SURFTEMPERATURE

diff --git a/previews/PR1153/Build/Build_with_cmake/index.html b/previews/PR1153/Build/Build_with_cmake/index.html
index 429a9127f..7e597226f 100644
--- a/previews/PR1153/Build/Build_with_cmake/index.html
+++ b/previews/PR1153/Build/Build_with_cmake/index.html
@@ -100,4 +100,4 @@ set(Fortran_DEFAULT_FLOAT_64 "-fdefault-double-8 -fdefault-real-8") set(Fortran_DEFAULT_INT_32 "")
-set(Fortran_DEFAULT_INT_64 "-fdefault-integer-8")

When running cmake configure, and depending on the build precision, a subset of these flags is added to the CMAKE_Fortran_FLAGS variable, thus affecting all Fortran targets. Currently, the DEFAULT_INT variables are not used in the CMake build, but they are provided for consistency.

Note

When creating FortranCompilerFlags.<compiler type>.cmake, <compiler type> should follow the naming provided by CMAKE_Fortran_COMPILER_ID, for example, GNU for gfortran and Intel for ifort. See the CMake documentation for a list of all supported compiler vendors.

Note on generating different build systems with CMake

CMake is a build system generator and it can create different native build systems from the same CMakeLists.txt. The full list of supported generators is available in the CMake documentation; however, in practice, when building HARMONIE-AROME on a Linux machine (or on a UNIX-like one in general), there are two options: the Unix Makefiles generator and the Ninja generator.

Note

A specific CMake generator can be selected at configure time by passing the -G <gen> flag to cmake. For example, cmake -G Ninja <...other CMake args...> or cmake -G "Unix Makefiles" <...other CMake args...>.

Practical considerations

When to re-run CMake configure in my experiment?

In principle, CMake configure needs to be run only once to generate the build system; after that, any modification of the source code or configuration files is detected by the build system, which triggers the required re-build steps. The only time CMake configure must be explicitly re-run is when you add a new source file to HARMONIE-AROME. The current implementation of the CMake build scans the file system for source files to compile, so just putting a new file under, say, src/surfex/SURFEX/ and re-running the build is not enough: the new file is still unknown to the build system, hence the need to re-run the configure step first.

I added some code and CMake build stopped working

Unlike makeup, the CMake build for HARMONIE-AROME enforces inter-project boundaries, and each project has an explicit list of its dependencies. For example, it is not possible to use modules from arpifs in surfex, but it is possible to use mse modules. If, after a code modification, CMake starts complaining about missing module files, the modification violates the project dependencies in the build. To fix this problem, please update your changeset to use only the available modules. If you believe that your modification is sound with respect to the inter-project dependencies of HARMONIE-AROME and it is the CMake build that is missing a dependency, please open a new GitHub issue explaining the problem.

Can I move/copy my build directory to another directory and re-use it?

No, it's generally a bad idea. CMake loves absolute paths and uses them in many parts of the generated build system, thus simply moving the build directory would break the build.

Something went wrong and CMake doesn't behave anymore, can I refresh the build without nuking the whole build directory?

You can try deleting just the CMakeCache.txt file from the build directory.

CMake picks the wrong compiler

Sometimes CMake selects a system default compiler instead of the compiler provided, for example, by loading a module. There are a few options to force CMake to use a specific compiler. A straightforward one is to set the compiler via the commonly used environment variables (for example, export FC=ifort for the Fortran compiler). Another way is to set the correct compilers via command-line arguments when configuring the CMake build (for example, adding -DCMAKE_Fortran_COMPILER=ifort to the list of CMake arguments). CMake recognizes CMAKE_<LANG>_COMPILER passed from the command line, where <LANG> can be Fortran, C or CXX.
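As a sketch, with ifort/icc/icpc as example compiler names and the $HM_LIB/src source path used elsewhere on this page, the two approaches look like:

```shell
# Option 1: set the usual compiler environment variables
# before the first configure run (example compiler names):
export FC=ifort CC=icc CXX=icpc
cmake $HM_LIB/src <...other CMake args...>

# Option 2: pass the compilers explicitly as CMake cache variables:
cmake -DCMAKE_Fortran_COMPILER=ifort \
      -DCMAKE_C_COMPILER=icc \
      -DCMAKE_CXX_COMPILER=icpc \
      $HM_LIB/src <...other CMake args...>
```

Note that the environment variables are only honoured on the first configure run of a fresh build directory; once the compilers are cached in CMakeCache.txt, changing FC alone has no effect.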

Can I get more verbose output when compiling with CMake?

To get detailed information about individual steps and commands issued when compiling HARMONIE-AROME with CMake add -v to your build command:

cmake --build . --target install -v

Is there a way to visualise dependencies between individual targets of HARMONIE-AROME in CMake build?

Since all the inter-target dependencies are defined in CMake scripts, it can be useful to produce a graphical overview of the dependency graph of HARMONIE-AROME without grepping all the CMakeLists.txt files. This can be achieved by adding --graphviz=<output file name> to the list of CMake arguments, for example:

cmake $HM_LIB/src --graphviz=harmonie.dot

then the produced dependency graph can be visualized using the dot tool:

dot -Tx11 harmonie.dot

The full dependency graph may be very cluttered and take quite some time to render, so it might be a good idea to plot dependencies of a single target, for example:

dot -Tx11 harmonie.dot.surf-static

See the CMake documentation on graphviz for additional information about fine-tuning of the generated graphs.

I need more information about CMake, where do I find documentation?

The CMake documentation portal is a great source of detailed information about the various aspects of the CMake build system.

+set(Fortran_DEFAULT_INT_64 "-fdefault-integer-8")


diff --git a/previews/PR1153/Build/Build_with_makeup/index.html b/previews/PR1153/Build/Build_with_makeup/index.html
index 75260d192..5710e4388 100644
--- a/previews/PR1153/Build/Build_with_makeup/index.html
+++ b/previews/PR1153/Build/Build_with_makeup/index.html
@@ -79,4 +79,4 @@ # or not to mess up the output, use just one process for compilations
-gmake NPES=1 -i

Creating precompiled installation

If you want to provide precompiled libraries, objects and source code to other users, so that they do not have to start compilation from scratch, make a distribution or precompiled installation as follows:

gmake PRECOMPILED=/a/precompiled/rootdir precompiled

After this, everything you just compiled ends up in the directory /a/precompiled/rootdir with two subdirectories: src/ and util/. All executables are currently removed.

You can repeat this call, and it will just rsync the modified bits.

Update/check your interface blocks outside configure

The configure script has the options -c and -g to check or enforce (re-)creation of the interface blocks of the projects arp and ald. To avoid a full and lengthy configure run, you can just do the following:

gmake intfb
+gmake NPES=1 -i

diff --git a/previews/PR1153/ClimateGeneration/ClimateGeneration/index.html b/previews/PR1153/ClimateGeneration/ClimateGeneration/index.html
index e161057e8..c81b6a459 100644
--- a/previews/PR1153/ClimateGeneration/ClimateGeneration/index.html
+++ b/previews/PR1153/ClimateGeneration/ClimateGeneration/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});
-

Generation of climate and physiography files

Introduction

The generation of climate files includes two parts. The first is the generation of climate files for the atmospheric model, the so-called e923 configuration. The second is the generation of the physiography information for SURFEX. In the following we describe how this is implemented in HARMONIE.

Input data for climate generation

The location of your input data for the climate generation is defined by the HM_CLDATA environment variable in config-sh/config.yourhost. At ECMWF the climate data is stored on Atos under hpc-login:/ec/res4/hpcperm/hlam/data/climate.

Information on what data to download is available here. The input data contains physiography data, topography information and climatological values determined from a one-year ARPEGE assimilation experiment with a resolution of T79. Climatological aerosol optical depths (tegen) or vertically integrated aerosol mass based on the CAMS reanalysis 2003-2022 (camscms) can be included in the monthly climate files.

In the current version, the option to use pre-generated climate files has been introduced to save time for quick experiments. To use pre-generated domains you need to set USE_REF_CLIMDIR=yes in Env_system. The location of the pre-generated domains is defined in config_exp.h; at ECMWF they are located in REF_CLIMDIR=ec:/hlam/harmonie_climdir/release-43h2.1.rc1/$DOMAIN/$ECOCLIMAP_VERSION.

Preparation of SURFEX physiography file

SURFEX needs information about the distribution of the different available tiles: nature, sea, water and town. The nature tile also needs information about the vegetation and soil types. The main input sources for this are found at SURFEX physiographic maps.

The database for SURFEX-file preparation is located under HM_CLDATA/PGD:

  • ecoclimats_v2.* : Landtypes
  • gtopo30.* : Topography
  • sand_fao.* : Soil type distribution
  • clay_fao.* : Soil type distribution

The generation of the SURFEX physiography file (PGD.lfi) is done in scr/Prepare_pgd. The script creates the namelist OPTIONS.nam based on the DOMAIN settings in scr/Harmonie_domains.pm. Note that the SURFEX domain is only created over the C+I area. In the namelist we set which scheme should be activated for each tile.

Tile schemes per physics option:

  PHYSICS   Nature   Sea      Water    Town
  AROME     ISBA     SEAFLX   WATFLX   TEB
  ALARO     ISBA     SEAFLX   WATFLX   Town as rock
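The scheme selection ends up in the SURFEX NAM_PGD_SCHEMES namelist block. As an illustration only (a minimal sketch of the AROME row; the OPTIONS.nam generated by Prepare_pgd contains many more blocks and settings), this looks like:

```fortran
! Minimal sketch of the tile scheme selection in OPTIONS.nam (AROME case).
&NAM_PGD_SCHEMES
  CNATURE = 'ISBA  ',   ! nature tile
  CSEA    = 'SEAFLX',   ! sea tile
  CWATER  = 'WATFLX',   ! inland water tile
  CTOWN   = 'TEB   ',   ! town tile
/
```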

The program PGD produces one SURFEX physiography file, PGD.lfi, which is stored in the CLIMDIR directory.

To make sure we have the same topography input for the atmospheric part, we call Prepare_pgd twice: once to produce PGD.lfi for SURFEX and a second time to produce a PGD.fa file that can be used as input for the climate generation described below. Note that for the atmosphere the topography will be spectrally filtered, and the resulting topography will be imposed on SURFEX again.

Generation of non SURFEX monthly climate files

These files contain, among others, the surface elevation, land-sea mask, climatological aerosol and several near-surface variables for ALADIN/ALARO systems that may run without SURFEX. The climatological aerosol can be aerosol optical depth at 550 nm (Tegen or CAMS) and, in the future, also vertically integrated aerosol mass mixing ratios based on the CAMS reanalysis.

scr/Climate is the script that prepares the climate file(s) for the preferred forecast range. Climate files are produced for the past, present and following months. The outline of Climate is as follows:

  • Check if climate files already exist.
  • Creation of namelists. The definition of domain and truncation values is taken from src/Harmonie_domains.pm.
  • Part 0: Read the PGD.fa file generated by SURFEX and write it to Neworog
  • Part 1: Filter Neworog to target grid with spectral smoothing to remove 2dx waves.
  • Part 2: generation of surface, soil and vegetation variables, without annual variation.
  • Part 3: creation of monthly climatological values and modification of albedo and emissivity according to the climatology of sea-ice limit.
  • Part 4: definition and modification of the vegetation and surface characteristics
  • Part 5: modification of fields created by steps 2 and 4 over land from high-resolution datasets (for each month)
  • Part 6: modification of climatological values

The result is climate files for the previous, current and next month. The files are named after their month (m01, m02, ..., m12) and stored in CLIMDIR.

Further reference e923

+


diff --git a/previews/PR1153/ClimateGeneration/DownloadInputData/index.html b/previews/PR1153/ClimateGeneration/DownloadInputData/index.html
index 7f535fb3c..7b587f820 100644
--- a/previews/PR1153/ClimateGeneration/DownloadInputData/index.html
+++ b/previews/PR1153/ClimateGeneration/DownloadInputData/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});
-

Download input data

Before you can start running HARMONIE experiments, some input data (external to the code repository) needs to be available on your platform. The input data contains physiography data, topography information and climatological values determined from a one-year ARPEGE assimilation experiment with a resolution of T79.

+


diff --git a/previews/PR1153/ClimateSimulations/ClimateSimulation/index.html b/previews/PR1153/ClimateSimulations/ClimateSimulation/index.html
index 2f308a046..3557e682c 100644
--- a/previews/PR1153/ClimateSimulations/ClimateSimulation/index.html
+++ b/previews/PR1153/ClimateSimulations/ClimateSimulation/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});
-
+
diff --git a/previews/PR1153/DataAssimilation/CHKEVO/index.html b/previews/PR1153/DataAssimilation/CHKEVO/index.html
index f5d974bbd..6978b120f 100644
--- a/previews/PR1153/DataAssimilation/CHKEVO/index.html
+++ b/previews/PR1153/DataAssimilation/CHKEVO/index.html
@@ -20,4 +20,4 @@ CHKEVO : 1.3677546254375832 0.22965677860570116
 CHKEVO : 1.1506125378848564 0.20575065246468008
 CHKEVO : 0.98597708942270756 0.19299583141063531
-.....

The RMS of dps/dt alone can be extracted with:

grep "^ CHKEVO : " HM_Date_2013041118.html | tail -n +2 | awk '{print $3}'
+.....

diff --git a/previews/PR1153/DataAssimilation/DFS/index.html b/previews/PR1153/DataAssimilation/DFS/index.html
index 361a7b74a..2848f1e67 100644
--- a/previews/PR1153/DataAssimilation/DFS/index.html
+++ b/previews/PR1153/DataAssimilation/DFS/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});
-
+
diff --git a/previews/PR1153/DataAssimilation/DaAlgorithms/index.html b/previews/PR1153/DataAssimilation/DaAlgorithms/index.html
index 22be4ce96..30710dd1a 100644
--- a/previews/PR1153/DataAssimilation/DaAlgorithms/index.html
+++ b/previews/PR1153/DataAssimilation/DaAlgorithms/index.html
@@ -79,4 +79,4 @@ },

Perhaps the only other information required to use this VC algorithm effectively concerns the parameters:

'VCWEIGHTHD' => '-1.50,',
 'VCWEIGHTT' => '-1.50,',
-'VCWEIGHTPS' => '-1.50,',

They enable the flexibility of considering different weights for different 3DVAR analysis increments. When VC operates with a low VCWEIGHT value (strongly constrained mode), it can remove some overfitting to wind, temperature and/or surface pressure observations, and this may produce an apparent degradation in operational verification curves close to t=0. These three parameters allow each variable to be adjusted individually. Note that negative values (as in the default) automatically revert to the VCWEIGHT value.

+'VCWEIGHTPS' => '-1.50,',


diff --git a/previews/PR1153/DataAssimilation/DigitalFilterInitialization/index.html b/previews/PR1153/DataAssimilation/DigitalFilterInitialization/index.html
index 699efa5aa..24239093e 100644
--- a/previews/PR1153/DataAssimilation/DigitalFilterInitialization/index.html
+++ b/previews/PR1153/DataAssimilation/DigitalFilterInitialization/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});
-

Digital Filter Initialization

Digital Filter Initialization (DFI) is documented by Météo France here. This wiki page is based on the "Version cycle 40t1" document available on the gmapdoc web page. By default HARMONIE does not use DFI.

DFI

The use (or not) of DFI is controlled by the variable DFI in ecf/config_exp.h. By default it is set to none.

  • idfi, incremental DFI
  • fdfi, full DFI
  • none - no initialization (default)

scr/Dfi is the script which calls the model in order to carry out DFI.

References

+


diff --git a/previews/PR1153/DataAssimilation/LSMIXandJk/index.html b/previews/PR1153/DataAssimilation/LSMIXandJk/index.html
index 15adde4ea..a91e48f4e 100644
--- a/previews/PR1153/DataAssimilation/LSMIXandJk/index.html
+++ b/previews/PR1153/DataAssimilation/LSMIXandJk/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});
-

Jk as a pre-mixing method

The 3D-Var cost function including the Jk term can be written:

\[J(x) = J_b + J_o + J_k = \frac{1}{2} (x - x_b)^{\rm T} B^{-1}(x - x_b) + \frac{1}{2} (y - Hx)^{\rm T}R^{-1}(y - Hx) + \frac{1}{2} (x - x_{LS})^{\rm T} V^{-1}(x - x_{LS})\]

Setting the gradient to zero, we have at the optimal $x$:

\[\nabla J = B^{-1}(x - x_b) - H^{\rm T}R^{-1}(y - Hx) + V^{-1}(x - x_{LS}) = 0 \]

or

\[\left[B^{-1} + V^{-1} + H^{\rm T}R^{-1}H\right] \left(x - x_b \right) = H^{\rm T}R^{-1}(y - Hx_b) + V^{-1}(x_{LS} - x_b). \]

Equivalent pre-mixed first guess

Assume now that $\widetilde{x_b}$ is some yet unknown, pre-mixed field depending on $x_b$ and $x_{LS}$ that we want to determine. By adding and subtracting identical terms to the gradient equation, we have

\[B^{-1}(x - x_b + \widetilde{x_b} - \widetilde{x_b}) - H^{\rm T}R^{-1}(y - Hx + H\widetilde{x_b} - H\widetilde{x_b}) + V^{-1}(x - x_{LS} + \widetilde{x_b} - \widetilde{x_b}) = 0,\]

which, when reorganized gives

\[\left[B^{-1} + V^{-1} + H^{\rm T}R^{-1}H \right] \left(x - \widetilde{x_b}\right) = H^{\rm T}R^{-1}(y - H\widetilde{x_b}) + B^{-1}(x_b - \widetilde{x_b}) + V^{-1}(x_{LS} - \widetilde{x_b}). \]

If the last two terms on the right hand side add up to zero, i.e.,

\[B^{-1}(x_b - \widetilde{x_b}) + V^{-1}(x_{LS} - \widetilde{x_b}) = 0, \]

which means that

\[\widetilde{x_b} = [B^{-1} + V^{-1}]^{-1} ( B^{-1} x_b + V^{-1} x_{LS} ), \]

then we see that by using this mixed first guess the Jk term can be omitted, provided we use a modified B-matrix with the property that

\[\widetilde{B}^{-1} = B^{-1} + V^{-1}. \]

By writing

\[B^{-1} + V^{-1} = B^{-1}(B + V)V^{-1} = V^{-1}(B + V)B^{-1} \]

we easily see by simply inverting that

\[\widetilde{B} = [B^{-1} + V^{-1}]^{-1} = B(B + V)^{-1}V = V(B + V)^{-1}B. \]

To conclude, a 3D-Var minimization with Jk is equivalent to a minimization without the Jk term, provided that one pre-mixes the two first guess fields according to

\[\widetilde{x_b} = [B^{-1} + V^{-1}]^{-1} ( B^{-1} x_b + V^{-1} x_{LS} ) = \widetilde{B}( B^{-1} x_b + V^{-1} x_{LS} ) = V(B + V)^{-1}x_b + B(B + V)^{-1}x_{LS} \]

and use the following covariance matrix for this mixed first guess:

\[\widetilde{B} = [B^{-1} + V^{-1}]^{-1} = B(B + V)^{-1}V = V(B + V)^{-1}B. \]

Whether this is implementable in practice is a different story; the derivation simply shows the theoretical equivalence, and how LSMIXBC should ideally be done if Jk is the right answer.
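These identities are easy to sanity-check numerically. The sketch below (assuming NumPy is available; the matrices are random symmetric positive-definite stand-ins, not real background error covariances) verifies both the pre-mixed first guess and the modified B-matrix formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_spd(n):
    # Random symmetric positive-definite matrix: A A^T + n I
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

B, V = random_spd(n), random_spd(n)
x_b = rng.standard_normal(n)      # first guess
x_ls = rng.standard_normal(n)     # large-scale field

Bi, Vi = np.linalg.inv(B), np.linalg.inv(V)

# Pre-mixed first guess: x~_b = [B^-1 + V^-1]^-1 (B^-1 x_b + V^-1 x_LS)
x_mix = np.linalg.solve(Bi + Vi, Bi @ x_b + Vi @ x_ls)

# Equivalent closed form: V(B+V)^-1 x_b + B(B+V)^-1 x_LS
x_mix2 = V @ np.linalg.solve(B + V, x_b) + B @ np.linalg.solve(B + V, x_ls)
assert np.allclose(x_mix, x_mix2)

# Modified covariance: B~ = [B^-1 + V^-1]^-1 = B(B+V)^-1 V = V(B+V)^-1 B
Bt = np.linalg.inv(Bi + Vi)
assert np.allclose(Bt, B @ np.linalg.solve(B + V, V))
assert np.allclose(Bt, V @ np.linalg.solve(B + V, B))
```

All three expressions for $\widetilde{B}$ agree to machine precision, as the factorization $B^{-1} + V^{-1} = B^{-1}(B + V)V^{-1}$ guarantees.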

+

Jk as a pre-mixing method

The 3D-Var cost function including the Jk term can be written:

\[J(x) = J_b + J_o + J_k = \frac{1}{2} (x - x_b)^{\rm T} B^{-1}(x - x_b) + \frac{1}{2} (y - Hx)^{\rm T}R^{-1}(y - Hx) + \frac{1}{2} (x - x_{LS})^{\rm T} V^{-1}(x - x_{LS})\]

Setting the gradient to zero, we have at the optimal $x$:

\[\nabla J = B^{-1}(x - x_b) - H^{\rm T}R^{-1}(y - Hx) + V^{-1}(x - x_{LS}) = 0 \]

or

\[\left[B^{-1} + V^{-1} + H^{\rm T}R^{-1}H\right] \left(x - x_b \right) = H^{\rm T}R^{-1}(y - Hx_b) + V^{-1}(x_{LS} - x_b). \]

Equivalent pre-mixed first guess

Assume now that $\widetilde{x_b}$ is some yet unknown, pre-mixed field depending on $x_b$ and $x_{LS}$ that we want to determine. By adding and subtracting identical terms to the gradient equation, we have

\[B^{-1}(x - x_b + \widetilde{x_b} - \widetilde{x_b}) - H^{\rm T}R^{-1}(y - Hx + H\widetilde{x_b} - H\widetilde{x_b}) + V^{-1}(x - x_{LS} + \widetilde{x_b} - \widetilde{x_b}) = 0,\]

which, when reorganized gives

\[\left[B^{-1} + V^{-1} + H^{\rm T}R^{-1}H \right] \left(x - \widetilde{x_b}\right) = H^{\rm T}R^{-1}(y - H\widetilde{x_b}) + B^{-1}(x_b - \widetilde{x_b}) + V^{-1}(x_{LS} - \widetilde{x_b}). \]

If the last two terms on the right hand side add up to zero, i.e.,

\[B^{-1}(x_b - \widetilde{x_b}) + V^{-1}(x_{LS} - \widetilde{x_b}) = 0, \]

which means that

\[\widetilde{x_b} = [B^{-1} + V^{-1}]^{-1} ( B^{-1} x_b + V^{-1} x_{LS} ), \]

then we see that by using this mixed first guess the Jk term can be omitted, provided we use a modified B-matrix with the property that

\[\widetilde{B}^{-1} = B^{-1} + V^{-1}. \]

By writing

\[B^{-1} + V^{-1} = B^{-1}(B + V)V^{-1} = V^{-1}(B + V)B^{-1} \]

we easily see by simply inverting that

\[\widetilde{B} = [B^{-1} + V^{-1}]^{-1} = B(B + V)^{-1}V = V(B + V)^{-1}B. \]

To conclude, a 3D-Var minimization with Jk is equivalent to a minimization without the Jk term, provided that one pre-mixes the two first guess fields according to

\[\widetilde{x_b} = [B^{-1} + V^{-1}]^{-1} ( B^{-1} x_b + V^{-1} x_{LS} ) = \widetilde{B}( B^{-1} x_b + V^{-1} x_{LS} ) = V(B + V)^{-1}x_b + B(B + V)^{-1}x_{LS} \]

and use the following covariance matrix for this mixed first guess:

\[\widetilde{B} = [B^{-1} + V^{-1}]^{-1} = B(B + V)^{-1}V = V(B + V)^{-1}B. \]

Whether this is implementable in practice is a different story; the derivation simply shows the theoretical equivalence, and how LSMIXBC should ideally be done if Jk is the right answer.

diff --git a/previews/PR1153/DataAssimilation/MTEN/index.html b/previews/PR1153/DataAssimilation/MTEN/index.html index be43d5de2..0a7fa26ab 100644 --- a/previews/PR1153/DataAssimilation/MTEN/index.html +++ b/previews/PR1153/DataAssimilation/MTEN/index.html @@ -55,4 +55,4 @@ done done -

See (Storto and Randriamampianina, 2010) for more details.

+

See (Storto and Randriamampianina, 2010) for more details.

diff --git a/previews/PR1153/DataAssimilation/NWECHKEVO/index.html b/previews/PR1153/DataAssimilation/NWECHKEVO/index.html index 2e2255965..776cbb738 100644 --- a/previews/PR1153/DataAssimilation/NWECHKEVO/index.html +++ b/previews/PR1153/DataAssimilation/NWECHKEVO/index.html @@ -53,4 +53,4 @@ NWECHKEVO:UA 13 001 003 0.79264193785264E-05 -0.15031046611816E-04 0.21385134119954E+03 -0.33856415073342E-04 0.42661347477312E-05 NWECHKEVO:UA 13 001 004 0.21090675053822E-05 0.31713133370971E-05 0.21377935010403E+03 -0.40445121858208E-04 -0.54989449665528E-05 NWECHKEVO:UA 13 001 005 0.30451493480920E-04 -0.18284403001908E-04 0.21545646796919E+03 -0.42130887042681E-04 0.14684047934687E-04 -....

up to timestep 180 (hard-coded; the first 3 hours if the timestep is 1 minute)

Plotting

The results are easily plotted with any graphing utility (e.g. gnuplot).
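Before plotting, the fixed-format NWECHKEVO lines have to be split into columns. A minimal Python sketch (treating the three integer fields after the tag as index columns and the remaining fields as floating-point diagnostics is an assumption based on the sample above):

```python
def parse_nwechkevo(lines, tag="NWECHKEVO:UA"):
    """Split NWECHKEVO log lines into integer index columns and float columns.

    Returns a list of (indices, values) tuples. Fortran-style exponents
    such as 0.79264E-05 are parsed by float() directly.
    """
    records = []
    for line in lines:
        parts = line.split()
        if not parts or parts[0] != tag:
            continue  # skip unrelated log lines
        ints = [int(p) for p in parts[1:4]]
        vals = [float(p) for p in parts[4:]]
        records.append((ints, vals))
    return records

sample = [
    "NWECHKEVO:UA 13 001 003 0.79264193785264E-05 -0.15031046611816E-04 "
    "0.21385134119954E+03 -0.33856415073342E-04 0.42661347477312E-05",
]
recs = parse_nwechkevo(sample)
print(recs[0][0])  # -> [13, 1, 3]
```

The resulting series can then be fed to gnuplot, matplotlib or any other graphing utility.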

+....

up to timestep 180 (hard-coded; the first 3 hours if the timestep is 1 minute)

Plotting

The results are easily plotted with any graphing utility (e.g. gnuplot).

diff --git a/previews/PR1153/DataAssimilation/ObservationOperators/index.html b/previews/PR1153/DataAssimilation/ObservationOperators/index.html index 93232c53d..64b9dc490 100644 --- a/previews/PR1153/DataAssimilation/ObservationOperators/index.html +++ b/previews/PR1153/DataAssimilation/ObservationOperators/index.html @@ -104,4 +104,4 @@ ENDDO : - : + : diff --git a/previews/PR1153/DataAssimilation/Screening/index.html b/previews/PR1153/DataAssimilation/Screening/index.html index 77c6af769..da77b495f 100644 --- a/previews/PR1153/DataAssimilation/Screening/index.html +++ b/previews/PR1153/DataAssimilation/Screening/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

Screening

Introduction

Screening (configuration 002 of ARPEGE/IFS model) carries out quality control of observations.

A useful presentation (Martin Ridal) from the "Hirlam-B Training Week on HARMONIE system" training course is available here: MR_screenandminim.pdf. Most of the information on this page is based on his presentation.

Inputs

  • First guess (the same file with 5 different names):

    • ICMSHMIN1INIT
    • ICMSHMIN1IMIN
    • ICMRFMIN10000
    • ELSCFMIN1ALBC000
    • ELSCFMIN1ALBC
  • Input/output ODB directory structure

    • ${d_DB}/ECMA
    • ${d_DB}/ECMA.${base1}
  • Constants and statistics (MAY NEED TO BE UPDATED)

    • correl.dat
    • sigmab.dat
    • rszcoef_fmt
    • errgrib
    • rt_coef_atovs_newpred_ieee.dat
    • bcor_noaa.dat
    • chanspec_noaa.dat
    • rmtberr_noaa.dat
    • cstlim_noaa.dat
  • Namelist: See %screening in nam/harmonie_namelists.pm

Screening tasks

(Based on Martin Ridal's presentation).

  • Preliminary check of observations
    • Check of completeness of the reports
    • Check if station altitude is present
    • Check of the reporting practice for SYNOP & TEMP mass observations
  • Blacklisting: A blacklist is applied to discard observations of known poor quality and/or that cannot be properly handled by the data assimilation. A selection of variables for assimilation is done using the data selection part of the blacklist file and the information hard-coded in Arpege/Aladin (orographic rejection limit, land-sea rejection...). Decisions based on the blacklist are fed back to the CMA. Blacklisting is defined in src/bla/mf_blacklist.b
  • Background quality control: flags are assigned to observations – 1 => probably correct, 2 => probably incorrect, 3 => incorrect.
  • Vertical consistency of multilevel report:
    • Duplicated levels in multi-level reports are removed
    • If 4 consecutive layers are found to be of suspicious quality then these layers are rejected
  • Removal of duplicated reports
    • In the case of co-located AIREP reports of the same observation type (time, position), some or all of the content of one of the reports is rejected
  • Redundancy check
    • performed for active reports that are co-located and originate from the same station
    • LAND SYNOP: the report closest to the centre of the screening time window with most active data is retained
    • SHIP SYNOP: redundant if the moving platforms are within a circle of 1° radius; see src/arpifs/obs_preproc/sufglim.F90, RSHIDIS = 111000._JPRB
    • TEMP and PILOT: same stations are considered at the same time in the redundancy check
    • A SYNOP mass observation is redundant if there are any TEMP geopotential height observations (made at the same time and at the same station) that are no more than 50hPa above the SYNOP mass observation
  • Thinning: high-resolution data is thinned to limit spatially correlated observation errors and to reduce the data volume
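As an illustration of the redundancy rules above, here is a hypothetical sketch of the 50 hPa SYNOP/TEMP check (the function name and the exact comparison are illustrative assumptions; the operational check lives in the IFS screening code). Recall that pressure decreases with height, so a TEMP level "above" the SYNOP observation has a lower pressure:

```python
def synop_mass_redundant(p_synop_hpa, temp_levels_hpa):
    """Illustrative check: a SYNOP mass observation is redundant if any
    TEMP geopotential height observation from the same station and time
    lies no more than 50 hPa above it (lower pressure, within 50 hPa)."""
    return any(0.0 <= p_synop_hpa - p <= 50.0 for p in temp_levels_hpa)

# A TEMP level at 980 hPa is 20 hPa above a 1000 hPa SYNOP -> redundant
print(synop_mass_redundant(1000.0, [980.0, 850.0]))  # -> True
```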

Output

The quality control information is written to the input ECMA ODB(s) and to a newly created CCMA ODB, to be used by the 3D-VAR minimization.

A valuable summary about screening decisions can be found in HM_Date_YYYYMMDDHH.html:

  • Look for “SCREENING STATISTICS” to get:
    • STATUS summary
    • EVENT summary
    • Number of variables, departures and missing departures
    • Diagnostic JO-table
    • CCMA ODB and updated ECMA ODB

Screening Events listed under "EVENT SUMMARY OF REPORTS:"

Code  Description
1     NO DATA IN THE REPORT
2     ALL DATA REJECTED
3     BAD REPORTING PRACTICE
4     REJECTED DUE TO RDB FLAG
5     ACTIVATED DUE TO RDB FLAG
6     ACTIVATED BY WHITELIST
7     HORIZONTAL POSITION OUT OF RANGE
8     VERTICAL POSITION OUT OF RANGE
9     TIME OUT OF RANGE
10    REDUNDANT REPORT
11    REPORT OVER LAND
12    REPORT OVER SEA
13    MISSING STATION ALTITUDE
14    MODEL SUR. TOO FAR FROM STAT. ALT.
15    REPORT REJECTED THROUGH THE NAMELIST
16    FAILED QUALITY CONTROL
+

Screening

Introduction

Screening (configuration 002 of ARPEGE/IFS model) carries out quality control of observations.

A useful presentation (Martin Ridal) from the "Hirlam-B Training Week on HARMONIE system" training course is available here: MR_screenandminim.pdf. Most of the information on this page is based on his presentation.

Inputs

  • First guess (the same file with 5 different names):

    • ICMSHMIN1INIT
    • ICMSHMIN1IMIN
    • ICMRFMIN10000
    • ELSCFMIN1ALBC000
    • ELSCFMIN1ALBC
  • Input/output ODB directory structure

    • ${d_DB}/ECMA
    • ${d_DB}/ECMA.${base1}
  • Constants and statistics (MAY NEED TO BE UPDATED)

    • correl.dat
    • sigmab.dat
    • rszcoef_fmt
    • errgrib
    • rt_coef_atovs_newpred_ieee.dat
    • bcor_noaa.dat
    • chanspec_noaa.dat
    • rmtberr_noaa.dat
    • cstlim_noaa.dat
  • Namelist: See %screening in nam/harmonie_namelists.pm

Screening tasks

(Based on Martin Ridal's presentation).

  • Preliminary check of observations
    • Check of completeness of the reports
    • Check if station altitude is present
    • Check of the reporting practice for SYNOP & TEMP mass observations
  • Blacklisting: A blacklist is applied to discard observations of known poor quality and/or that cannot be properly handled by the data assimilation. A selection of variables for assimilation is done using the data selection part of the blacklist file and the information hard-coded in Arpege/Aladin (orographic rejection limit, land-sea rejection...). Decisions based on the blacklist are fed back to the CMA. Blacklisting is defined in src/bla/mf_blacklist.b
  • Background quality control: flags are assigned to observations – 1 => probably correct, 2 => probably incorrect, 3 => incorrect.
  • Vertical consistency of multilevel report:
    • Duplicated levels in multi-level reports are removed
    • If 4 consecutive layers are found to be of suspicious quality then these layers are rejected
  • Removal of duplicated reports
    • In the case of co-located AIREP reports of the same observation type (time, position), some or all of the content of one of the reports is rejected
  • Redundancy check
    • performed for active reports that are co-located and originate from the same station
    • LAND SYNOP: the report closest to the centre of the screening time window with most active data is retained
    • SHIP SYNOP: redundant if the moving platforms are within a circle of 1° radius; see src/arpifs/obs_preproc/sufglim.F90, RSHIDIS = 111000._JPRB
    • TEMP and PILOT: same stations are considered at the same time in the redundancy check
    • A SYNOP mass observation is redundant if there are any TEMP geopotential height observations (made at the same time and at the same station) that are no more than 50hPa above the SYNOP mass observation
  • Thinning: high-resolution data is thinned to limit spatially correlated observation errors and to reduce the data volume

Output

The quality control information is written to the input ECMA ODB(s) and to a newly created CCMA ODB, to be used by the 3D-VAR minimization.

A valuable summary about screening decisions can be found in HM_Date_YYYYMMDDHH.html:

  • Look for “SCREENING STATISTICS” to get:
    • STATUS summary
    • EVENT summary
    • Number of variables, departures and missing departures
    • Diagnostic JO-table
    • CCMA ODB and updated ECMA ODB

Screening Events listed under "EVENT SUMMARY OF REPORTS:"

Code  Description
1     NO DATA IN THE REPORT
2     ALL DATA REJECTED
3     BAD REPORTING PRACTICE
4     REJECTED DUE TO RDB FLAG
5     ACTIVATED DUE TO RDB FLAG
6     ACTIVATED BY WHITELIST
7     HORIZONTAL POSITION OUT OF RANGE
8     VERTICAL POSITION OUT OF RANGE
9     TIME OUT OF RANGE
10    REDUNDANT REPORT
11    REPORT OVER LAND
12    REPORT OVER SEA
13    MISSING STATION ALTITUDE
14    MODEL SUR. TOO FAR FROM STAT. ALT.
15    REPORT REJECTED THROUGH THE NAMELIST
16    FAILED QUALITY CONTROL
diff --git a/previews/PR1153/DataAssimilation/SingleObs/index.html b/previews/PR1153/DataAssimilation/SingleObs/index.html index e7bb2af2c..d0ecda2fe 100644 --- a/previews/PR1153/DataAssimilation/SingleObs/index.html +++ b/previews/PR1153/DataAssimilation/SingleObs/index.html @@ -13,4 +13,4 @@ 37 } else { 38 $nprocx=1; 39 $nprocy=1; -40 }
  • Launch the single observation impact experiment:

    ./Harmonie start DTG=2012061003 DTGEND=2012061006
  • The resulting analysis file can be found as $SCRATCH/hm_home/<exp>/archive/2012/06/10/06/MXMIN1999+0000. You can now diagnose the 3D-VAR analysis increments of the sinob-experiment by taking the difference between the analysis MXMIN1999+0000 and the first guess, $SCRATCH/hm_home/<exp>/archive/2012/06/10/03/ICMSHHARM+0003. Plot horizontal and vertical cross-sections of temperature and other variables using your favorite software (EpyGram, for example).

  • Note that you can change position of observation, observation error, variable to be observed etc. Investigate these options by taking a closer look at the script Create_single_obs.

    Read more about radiance single observation experiments here. In ec:/smx/sinob_wiki_ml you will also find OBSOUL_amsua7, a file for generating a satellite radiance AMSU-A channel 7 single observation impact experiment.

    +40 }
  • Launch the single observation impact experiment:

    ./Harmonie start DTG=2012061003 DTGEND=2012061006
  • The resulting analysis file can be found as $SCRATCH/hm_home/<exp>/archive/2012/06/10/06/MXMIN1999+0000. You can now diagnose the 3D-VAR analysis increments of the sinob-experiment by taking the difference between the analysis MXMIN1999+0000 and the first guess, $SCRATCH/hm_home/<exp>/archive/2012/06/10/03/ICMSHHARM+0003. Plot horizontal and vertical cross-sections of temperature and other variables using your favorite software (EpyGram, for example).

  • Note that you can change position of observation, observation error, variable to be observed etc. Investigate these options by taking a closer look at the script Create_single_obs.

    Read more about radiance single observation experiments here. In ec:/smx/sinob_wiki_ml you will also find OBSOUL_amsua7, a file for generating a satellite radiance AMSU-A channel 7 single observation impact experiment.

    diff --git a/previews/PR1153/DataAssimilation/StructureFunctions/index.html b/previews/PR1153/DataAssimilation/StructureFunctions/index.html index 9333a391f..fb608af45 100644 --- a/previews/PR1153/DataAssimilation/StructureFunctions/index.html +++ b/previews/PR1153/DataAssimilation/StructureFunctions/index.html @@ -90,4 +90,4 @@ ecp stab_your_eda_exp.bal.gz ec:/smx/jbdata/. (with your own filename and directory) ``` - also create a tar-file with all `*.xy`, `*.y`, `*.cv`, `*.bal` and `*.cvt` and put on ecfs for future diagnostical purposes) These new files are you final background error statistics to be diagnosed (compared with STEP 1 ones perhaps) and inserted to your data assimilation by modyfying `include.ass` (as in bullet 3 above) to point to your new files.

    Diagnosis of background error statistics

    1. Diagnosis of background error statistics is a rather complicated task. To get an idea of what the correlations and covariances should look like, see the article: Berre, L., 2000: Estimation of synoptic and meso scale forecast error covariances in a limited area model. Mon. Wea. Rev., 128, 644-667. Software for investigating and graphically illustrating different aspects of the background error statistics has been developed, and statistics generated for different domains have been investigated, using the AccordDaTools package. With this software you can also compare your newly generated background error statistics with those generated for other HARMONIE domains. This will give you an idea of whether your statistics seem reasonable. To diagnose the newly derived background error statistics, follow these instructions:

    2. Get the code and scripts:

    3. Run Jb diagnostics script:

    1. The AccordDaTools package also provides two tools for plotting the data produced by jbdiagnose: plotjbbal and plotjbdiag. plotjbbal plots Jb balances for different parameters. plotjbdiag produces spectral density (spdens) and vertical correlation (vercor) diagnostic plots for your structure functions. For example:

    Run 3DVAR/4DVAR with the new background error statistics

    1. create hm_home/jb_da. Then cd $HOME/hm_home/jb_da.

    2. create experiment by typing

      ~hlam/Harmonie setup -r ~hlam/harmonie_release/git/tags/harmonie-43h2.2.1
    3. In scr/include.ass set JBDIR=ec:/$uid/jbdata (uid being your user id, in this example ec:/smx/jbdata), set f_JBCV to the name of your .cv file in ec:/$uid/jbdata (without .gz) and set f_JBBAL to the name of your .bal file in ec:/$uid/jbdata (without .gz) (in this example, f_JBCV=stab_METCOOPD_65_20200601_360.cv and f_JBBAL=stab_METCOOPD_65_20200601_360.bal). Add these three lines instead of the three lines in include.ass that follow right after the elif statement: elif [ "$DOMAIN" = METCOOP25D ]; then. If the domain is other than METCOOP25D, look for the corresponding branch for that domain.

    4. From $HOME/hm_home/jb_da launch experiment by typing

      ~hlam/Harmonie start DTG=2021010100 DTGEND=2021010103
    5. The resulting analysis file can be found under $TEMP/hm_home/jb_da/archive/2021/01/01/03 and on ec:/$uid/harmonie/2021/01/01/03; it will be called MXMIN1999+0000. To diagnose the 3D-VAR analysis increments of the jb_da-experiment, copy the files MXMIN1999+0000 (analysis) and ICMSHHARM+0003 (first guess) to $SCRATCH. The first guess (background) file can be found in $TEMP/hm_home/jb_da/archive/2021/01/01/00 and ec:/$uid/harmonie/jb_da/2021/01/01/00. Convert from FA file format to GRIB with the gl software ($SCRATCH/hm_home/jb_da/bin/gl) by typing ./gl -p MXMIN1999+0000 and ./gl -p ICMSHANAL+0000. Then plot the difference between the files with your favourite software. Plot horizontal and vertical cross-sections of temperature and other variables (with EpyGram, for example).

    6. Now you have managed to insert the newly generated background error statistics into the assimilation system, to carry out a full-scale data assimilation run and to plot the analysis increments. The next natural step to further diagnose the background error statistics is to carry out a single observation impact experiment utilizing your newly generated background error statistics. Note the variables REDNMC and REDZONE in include.ass. REDNMC is the scaling factor for the background error statistics (default value 0.6/0.9 for METCOOP25D/NEW_DOMAIN). REDZONE describes how far from the lateral boundaries (in km) observations need to be located to be assimilated (default value 150/100 for METCOOP25D/NEW_DOMAIN).

    In-line Interpolation and Extrapolation of Jb-statistics

    In case you do not have existing background error statistics derived for your domain, there is a built-in technical possibility to use Jb files from another domain derived with the same number of vertical levels. From these host Jb files, background error statistics are then interpolated or extrapolated to the current domain configuration. The assumption, which is in general questionable, is that the statistics derived on the host domain are also valid for the current domain. If the longest side of the host domain is shorter than the longest side of the current domain, an extrapolation of the background error covariance spectra is needed. Such extrapolation should be avoided over a wide range of wavenumbers; therefore it is recommended that the longest side of the host Jb file be as long as or longer than the longest side of the current domain. The interpolation is invoked by setting JB_INTERPOL=yes and JB_REF_DOMAIN=$HOST_JB in ecf/config_exp.h, where $HOST_JB is for example METCOOP25B. These settings activate the running of the script jbconv.sh (in case no Jb files are present for the current domain), called from Fetch_assim_data.

    On-going work & future developments

    Recent and on-going work as well as plans for future developments:

    References

    + also create a tar-file with all `*.xy`, `*.y`, `*.cv`, `*.bal` and `*.cvt` files and put it on ecfs for future diagnostic purposes. These new files are your final background error statistics, to be diagnosed (compared with the STEP 1 ones, perhaps) and inserted into your data assimilation by modifying `include.ass` (as in bullet 3 above) to point to your new files.

    Diagnosis of background error statistics

    1. Diagnosis of background error statistics is a rather complicated task. To get an idea of what the correlations and covariances should look like, see the article: Berre, L., 2000: Estimation of synoptic and meso scale forecast error covariances in a limited area model. Mon. Wea. Rev., 128, 644-667. Software for investigating and graphically illustrating different aspects of the background error statistics has been developed, and statistics generated for different domains have been investigated, using the AccordDaTools package. With this software you can also compare your newly generated background error statistics with those generated for other HARMONIE domains. This will give you an idea of whether your statistics seem reasonable. To diagnose the newly derived background error statistics, follow these instructions:

    2. Get the code and scripts:

    3. Run Jb diagnostics script:

    1. The AccordDaTools package also provides two tools for plotting the data produced by jbdiagnose: plotjbbal and plotjbdiag. plotjbbal plots Jb balances for different parameters. plotjbdiag produces spectral density (spdens) and vertical correlation (vercor) diagnostic plots for your structure functions. For example:

    Run 3DVAR/4DVAR with the new background error statistics

    1. create hm_home/jb_da. Then cd $HOME/hm_home/jb_da.

    2. create experiment by typing

      ~hlam/Harmonie setup -r ~hlam/harmonie_release/git/tags/harmonie-43h2.2.1
    3. In scr/include.ass set JBDIR=ec:/$uid/jbdata (uid being your user id, in this example ec:/smx/jbdata), set f_JBCV to the name of your .cv file in ec:/$uid/jbdata (without .gz) and set f_JBBAL to the name of your .bal file in ec:/$uid/jbdata (without .gz) (in this example, f_JBCV=stab_METCOOPD_65_20200601_360.cv and f_JBBAL=stab_METCOOPD_65_20200601_360.bal). Add these three lines instead of the three lines in include.ass that follow right after the elif statement: elif [ "$DOMAIN" = METCOOP25D ]; then. If the domain is other than METCOOP25D, look for the corresponding branch for that domain.

    4. From $HOME/hm_home/jb_da launch experiment by typing

      ~hlam/Harmonie start DTG=2021010100 DTGEND=2021010103
    5. The resulting analysis file can be found under $TEMP/hm_home/jb_da/archive/2021/01/01/03 and on ec:/$uid/harmonie/2021/01/01/03; it will be called MXMIN1999+0000. To diagnose the 3D-VAR analysis increments of the jb_da-experiment, copy the files MXMIN1999+0000 (analysis) and ICMSHHARM+0003 (first guess) to $SCRATCH. The first guess (background) file can be found in $TEMP/hm_home/jb_da/archive/2021/01/01/00 and ec:/$uid/harmonie/jb_da/2021/01/01/00. Convert from FA file format to GRIB with the gl software ($SCRATCH/hm_home/jb_da/bin/gl) by typing ./gl -p MXMIN1999+0000 and ./gl -p ICMSHANAL+0000. Then plot the difference between the files with your favourite software. Plot horizontal and vertical cross-sections of temperature and other variables (with EpyGram, for example).

    6. Now you have managed to insert the newly generated background error statistics into the assimilation system, to carry out a full-scale data assimilation run and to plot the analysis increments. The next natural step to further diagnose the background error statistics is to carry out a single observation impact experiment utilizing your newly generated background error statistics. Note the variables REDNMC and REDZONE in include.ass. REDNMC is the scaling factor for the background error statistics (default value 0.6/0.9 for METCOOP25D/NEW_DOMAIN). REDZONE describes how far from the lateral boundaries (in km) observations need to be located to be assimilated (default value 150/100 for METCOOP25D/NEW_DOMAIN).

    In-line Interpolation and Extrapolation of Jb-statistics

    In case you do not have existing background error statistics derived for your domain, there is a built-in technical possibility to use Jb files from another domain derived with the same number of vertical levels. From these host Jb files, background error statistics are then interpolated or extrapolated to the current domain configuration. The assumption, which is in general questionable, is that the statistics derived on the host domain are also valid for the current domain. If the longest side of the host domain is shorter than the longest side of the current domain, an extrapolation of the background error covariance spectra is needed. Such extrapolation should be avoided over a wide range of wavenumbers; therefore it is recommended that the longest side of the host Jb file be as long as or longer than the longest side of the current domain. The interpolation is invoked by setting JB_INTERPOL=yes and JB_REF_DOMAIN=$HOST_JB in ecf/config_exp.h, where $HOST_JB is for example METCOOP25B. These settings activate the running of the script jbconv.sh (in case no Jb files are present for the current domain), called from Fetch_assim_data.

    On-going work & future developments

    Recent and on-going work as well as plans for future developments:

    References

    diff --git a/previews/PR1153/DataAssimilation/Surface/CANARI/index.html b/previews/PR1153/DataAssimilation/Surface/CANARI/index.html index e2b9db92f..328d96f2c 100644 --- a/previews/PR1153/DataAssimilation/Surface/CANARI/index.html +++ b/previews/PR1153/DataAssimilation/Surface/CANARI/index.html @@ -17,4 +17,4 @@ export ODB_MERGEODB_DIRECT= ... optional direct ODB merge, If your ODB was not merged previously use 1
  • Concerning the observation use, another file is necessary, but it is of no interest to CANARI (just a part of the variational analysis code that is not controlled by a logical key!). The file can be obtained on "tori" via gget var.misc.rszcoef_fmt.01.

    ln -s rszcoef_fmt var.misc.rszcoef_fmt.01
  • The climatological files

    ln  -s  climfile_${mm}  ICMSHANALCLIM
     ln  -s  climfile_${mm2} ICMSHANALCLI2
  • The namelist file

    ln -s namelist fort.4 
  • The ISBA files

  • run CANARI

    MASTERODB -c701 -vmeteo -maladin -eANAL -t1. -ft0 -aeul

    OUTPUTs

    NODE*

    Sample of script is attached.

    As a part of the system training in Copenhagen in 2008, Roger prepared an introduction to CANARI, which can be found in HarmonieSystemTraining2008/Lecture/SurfaceAssimilation on hirlam.org

    References

    +ln -s G_file ICMSHANALFGIN

    run CANARI

    MASTERODB -c701 -vmeteo -maladin -eANAL -t1. -ft0 -aeul

    OUTPUTs

    NODE*

    Sample of script is attached.

    As a part of the system training in Copenhagen in 2008, Roger prepared an introduction to CANARI, which can be found in HarmonieSystemTraining2008/Lecture/SurfaceAssimilation on hirlam.org

    References

    diff --git a/previews/PR1153/DataAssimilation/Surface/CANARI_EKF_SURFEX/index.html b/previews/PR1153/DataAssimilation/Surface/CANARI_EKF_SURFEX/index.html index eaa536e1a..92ffc2488 100644 --- a/previews/PR1153/DataAssimilation/Surface/CANARI_EKF_SURFEX/index.html +++ b/previews/PR1153/DataAssimilation/Surface/CANARI_EKF_SURFEX/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Surface variables assimilated / read in EKF_MAIN

    From cycle 37 EKF is implemented in research/development mode. The following tiles and variables are modified:

    NATURE

    WG2/WG1/TG2/TG1

    The uppermost two levels in ISBA of soil moisture and temperature are assimilated. With CANARI/CANARI_OI_MAIN by an OI method, by CANARI_SURFEX_EKF by an Extended Kalman Filter (EKF).

    For 2012 it is planned to re-write OI_MAIN/EKF_MAIN into the same binary, in order to be able to apply the work done for OI_MAIN in EKF_MAIN and thus reduce the maintenance costs.

    +

    Surface variables assimilated / read in EKF_MAIN

    From cycle 37 EKF is implemented in research/development mode. The following tiles and variables are modified:

    NATURE

    WG2/WG1/TG2/TG1

    The uppermost two levels in ISBA of soil moisture and temperature are assimilated. With CANARI/CANARI_OI_MAIN by an OI method, by CANARI_SURFEX_EKF by an Extended Kalman Filter (EKF).

    For 2012 it is planned to re-write OI_MAIN/EKF_MAIN into the same binary, in order to be able to apply the work done for OI_MAIN in EKF_MAIN and thus reduce the maintenance costs.

    diff --git a/previews/PR1153/DataAssimilation/Surface/CANARI_OI_MAIN/index.html b/previews/PR1153/DataAssimilation/Surface/CANARI_OI_MAIN/index.html index 0e201ba0d..69f54e7ae 100644 --- a/previews/PR1153/DataAssimilation/Surface/CANARI_OI_MAIN/index.html +++ b/previews/PR1153/DataAssimilation/Surface/CANARI_OI_MAIN/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Surface variables assimilated / read in OI_main

    CANARI_OI_MAIN is the surface assimilation scheme which emulates what is done in CANARI for old_surface, but using the external surface scheme SURFEX.

    The default surface model is SURFEX and the default surface assimilation scheme is CANARI_OI_MAIN.

    NATURE

    WG2/WG1/TG2/TG1

    The uppermost two levels in ISBA of soil moisture and temperature are assimilated. With CANARI/CANARI_OI_MAIN by an OI method, by CANARI_SURFEX_EKF by an Extended Kalman Filter (EKF).

    SNOW

    The snow analysis is performed in CANARI and is controlled by the key LAESNM. This is set to true by default in scr/RunCanari. If running with SURFEX, it also needs to be true in scr/OI_main, as the SURFEX snow then needs to be updated by the analysis done in CANARI.

    SEA

    SST/SIC

    The only option for SST/SIC at the moment is to take it from the boundaries.

    • ecf/config_exp.h :SST=BOUNDARY

    If you are using boundaries from IFS, the task Interpol_sst will interpolate SST from your boundary file, taking into account that SST in the IFS files is not defined over land (as for HIRLAM), and will also use an extrapolation routine to propagate the SST into narrow fjords.

    There is an SST analysis built into CANARI, but it is not used by HARMONIE or METEO-FRANCE.

    WATER

    LAKE temperature

    Lake temperatures are updated in OI_main and are extrapolated from the land surface temperatures.

    TOWN

    ROAD temperature

    Only used when TEB is activated (key: LAROME). The increment for TG2 is added to ROAD layer 3.

    +

    Surface variables assimilated / read in OI_main

    CANARI_OI_MAIN is the surface assimilation scheme which emulates what is done in CANARI for old_surface, but using the external surface scheme SURFEX.

    The default surface model is SURFEX and the default surface assimilation scheme is CANARI_OI_MAIN.

    NATURE

    WG2/WG1/TG2/TG1

    The uppermost two levels in ISBA of soil moisture and temperature are assimilated. With CANARI/CANARI_OI_MAIN by an OI method, by CANARI_SURFEX_EKF by an Extended Kalman Filter (EKF).

    SNOW

    The snow analysis is performed in CANARI and is controlled by the key: LAESNM. This is set default to be true in scr/RunCanari. And if running with SURFEX this will need to be true also in scr/OI_main as the SURFEX snow then needs to be updated by the analysis done in CANARI.

    SEA

    SST/SIC

    The only option for SST/SIC at the moment is to take it from the boundaries.

    • ecf/config_exp.h :SST=BOUNDARY

    If you are using boundaries from IFS the task Interpol_sst will interpolate sst from your boundary file and take into account that SST in the IFS files is not defined over land (as for HIRLAM) and also use an extra-polation routine to propagate the SST into narrow fjords.

    There is a SST analysis built-in in CANARI but not used by HARMONIE or METEO-FRANCE.

    WATER

    LAKE temperature

    Lake temperatures are updated in OI_main and are extrapolated from the land surface temperatures.

    TOWN

    ROAD temperature

    Only used when TEB is activated (key: LAROME). Increment for TG2 is added to to ROAD layer 3.

    diff --git a/previews/PR1153/DataAssimilation/Surface/SurfaceAnalysis/index.html b/previews/PR1153/DataAssimilation/Surface/SurfaceAnalysis/index.html index 76cf6a91b..7931cfc21 100644 --- a/previews/PR1153/DataAssimilation/Surface/SurfaceAnalysis/index.html +++ b/previews/PR1153/DataAssimilation/Surface/SurfaceAnalysis/index.html @@ -16,4 +16,4 @@ (edit then nam/LISTE_NOIRE_DIAP to insert, e.g. at the last line, following 1 SHIP 24 11 DBKR 03062012 - + diff --git a/previews/PR1153/EPS/BDSTRATEGY/index.html b/previews/PR1153/EPS/BDSTRATEGY/index.html index 60689a3b7..cfe54ccfb 100644 --- a/previews/PR1153/EPS/BDSTRATEGY/index.html +++ b/previews/PR1153/EPS/BDSTRATEGY/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Boundary strategies for HarmonEPS: SLAF and EC ENS

Presently there are two available options for choosing boundaries when running HarmonEPS: EC ENS or SLAF. In the branch harmonEPS-40h1.1, SLAF is set as default.

| File | Settings for SLAF (default in branch harmonEPS-40h1.1) | Settings for EC ENS |
|---|---|---|
| ecf/config_exp.h | BDSTRATEGY=simulate_operational | BDSTRATEGY=eps_ec |
| ecf/config_exp.h | BDINT=1 (can be set to a larger value) | BDINT=3 (or larger, hourly input is not possible) |
| suites/harmonie.pm |  | Comment out SLAF settings: #SLAFLAG, #SLAFDIFF, #SLAFK |
| suites/harmonie.pm | 'ENSBDMBR' => [ 0] | 'ENSBDMBR' => [ 0, 1..10] (or any other members from EC ENS you would like to use) |

For more information about how to treat the settings in harmonie.pm, see here. Note that BDSTRATEGY=eps_ec uses EC ENS data as stored in the GLAMEPS archive (as ECMWF does not store model levels in MARS). Only EC ENS at 00 UTC and 12 UTC is in this archive, with 3 h output, hence you need to use BDINT=3 for this option.
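
The BDINT constraint described above can be expressed as a small check (a hypothetical helper, not part of HARMONIE):

```python
# Illustrative check of the boundary-interval rule described above:
# eps_ec boundaries come from a 3-hourly archive, so BDINT must be a
# multiple of 3; SLAF can use hourly boundaries.
def valid_bdint(bdstrategy, bdint):
    if bdstrategy == "eps_ec":
        return bdint >= 3 and bdint % 3 == 0
    return bdint >= 1

print(valid_bdint("eps_ec", 3))   # → True
print(valid_bdint("eps_ec", 1))   # → False
```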


    diff --git a/previews/PR1153/EPS/Howto/index.html b/previews/PR1153/EPS/Howto/index.html index c2db0435e..83017d698 100644 --- a/previews/PR1153/EPS/Howto/index.html +++ b/previews/PR1153/EPS/Howto/index.html @@ -47,4 +47,4 @@ 'FirstHour' => sub { my $mbr = shift; return $ENV{StartHour} % &Env('FCINT',$mbr); } - );

ANAATMO is straightforward: only the control members need an exception from blending, so using a hash is most appropriate. Similarly for FCINT. For PHYSICS we have used an array and the fact that the array will be recycled. Thus member 0 will be the AROME control, while member 1 will be the ALARO control. The reason why we did not simply put a 2-element array [ 'arome','alaro' ] to be repeated is that, since the ECMWF perturbations come in +/- pairs, we don't want all the '+' perturbations to always run with the same physics (and the '-' perturbations with the other type). Therefore, we added a second pair with the order reversed, to alternate +/- perturbations between AROME and ALARO members. ENSCTL follows the same pattern as PHYSICS. Note the need for 3-digit numbers in ENSCTL; at present this is necessary to avoid parsing errors in the preparation step of ecFlow.

Note also how we have used ENSBDMBR. For both the AROME control (member 0) and the ALARO control (member 1), we have used the EC EPS control member 0 to provide boundaries. The syntax 1..20 is a Perl shorthand for the list 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20.
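
The array-recycling rule described above can be sketched as follows (illustrative Python, not the harmonie.pm code; the helper name is made up):

```python
# Member-indexed arrays are recycled: member N gets element N modulo
# the array length. With the 4-element PHYSICS array below, the +/-
# perturbation pairs (2,3), (4,5), ... alternate between physics.
PHYSICS = ["arome", "alaro", "alaro", "arome"]

def physics_for_member(mbr):
    """Return the physics option for ensemble member `mbr`."""
    return PHYSICS[mbr % len(PHYSICS)]

print([physics_for_member(m) for m in range(6)])
# → ['arome', 'alaro', 'alaro', 'arome', 'arome', 'alaro']
```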

Note added after changeset [12537]: The setting of ENSBDMBR created a race condition in the boundary extraction for runs at ECMWF. This is hopefully solved by the new definition for BDDIR, which makes use of the possibility of having a subroutine to compute the member-specific settings. Another example where a subroutine came in handy was the setting of FirstHour.
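
The FirstHour subroutine shown earlier computes the member's first forecast hour as the start hour modulo its FCINT; a sketch of that arithmetic:

```python
# Sketch of the FirstHour rule from suites/harmonie.pm:
# FirstHour = StartHour modulo the member's FCINT.
def first_hour(start_hour, fcint):
    return start_hour % fcint

# A member with 6-hourly cycling started at 09 UTC gets FirstHour 3.
print(first_hour(9, 6))  # → 3
```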

    Further reading

    More specific instructions and information about known problems can be found here.


    diff --git a/previews/PR1153/EPS/SLAF/Get_pertdia.pl.pm/index.html b/previews/PR1153/EPS/SLAF/Get_pertdia.pl.pm/index.html index 8c821df0d..2b56544b3 100644 --- a/previews/PR1153/EPS/SLAF/Get_pertdia.pl.pm/index.html +++ b/previews/PR1153/EPS/SLAF/Get_pertdia.pl.pm/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -
    +
    diff --git a/previews/PR1153/EPS/SLAF/index.html b/previews/PR1153/EPS/SLAF/index.html index 45d0a4413..5e21d373e 100644 --- a/previews/PR1153/EPS/SLAF/index.html +++ b/previews/PR1153/EPS/SLAF/index.html @@ -23,4 +23,4 @@ ... 009 42 36 06 0.95 28.92 130.72 127.48 11 009 48 36 06 0.95 14.80 176.10 175.48 11 -

The SLAFK can then be adjusted to achieve a uniform level of STDV for all members. Note that the response may differ between seasons and will vary between IFS versions. An example of SLAF diagnostics from MetCoOp can be seen in the figure below.

    Examples

    Below is an example for 2016052006 for the two different approaches of SLAF described above:


    diff --git a/previews/PR1153/EPS/SPP/index.html b/previews/PR1153/EPS/SPP/index.html index e67c80a74..2910cc28b 100644 --- a/previews/PR1153/EPS/SPP/index.html +++ b/previews/PR1153/EPS/SPP/index.html @@ -65,4 +65,4 @@ pattern 3 for CLDDPTHDP using seed 980493159 KGET_SEED_SPP: ICE_CLD_WGT 10008 1362729695 pattern 4 for ICE_CLD_WGT using seed 1362729695 -...

    would give us

| Perturbation | raw pattern | scaled pattern |
|---|---|---|
| PSIGQSAT | S001EZDIAG01 | S002EZDIAG01 |
| CLDDPTH | S003EZDIAG01 | S004EZDIAG01 |
| CLDDPTHDP | S005EZDIAG01 | S006EZDIAG01 |
| ICE_CLD_WGT | S007EZDIAG01 | S008EZDIAG01 |

    and so on
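
The numbering above follows a simple pattern: perturbation n stores its raw pattern in level 2n-1 of EZDIAG01 and the scaled pattern in level 2n. A sketch (the helper name is hypothetical):

```python
# Hypothetical helper reproducing the EZDIAG01 level numbering above:
# perturbation n (1-based) → raw level 2n-1, scaled level 2n.
def ezdiag_levels(n):
    return (f"S{2 * n - 1:03d}EZDIAG01", f"S{2 * n:03d}EZDIAG01")

print(ezdiag_levels(1))  # → ('S001EZDIAG01', 'S002EZDIAG01')
print(ezdiag_levels(4))  # → ('S007EZDIAG01', 'S008EZDIAG01')
```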

• SPPT pattern: EZDIAG02 (same in all levels)

• SPP tendencies PtendU: EZDIAG03

• SPP tendencies PtendV: EZDIAG04

• SPP tendencies PtendT: EZDIAG05

• SPP tendencies PtendQ: EZDIAG06

    Suggestions for parameters to include in SPP:

| Parameter | Description | Deterministic value cy43 | Suggested range of values | Suggestion for parameter to correlate with | Person responsible for implementing |
|---|---|---|---|---|---|
|  | Terminal fall velocities of rain, snow and graupel |  |  |  | Sibbo |
| RFRMIN(39) | Depo_rate_graupel |  | RFRMIN 39 and 40 should approximately respect log10C = -3.55 x + 3.89, see eq. 6.2 on p. 108 in the meso-NH documentation: [https://hirlam.org/trac/attachment/wiki/HarmonieSystemDocumentation/EPS/SPP/sciICE3doc_p3.pdf Doc] |  | Pirkka |
| RFRMIN(40) | Depo_rate_snow |  | RFRMIN 39 and 40 should approximately respect log10C = -3.55 x + 3.89, see eq. 6.2 on p. 108 in the meso-NH documentation: [https://hirlam.org/trac/attachment/wiki/HarmonieSystemDocumentation/EPS/SPP/sciICE3doc_p3.pdf Doc] |  | Pirkka |
| RFRMIN(16) | Distr_snow_c |  |  | to be correlated with RFRMIN(17) |  |
| RFRMIN(17) | Distr_snow_x |  |  | to be correlated with RFRMIN(16) |  |
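
The constraint quoted for RFRMIN(39)/(40) is a log-linear relation between the pair (x, C); a sketch of evaluating it (illustrative only, the function name is made up):

```python
# The meso-NH constraint quoted above: log10(C) = -3.55*x + 3.89,
# so C follows from x as a power of ten. Illustrative helper only.
def c_from_x(x):
    return 10 ** (-3.55 * x + 3.89)

print(round(c_from_x(1.0), 2))  # → 2.19
```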

    Experiments

    List with cy43h22 experiments is here: [wiki:HarmonieSystemDocumentation/EPS/ExplistSPPcy43 List of experiments]

    A guide for running the tuning experiments is here: [wiki:HarmonieSystemDocumentation/EPS/HowtoSPPcy43 Guide]


    diff --git a/previews/PR1153/EPS/SPPImplementation/index.html b/previews/PR1153/EPS/SPPImplementation/index.html index 60628f1ce..ae623856e 100644 --- a/previews/PR1153/EPS/SPPImplementation/index.html +++ b/previews/PR1153/EPS/SPPImplementation/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    The SPP implementation in IAL and HARMONIE

    The Stochastically Perturbed Parameterizations scheme (SPP) introduces stochastic perturbations to values of chosen closure parameters representing efficiencies or rates of change in parameterized atmospheric (sub)processes. See here for more information. See the main SPP documentation for selection of settings.

Controlling routines

The SPP data structures and logic are controlled by the following routines:

| Routine | Description |
|---|---|
| src/arpifs/module/spp_mod.F90 | Defines SPP scheme types TSPP_CONFIG_PAR and TSPP_CONFIG for the parameter config and the overall config, respectively |
| src/arpifs/module/spp_mod_type.F90 | Harmonie-specific data types TSPP_CONFIG_TYPE, ATM_SPP_VARS, SFX_VARS, control and the methods CLEAR_SSP_TYPE, SET_SPP_TYPE, APPLY_SPP, APPLY_SPP_SURFEX, DIA_SPP, SET_ALL_ATM_SPP, SET_ALL_SFX_SPP, CLEAR_ALL_ATM_SPP, CLEAR_ALL_SFX_SPP |
| src/surfex/SURFEX/modd_sfx_spp.F90 | SURFEX-specific data types, control and methods CLEAR_SFX_SPP, SET_SFX_SPP, APPLY_SFX_SPP, CLEAR_ALL_SFX_SPP, SPP_MASK, SPP_DEMASK, PREP_SPP_SFX. Partly duplicates spp_mod_type.F90 |
| src/arpifs/namelist/namspp.nam.h | The SPP namelist |
| src/arpifs/setup/get_spp_conf.F90 | Sets up defaults and reads the SPP namelist. Initialises the SPG parameters |
| src/arpifs/phys_dmn/ini_spp.F90 | Initialises the pattern used for SPP |
| src/arpifs/phys_dmn/evolve_spp.F90 | Control routine for pattern propagation |
| src/mse/internals/aroset_spp.F90 | Initialises the SURFEX part of SPP |

    Note that the control routines shared with IFS will be totally rewritten, and much neater, with the introduction of CY49T1. See e.g. spp_def_mod.F90, spp_gen_mod.F90

    SPG routines

The pattern used for SPP within HARMONIE is SPG, and the code for this is found under src/utilities/spg. For the propagation of the pattern we find the routine EVOLVE_ARP_SPG in src/arp/module/spectral_arp_mod.F90.
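
A time-correlated pattern of this kind is typically evolved as a first-order autoregressive process with time scale TAU. The following is an illustrative sketch of that idea, not the SPG or EVOLVE_ARP_SPG code:

```python
# Illustrative AR(1) evolution of a single stochastic-pattern value:
# correlation decays as exp(-dt/tau), and fresh Gaussian noise is
# injected so the stationary standard deviation stays at `sdev`.
import math
import random

def evolve(value, dt, tau, sdev, rng=random):
    alpha = math.exp(-dt / tau)
    return alpha * value + sdev * math.sqrt(1.0 - alpha ** 2) * rng.gauss(0.0, 1.0)

# With sdev = 0 the evolution is purely the deterministic decay.
print(evolve(1.0, 1.0, 1.0, 0.0))  # → exp(-1) ≈ 0.3679
```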

    Applying the patterns

In apl_arome.F90 the HARMONIE-specific data types are initialised with SET_ALL_ATM_SPP and SET_ALL_SFX_SPP. These routines group the different parameters and connect them to a pattern and to the correct diagnostic field EZDIAG if requested.

    Applying the patterns in the upper air part

In the routine where a specific parameter is used, the pattern is applied by calling APPLY_SPP. This is done for each parameter according to the table below.

| Perturbation | Routine |
|---|---|
| RADGR | src/arpifs/phys_dmn/apl_arome.F90 |
| RADSN | src/arpifs/phys_dmn/apl_arome.F90 |
| RFAC_TWOC | src/arpifs/phys_dmn/vdfexcuhl.F90 |
| RZC_H | src/arpifs/phys_dmn/vdfexcuhl.F90 |
| RZL_INF | src/arpifs/phys_dmn/vdfexcuhl.F90 |
| RZMFDRY | src/arpifs/phys_dmn/vdfhghtnhl.F90 |
| RZMBCLOSURE | src/arpifs/phys_dmn/vdfhghtnhl.F90 |
| CLDDPTHDP | src/arpifs/phys_dmn/vdfhghtnhl.F90 |
| RLWINHF | src/arpifs/phys_radi/recmwf.F90 |
| RSWINHF | src/arpifs/phys_radi/recmwf.F90 |
| PSIGQSAT | src/mpa/micro/internals/condensation.F90 |
| ICE_CLD_WGT | src/mpa/micro/internals/condensation.F90 |
| ICENU | src/mpa/micro/internals/rain_ice_old.F90 |
| KGN_ACON | src/mpa/micro/internals/rain_ice_old.F90 |
| KGN_SBGR | src/mpa/micro/internals/rain_ice_old.F90 |
| ALPHA | src/mpa/micro/internals/rain_ice_old.F90 |
| RZNUC | src/mpa/micro/internals/rain_ice_old.F90 |

    Applying the patterns in SURFEX

As SURFEX should have no dependencies on external modules, the data is copied into the internal SURFEX SPP data structure in AROSET_SPP, called from ARO_GROUND_PARAM.

    For SURFEX the parameter table looks like

| Perturbation | Routine |
|---|---|
| CV | src/surfex/SURFEX/coupling_isban.F90 |
| LAI | src/surfex/SURFEX/coupling_isban.F90 |
| RSMIN | src/surfex/SURFEX/coupling_isban.F90 |
| CD_COEFF | src/surfex/SURFEX/surface_cd.F90 |
| CH_COEFF | src/surfex/SURFEX/surface_aero_cond.F90 |

In SURFEX we also have to pack/unpack the data arrays to use only the active points for a specific tile or patch. This is done in the SPP_MASK and SPP_DEMASK routines found in src/surfex/SURFEX/modd_sfx_spp.F90 and called from src/surfex/SURFEX/coupling_surf_atmn.F90. At the time of writing, returning the diagnostics of the pattern does not work satisfactorily.
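
The pack/unpack idea behind SPP_MASK and SPP_DEMASK can be sketched as follows (illustrative only, not the SURFEX Fortran code; function names mirror the routines but the signatures are invented):

```python
# Pack a field down to its active (masked) points, then scatter it
# back to the full grid, filling inactive points with a fill value.
def spp_mask(field, mask):
    """Keep only the values where mask is True."""
    return [v for v, m in zip(field, mask) if m]

def spp_demask(packed, mask, fill=0.0):
    """Scatter packed values back onto the full-length mask layout."""
    it = iter(packed)
    return [next(it) if m else fill for m in mask]

packed = spp_mask([1.0, 2.0, 3.0], [True, False, True])
print(packed)                                   # → [1.0, 3.0]
print(spp_demask(packed, [True, False, True]))  # → [1.0, 0.0, 3.0]
```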

    The additional code changes done for SPP in SURFEX can be viewed here


    diff --git a/previews/PR1153/EPS/SPPT/index.html b/previews/PR1153/EPS/SPPT/index.html index ba3261ad1..df007a9e3 100644 --- a/previews/PR1153/EPS/SPPT/index.html +++ b/previews/PR1153/EPS/SPPT/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    SPPT

    ** Since CY46h1 SPPT is no longer supported in HarmonEPS **

The SPPT configuration within HarmonEPS is being tested over the period 2016053000 to 2016060500 using the MetCoOp domain. Some problems were found with the default pattern generator, and it was therefore decided to use the Stochastic Pattern Generator (SPG).

    Below is a table of experiments which will be completed in order to find a suitable configuration of the SPG control parameters TAU (time correlation scale) and XLCOR (length correlation scale). The value of the standard deviation of the perturbation amplitudes (SDEV_SDT) is kept fixed at 0.20 as is the clipping ratio of the perturbations (XCLIP_RATIO_SDT=5.0). These values along with the default value for XLCOR come from suggested settings used by Mihaly Szucs.
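
The clipping described above limits the perturbation amplitude to XCLIP_RATIO_SDT * SDEV_SDT; a sketch of that rule (illustrative helper, not the model code):

```python
# Clip a pattern value to ±(XCLIP_RATIO_SDT * SDEV_SDT), using the
# fixed values quoted above (SDEV_SDT=0.20, XCLIP_RATIO_SDT=5.0).
def clip_pattern(value, sdev_sdt=0.2, xclip_ratio_sdt=5.0):
    limit = xclip_ratio_sdt * sdev_sdt
    return max(-limit, min(limit, value))

print(clip_pattern(1.7))   # → 1.0 (limit is 5.0 * 0.2 = 1.0)
print(clip_pattern(0.3))   # → 0.3 (within the limit, unchanged)
```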

First of all, keeping the XLCOR parameter constant (at the default value of 2000000), TAU will be varied between 1 h and 24 h as shown in the table. In the table below, TAU is given in seconds and XLCOR in metres.

    The experiments are started by typing ~hlam/Harmonie start DTG=2016053000 DTGEND=2016060500 BUILD=yes

| Experiment Name | Who | DTG | DTGEND | Version | Domain | TAU | XLCOR | Description and Comments | Status | Verification |
|---|---|---|---|---|---|---|---|---|---|---|
| SPPT_only_40h111_2000km_1h | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 3600 | 2000000 | XLCOR constant, TAU varying | Suspended | No |
| SPPT_only_40h111_2000km_3h | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 10800 | 2000000 | XLCOR constant, TAU varying | Crash | No |
| SPPT_only_40h111_2000km_6h | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 21600 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |
| SPPT_only_40h111_2000km_9h | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 32400 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |
| SPPT_only_40h111_2000km_12h | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 43200 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |
| SPPT_only_40h111_2000km_15h | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 54000 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |
| SPPT_only_40h111_2000km_18h | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 64800 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |
| SPPT_only_40h111_2000km_21h | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 75600 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |
| SPPT_only_40h111_2000km_24h | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 86400 | 2000000 | XLCOR constant, TAU varying | Complete | Yes |

Once these experiments have been completed, testing will commence with the time correlation scale kept constant while the spatial scale is varied. Below is a table of experiments to this effect.

    A default value of 8h will be used for TAU as per the suggested value from Mihaly Szucs.

| Experiment Name | Who | DTG | DTGEND | Version | Domain | TAU | XLCOR | Description and Comments | Status | Verification |
|---|---|---|---|---|---|---|---|---|---|---|
| SPPT_only_40h111_100km_8h | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 100000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_200km_8h | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 200000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_400km_8h | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 400000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_600km_8h | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 600000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_800km_8h | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 800000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_1000km_8h | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 1000000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_1200km_8h | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 1200000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_1500km_8h | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 1500000 | XLCOR varying, TAU constant | Complete | Yes |
| SPPT_only_40h111_1800km_8h | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 28800 | 1800000 | XLCOR varying, TAU constant | Complete | Yes |

The next step in the SPPT sensitivity analysis will be a set of experiments designed to test the impact of the SDEV parameter. Default values of 8 h for TAU and 2000000 for XLCOR are used.

The XCLIP_RATIO_SDT parameter will also be adjusted as a function of the SDEV_SDT value, initially keeping the clipping at 1.0 (the clipping value is XCLIP_RATIO_SDT * SDEV_SDT), but also exploring other options.

| Experiment Name | Who | DTG | DTGEND | Version | Domain | SDEV_SDT | XCLIP_RATIO_SDT | Description and Comments | Status | Verification |
|---|---|---|---|---|---|---|---|---|---|---|
| SPPT_only_40h111_sdev01 | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.1 | 10.0 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev02 | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.2 | 5.0 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev03 | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.3 | 3.3 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev04 | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.4 | 2.5 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev05 | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.5 | 2.0 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev06 | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.6 | 1.65 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev07 | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.7 | 1.4 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev08 | Janne | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.8 | 1.25 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev09 | Karoliina | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 0.9 | 1.1 | SDEV and XCLIP varying | Complete | Yes |
| SPPT_only_40h111_sdev10 | Alan | 2016053000 | 2016060500 | harmonEPS40h1.1.1(17985) | METCOOP25B | 1.0 | 1.0 | SDEV and XCLIP varying | Complete | Yes |
    +

    SPPT

    ** Since CY46h1 SPPT is no longer supported in HarmonEPS **

    The SPPT configuration within HarmonEPS is being tested over the period 2016053000 to 2016060500 using the !MetCoOp domain. It has been found that there are some problems with the default pattern generator and thus it has been decided to use the Stochastic Pattern Generator (SPG).

    Below is a table of experiments which will be completed in order to find a suitable configuration of the SPG control parameters TAU (time correlation scale) and XLCOR (length correlation scale). The value of the standard deviation of the perturbation amplitudes (SDEV_SDT) is kept fixed at 0.20 as is the clipping ratio of the perturbations (XCLIP_RATIO_SDT=5.0). These values along with the default value for XLCOR come from suggested settings used by Mihaly Szucs.

    First of all, keeping the XLCOR parameter constant (set at the default value of 2000000), TAU will be varied between 1h and 24h as shown in the table. The value of TAU is in seconds in the table below. The value of XLCOR is in metres.

    The experiments are started by typing ~hlam/Harmonie start DTG=2016053000 DTGEND=2016060500 BUILD=yes

    = Experiment Name == Who == DTG == DTGEND == Version == Domain == TAU == XLCOR == Description and Comments == Status == Verification =
    SPPT_only_40h111_2000km_1hAlan20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B36002000000XLCOR constant, TAU varyingSuspendedNo
    SPPT_only_40h111_2000km_3hKaroliina20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B108002000000XLCOR constant, TAU varyingCrashNo
    SPPT_only_40h111_2000km_6hKaroliina20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B216002000000XLCOR constant, TAU varyingCompleteYes
    SPPT_only_40h111_2000km_9hAlan20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B324002000000XLCOR constant, TAU varyingCompleteYes
    SPPT_only_40h111_2000km_12hJanne20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B432002000000XLCOR constant, TAU varyingCompleteYes
    SPPT_only_40h111_2000km_15hKaroliina20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B540002000000XLCOR constant, TAU varyingCompleteYes
    SPPT_only_40h111_2000km_18hAlan20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B648002000000XLCOR constant, TAU varyingCompleteYes
    SPPT_only_40h111_2000km_21hJanne20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B756002000000XLCOR constant, TAU varyingCompleteYes
    SPPT_only_40h111_2000km_24hJanne20160530002016060500harmonEPS40h1.1.1(17985)METCOOP25B864002000000XLCOR constant, TAU varyingCompleteYes

    Once these experiments have been completed testing will commence on keeping the time correlation scale constant and the spatial scale will be varied. Below is a table of experiments to this effect.

    A default value of 8h will be used for TAU as per the suggested value from Mihaly Szucs.

    ||= Experiment Name =||= Who =||= DTG =||= DTGEND =||= Version =||= Domain =||= TAU =||= XLCOR =||= Description and Comments =||= Status =||= Verification =||
    || SPPT_only_40h111_100km_8h || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 100000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_200km_8h || Janne || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 200000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_400km_8h || Janne || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 400000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_600km_8h || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 600000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_800km_8h || Janne || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 800000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_1000km_8h || Karoliina || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 1000000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_1200km_8h || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 1200000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_1500km_8h || Karoliina || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 1500000 || XLCOR varying, TAU constant || Complete || Yes ||
    || SPPT_only_40h111_1800km_8h || Karoliina || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 28800 || 1800000 || XLCOR varying, TAU constant || Complete || Yes ||

    The next step in the SPPT sensitivity analysis will be a set of experiments designed to test the impact of the SDEV parameter. Default values of 8 h for TAU and 2000000 for XLCOR are used.

    The XCLIPRATIO_SDT parameter will also be adjusted as a function of the SDEV_SDT value (the clipping value is XCLIPRATIO_SDT * SDEV_SDT). Initially the clipping is kept at 1.0, but other options will also be explored.
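    Keeping the absolute clipping value at 1.0 means XCLIPRATIO_SDT is just the reciprocal of SDEV_SDT; the tabulated ratios below are (rounded) reciprocals. A quick sketch of the relation:

    ```shell
    # Clipping value = XCLIPRATIO_SDT * SDEV_SDT; holding it at 1.0 gives
    # XCLIPRATIO_SDT = 1 / SDEV_SDT.
    for sdev in 0.1 0.2 0.5 1.0; do
      awk -v s="$sdev" 'BEGIN {printf "SDEV_SDT=%s XCLIPRATIO_SDT=%.2f clip=%.2f\n", s, 1/s, s * (1/s)}'
    done
    ```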

    ||= Experiment Name =||= Who =||= DTG =||= DTGEND =||= Version =||= Domain =||= SDEV_SDT =||= XCLIPRATIO_SDT =||= Description and Comments =||= Status =||= Verification =||
    || SPPT_only_40h111_sdev01 || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.1 || 10.0 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev02 || Janne || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.2 || 5.0 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev03 || Karoliina || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.3 || 3.3 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev04 || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.4 || 2.5 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev05 || Janne || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.5 || 2.0 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev06 || Karoliina || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.6 || 1.65 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev07 || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.7 || 1.4 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev08 || Janne || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.8 || 1.25 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev09 || Karoliina || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 0.9 || 1.1 || SDEV and XCLIP varying || Complete || Yes ||
    || SPPT_only_40h111_sdev10 || Alan || 2016053000 || 2016060500 || harmonEPS40h1.1.1(17985) || METCOOP25B || 1.0 || 1.0 || SDEV and XCLIP varying || Complete || Yes ||
    diff --git a/previews/PR1153/EPS/Setup/index.html b/previews/PR1153/EPS/Setup/index.html index 90dcf496f..042c79ca2 100644 --- a/previews/PR1153/EPS/Setup/index.html +++ b/previews/PR1153/EPS/Setup/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -
    +
    diff --git a/previews/PR1153/EPS/System/index.html b/previews/PR1153/EPS/System/index.html index 3ed600583..029818581 100644 --- a/previews/PR1153/EPS/System/index.html +++ b/previews/PR1153/EPS/System/index.html @@ -46,4 +46,4 @@ ),

    To activate the change we also need to modify scr/Get_namelist, the script that builds the namelist, so that it takes the member_$ENSMBR change into account.

     ...
      forecast|dfi|traj4d)
         NAMELIST_CONFIG="$DEFAULT dynamics $DYNAMICS $PHYSICS ${DYNAMICS}_${PHYSICS} $SURFACE $EXTRA_FORECAST_OPTIONS member_$ENSMBR"
     ...

    Repeat this for all your members with the changes you would like to apply.
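    A self-contained sketch of what the change does (this is not the real scr/Get_namelist; the DEFAULT/DYNAMICS/PHYSICS values are illustrative only): for member 1, the string member_1 is appended to NAMELIST_CONFIG, selecting that member's namelist section.

    ```shell
    # Illustrative expansion of the NAMELIST_CONFIG line above for ENSMBR=1.
    ENSMBR=1
    DEFAULT="harmonie" DYNAMICS="nh" PHYSICS="arome" SURFACE="surfex" EXTRA_FORECAST_OPTIONS=""
    NAMELIST_CONFIG="$DEFAULT dynamics $DYNAMICS $PHYSICS ${DYNAMICS}_${PHYSICS} $SURFACE $EXTRA_FORECAST_OPTIONS member_$ENSMBR"
    echo "$NAMELIST_CONFIG"
    ```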

    diff --git a/previews/PR1153/ExperimentConfiguration/ConfigureYourExperiment/index.html b/previews/PR1153/ExperimentConfiguration/ConfigureYourExperiment/index.html index 74caf8d24..56127decf 100644 --- a/previews/PR1153/ExperimentConfiguration/ConfigureYourExperiment/index.html +++ b/previews/PR1153/ExperimentConfiguration/ConfigureYourExperiment/index.html @@ -316,4 +316,4 @@ MAIL_TESTBED= # testbed results summary

    Testbed

    export TESTBED_LIST="AROME AROME_1D AROME_3DVAR \
                          AROME_BD_ARO AROME_BD_ARO_IO_SERV \
                          HarmonEPS HarmonEPS_IFSENS \
    -                     AROME_CLIMSIM"
    + AROME_CLIMSIM" diff --git a/previews/PR1153/ExperimentConfiguration/How_to_use_hires_topography/index.html b/previews/PR1153/ExperimentConfiguration/How_to_use_hires_topography/index.html index 02dab46a7..25a3d9a36 100644 --- a/previews/PR1153/ExperimentConfiguration/How_to_use_hires_topography/index.html +++ b/previews/PR1153/ExperimentConfiguration/How_to_use_hires_topography/index.html @@ -187,4 +187,4 @@ AOSIP > 001:013-00600-105@20051219_00:00+0000 000 A/S i+ 0.000E+000 8.055E-003 548.990E-003 14.087E-003 AOSJP > 001:014-00600-105@20051219_00:00+0000 000 A/S j+ 0.000E+000 8.297E-003 461.020E-003 14.306E-003 AOSIM > 001:015-00600-105@20051219_00:00+0000 000 A/S i- 0.000E+000 8.280E-003 521.020E-003 14.863E-003 - AOSJM > 001:016-00600-105@20051219_00:00+0000 000 A/S i- 0.000E+000 8.454E-003 471.930E-003 15.079E-003

    Presently, these derivations are done automatically, so from the point of view of the technical implementation there is nothing for the user to worry about. However, further development and improvements will eventually be needed once the high-resolution source data on topography are used.

    Conclusion

    In order to replace the (relatively) coarse-resolution GTOPO30 topography with higher-resolution data (e.g., from Aster), it is enough to generate replacements for the gtopo30.hdr and gtopo30.dir files in the $HM_CLDATA/PGD directory, as described above.
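    The replacement step can be sketched as below. The my_aster_topo.* names are hypothetical stand-ins for files generated from the high-resolution source; only the gtopo30.hdr/gtopo30.dir names under $HM_CLDATA/PGD matter.

    ```shell
    # Stage Aster-derived replacements under $HM_CLDATA/PGD (illustrative paths).
    workdir=$(mktemp -d)
    HM_CLDATA="$workdir/climate"
    mkdir -p "$HM_CLDATA/PGD"
    : > "$workdir/my_aster_topo.hdr"   # stand-in header file
    : > "$workdir/my_aster_topo.dir"   # stand-in data file
    cp "$workdir/my_aster_topo.hdr" "$HM_CLDATA/PGD/gtopo30.hdr"
    cp "$workdir/my_aster_topo.dir" "$HM_CLDATA/PGD/gtopo30.dir"
    ls "$HM_CLDATA/PGD"
    ```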

    diff --git a/previews/PR1153/ExperimentConfiguration/ModelDomain/index.html b/previews/PR1153/ExperimentConfiguration/ModelDomain/index.html index 34ac1574a..dc406ad39 100644 --- a/previews/PR1153/ExperimentConfiguration/ModelDomain/index.html +++ b/previews/PR1153/ExperimentConfiguration/ModelDomain/index.html @@ -23,4 +23,4 @@ OUTGEO%PROJLAT = 60.0 OUTGEO%PROJLAT2 = 60.0 OUTGEO%PROJLON = 0.0, -/

    Running gl with this namelist:

    gl -n namelist_file

    will create a GRIB file with a constant orography which you can use for plotting.

    diff --git a/previews/PR1153/ExperimentConfiguration/Namelists/index.html b/previews/PR1153/ExperimentConfiguration/Namelists/index.html index e9b1b572a..d5e40d6a1 100644 --- a/previews/PR1153/ExperimentConfiguration/Namelists/index.html +++ b/previews/PR1153/ExperimentConfiguration/Namelists/index.html @@ -292,4 +292,4 @@ #NAMELIST=$WRK/$WDIR/namelist_forecast #Get_namelist forecast $NAMELIST NAMELIST=$HM_LIB/nam/namelist_forecast_with_a_unique_name -

    For namelists not present in the dictionary, you just copy them to your local nam directory.

    There is also a description on how to generate new namelist dictionaries here.
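    The approach of bypassing Get_namelist by pointing NAMELIST at a hand-edited file (as in the snippet further up) can be sketched self-contained; the $HM_LIB path and file name here are illustrative.

    ```shell
    # Sketch: use a uniquely named, hand-edited namelist instead of the
    # Get_namelist output (paths are placeholders for this demonstration).
    workdir=$(mktemp -d)
    HM_LIB="$workdir/lib"
    mkdir -p "$HM_LIB/nam"
    : > "$HM_LIB/nam/namelist_forecast_with_a_unique_name"   # your edited namelist
    NAMELIST=$HM_LIB/nam/namelist_forecast_with_a_unique_name
    echo "using $(basename "$NAMELIST")"
    ```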

    diff --git a/previews/PR1153/ExperimentConfiguration/PlatformConfiguration/index.html b/previews/PR1153/ExperimentConfiguration/PlatformConfiguration/index.html index b78bc4545..8c256c899 100644 --- a/previews/PR1153/ExperimentConfiguration/PlatformConfiguration/index.html +++ b/previews/PR1153/ExperimentConfiguration/PlatformConfiguration/index.html @@ -12,4 +12,4 @@ ./config-sh/config.YOURHOST ## YOURHOST task submission settings ./suites/harmonie.pm ## perl module to define ensemble settings ./ecf/config_exp.h ## your experiment definition (scientific type options) -./scr/include.ass ## assimilation specific settings

    But what if your host configuration is not available in the HARMONIE system? Host-specific configuration files in PATH_TO_HARMONIE/config-sh must be available for your host, as must configuration files for the compilation of the code. This documentation attempts to describe what is required.

    Host config files

    Env_system -> config-sh/config.YOURHOST

    The config.YOURHOST file defines host-specific variables such as some input directory locations. If YOURHOST is not already included in HARMONIE, it may be worth looking at the config.* files in config-sh/ to see what other people have done. The table below outlines the variables set in config-sh/config.YOURHOST and what they do:

    | Variable name | Description |
    | COMPCENTRE | controls special ECMWF solutions (such as MARS) where required. Set to LOCAL if you are unsure |
    | HARMONIE_CONFIG | defines the config file used by Makeup compilation |
    | MAKEUP_BUILD_DIR | location where Makeup compiles the HARMONIE code |
    | MAKE_OWN_PRECOMPILED | yes => install pre-compiled code in $PRECOMPILED |
    | PRECOMPILED | location of (optional) pre-compiled HARMONIE code |
    | E923_DATA_PATH | location of input data for E923, climate generation |
    | PGD_DATA_PATH | location of input data for PGD, SURFEX climate generation |
    | ECOSG_DATA_PATH | location of input data for ECOCLIMAP2G |
    | GMTED2010_DATA_PATH | location of HRES DEM |
    | SOILGRID_DATA_PATH | location of SOILGRID data |
    | HM_SAT_CONST | location of constants for satellite assimilation |
    | RTTOV_COEFDIR | location of RTTOV coefficients |
    | HM_DATA | location of the top working directory for the experiment |
    | HM_LIB | location of src/scripts and compiled code |
    | TASK_LIMIT | maximum number of jobs submitted by ECFLOW |
    | RSYNC_EXCLUDE | used to exclude .git* sub-directories from the copy of source code for compilation |
    | DR_HOOK_IGNORE_SIGNALS | environment variable used by Dr Hook to ignore certain "signals" |
    | HOST0 | defines the primary host name |
    | HOSTN | defines other host name(s) |
    | HOST_INSTALL | 0 => install on HOST0, 0:...:N => install on HOST0,...,HOSTN |
    | MAKE | make command; may need to be explicitly defined. Set to make for most platforms |
    | MKDIR | mkdir command (default: mkdir -p) |
    | JOBOUTDIR | where ECFLOW writes its log files |
    | ECCODES_DEFINITION_PATH | location of local ecCodes definition files |
    | BUFR_TABLES | location of local BUFR tables |
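    A minimal, illustrative fragment of a config-sh/config.YOURHOST (all values are placeholders; see the table above for the full set of variables). These files are sourced by the shell, so plain variable assignments are all that is needed.

    ```shell
    # Illustrative config.YOURHOST fragment; every value below is a placeholder.
    EXP=${EXP:-demo}                  # experiment name, normally set by HARMONIE
    COMPCENTRE=LOCAL                  # set to LOCAL unless you need ECMWF solutions
    HM_DATA=$HOME/hm_data/$EXP        # top working directory for the experiment
    TASK_LIMIT=20                     # max number of jobs submitted by ECFLOW
    MAKE=make                         # make command
    MKDIR="mkdir -p"                  # mkdir command
    JOBOUTDIR=$HM_DATA/ecflow_logs    # where ECFLOW writes its log files
    echo "COMPCENTRE=$COMPCENTRE TASK_LIMIT=$TASK_LIMIT"
    ```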

    Env_submit -> config-sh/submit.YOURHOST

    The Env_submit file uses perl to tell the HARMONIE scheduler how to execute programs: which programs should be run on multiple processors, and how batch submissions should be defined if required.

    | perl | description |
    | %backg_job | defines variables for jobs run in the background on HOST0 |
    | %scalar_job | defines variables for single-processor batch jobs |
    | %par_job | defines variables for multi-processor batch jobs |
    | @backg_list | list of tasks to be submitted as background jobs |
    | @scalar_list | list of tasks to be submitted as scalar jobs |
    | @par_list | list of tasks to be submitted as parallel jobs |
    | default | "wildcard" task name to define the default type of job for unlisted tasks |

    Host summary

    | YOURHOST | Host type | batch | Contact | Notes |
    | AEMET.cirrus.gnu | | | | |
    | AEMET.nimbus.ifort.mpi | | | | |
    | bi | | | SMHI | |
    | centos8 | | | | |
    | cirrus | | | | |
    | debian11 | | | | |
    | ecgb | | | | switched off |
    | ecgb-cca | ECMWF HPC with MPI dual host | slurm/PBS | | switched off |
    | ECMWF.atos | ECMWF Atos HPC with MPI | slurm | | |
    | fedora33 | | | | |
    | fedora34 | | | | |
    | KNMI.bullx_b720 | KNMI Atos HPC with MPI | slurm | Bert van Ulft | |
    | LinuxPC | General Linux PC, no MPI | none | | |
    | LinuxPC-MPI | General Linux PC with MPI | none | | |
    | LinuxPC-MPI-KNMI | KNMI Linux workstation (Fedora) | none | | |
    | LinuxPC-MPI-ubuntu | Ubuntu Linux PC with MPI | none | | |
    | LinuxPC-serial | | | | |
    | METIE.LinuxPC | METIE CentOS 6 PC with MPI | none | Eoin Whelan | |
    | METIE.LinuxPC8 | | | | |
    | METIE.LinuxRH7gnu | METIE Redhat 7 server with MPI | none | Eoin Whelan | |
    | METIE.LinuxRH7gnu-dev | | | | |
    | METIE.reaserve8 | | | | |
    | METIE.reaserve8musc | | | | |
    | nebula | | | | |
    | nebula-gnu | | | | |
    | opensuse | | | | |
    | SMHI.Linda4 | | | SMHI | |
    | SMHI.Linda5 | | | SMHI | |
    | stratus | | | | |
    | teho | | | | |
    | ubuntu18 | | | | |
    | ubuntu20 | | | | |
    | ubuntu20_nompi | | | | |
    | ubuntu2204.GNU | podman container | none | Eoin Whelan | container |
    | ubuntu2304.GNU | podman container | none | Eoin Whelan | container |
    | ubuntu2310.GNU | podman container | none | Eoin Whelan | container |
    | ubuntu2404.GNU | podman container | none | Eoin Whelan | container |
    | ubuntu2410.GNU | podman container | none | Eoin Whelan | container |
    | voima | | | | |

    Compilation config files

    Makeup

    config files required for compilation of code using Makeup ...

    More information on Makeup is available here: Build with Makeup

    Obsmon

    For config files required for compilation of obsmon check util/obsmon/config

    diff --git a/previews/PR1153/ExperimentConfiguration/UpdateNamelists/index.html b/previews/PR1153/ExperimentConfiguration/UpdateNamelists/index.html index 6e0315a40..0a59a7da1 100644 --- a/previews/PR1153/ExperimentConfiguration/UpdateNamelists/index.html +++ b/previews/PR1153/ExperimentConfiguration/UpdateNamelists/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -
    +
    diff --git a/previews/PR1153/ExperimentConfiguration/UseofObservation/index.html b/previews/PR1153/ExperimentConfiguration/UseofObservation/index.html index abba0897f..3959b5530 100644 --- a/previews/PR1153/ExperimentConfiguration/UseofObservation/index.html +++ b/previews/PR1153/ExperimentConfiguration/UseofObservation/index.html @@ -19,4 +19,4 @@ export PAOB_OBS=0 # PAOB not defined everywhere export SCATT_OBS=0 # Scatterometer data not defined everywhere export LIMB_OBS=0 # LIMB observations, GPS Radio Occultations -export RADAR_OBS=0 # Radar +export RADAR_OBS=0 # Radar diff --git a/previews/PR1153/ExperimentConfiguration/VerticalGrid/index.html b/previews/PR1153/ExperimentConfiguration/VerticalGrid/index.html index 502a1c019..5cbae465e 100644 --- a/previews/PR1153/ExperimentConfiguration/VerticalGrid/index.html +++ b/previews/PR1153/ExperimentConfiguration/VerticalGrid/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    HARMONIE Vertical Model Level Definitions

    HARMONIE vertical coordinate

    The HARMONIE model, like HIRLAM, is constructed on a general pressure-based and terrain-following vertical coordinate $\eta(p,p_s)$, where

    \[\eta(0,p_s) = 0\]

    and

    \[\eta(p_s,p_s) = 1\]

    The formulation corresponds to the ECMWF hybrid system. The model is formulated for a spherical coordinate system ($\lambda$, $\theta$), but in the code two metric coefficients $(h_x,h_y)$ have been introduced. This is done to prepare the model for any orthogonal coordinate system or map projection with axes (x,y).

    To represent the vertical variation of the dependent variables (U, V, T and Q), the atmosphere is divided into "nlev" layers. These layers are defined by the pressures at the interfaces between them (the `half-levels'). From the general expression

    \[p_{k+1/2} = A_{k+1/2} (n) + B_{k+1/2}(n) * p_s(x,y)\]

    for $k=0,1,...,nlev$

    the vertical surfaces for the half-levels are defined. Pure pressure surfaces are obtained for $B=0$ and pure $\sigma$ surfaces for $A=0$. The `full-level' pressure associated with each model level (the middle of two half-levels) is then determined accordingly.
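    One common way to make "determined accordingly" concrete is the arithmetic mean of the bounding half-level pressures (a standard choice for hybrid coordinates; the exact rule used in the code may differ):

    ```latex
    p_k = \tfrac{1}{2}\left(p_{k-1/2} + p_{k+1/2}\right)
        = \tfrac{1}{2}\left(A_{k-1/2} + A_{k+1/2}\right)
        + \tfrac{1}{2}\left(B_{k-1/2} + B_{k+1/2}\right) p_s
    ```

    so the full-level pressure is itself of the form $A_k + B_k\,p_s$ with averaged coefficients.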

    Definition of model levels in HARMONIE

    The script src/Vertical_levels.pl contains definitions of the vertical levels that have been used in the HIRLAM community for research and/or operational purposes. Currently the default model setup uses the 65-level structure derived by Per Unden, SMHI. Model level definitions for commonly used vertical structures in HARMONIE are listed below.

    • FourtyLevel: HIRLAM-40 model levels (same as HIRLAM 6.2.1, Nov 2003 - HIRLAM 7.0, 2006)
    • SixtyLevel: HIRLAM-60 model levels (same as HIRLAM 7.1, March 2007 - 2012)
    • MF_60: MF-60 model levels (same as Meteo France AROME since 2010)
    • SixtyfiveLevel: 65 model levels (same as HIRLAM 7.4, March 2012 - )
    • other levels: Prague87, MF70, 40 (ALADIN-40), ECMWF_60.

    Note that VLEV is the name of the set of A/B coefficients defining your levels, set in ecf/config_exp.h. There is, e.g., more than one definition for 60 levels. To print the levels, just run scr/Vertical_levels.pl.

    Usage: scr/Vertical_levels.pl [VLEV PRINT_OPTION] where:

    • VLEV: name of your level definition
    • PRINT_OPTION=AHALF: print A coefficients for VLEV
    • PRINT_OPTION=BHALF: print B coefficients for VLEV
    • PRINT_OPTION=NLEV: print number of levels for VLEV
    • PRINT_OPTION=NRFP3S: print NRFP3S namelist values for VLEV

    See here for ECMWF level definitions.

    When performing a HARMONIE experiment, users can select the vertical levels by changing VLEV in ecf/config_exp.h. If a non-standard level number is to be chosen, the script scr/Vertical_levels.pl needs to be edited to add the layer definition.

    Define new eta levels

    A brief description and some code on how to create new eta levels can be found here.

    There is also an interactive tool that can help you in creating a new set of levels.

    The method is based on a program by Pierre Bénard, Meteo France, that is described in this gmapdoc article.

    Relevant corresponding data set for different vertical structure

    HARMONIE 3D-VAR and 4D-VAR upper-air data assimilation needs background error structure functions for each given vertical layer structure. Note that the structure function data included in the reference HARMONIE repository const/jb_data is only useful for the reference configuration. Users that run 3DVAR/4DVAR are strongly recommended to derive proper structure function data, following the instructions in the HARMONIE wiki and using their own data archive, to avoid improper use of the structure functions.

    diff --git a/previews/PR1153/ExperimentConfiguration/namelist_sfx_forecast/index.html b/previews/PR1153/ExperimentConfiguration/namelist_sfx_forecast/index.html index 2e9d058ce..d0a84d9cc 100644 --- a/previews/PR1153/ExperimentConfiguration/namelist_sfx_forecast/index.html +++ b/previews/PR1153/ExperimentConfiguration/namelist_sfx_forecast/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -
    +
    diff --git a/previews/PR1153/ForecastModel/DDH/index.html b/previews/PR1153/ForecastModel/DDH/index.html index 1db100aa6..7ab21efe5 100644 --- a/previews/PR1153/ForecastModel/DDH/index.html +++ b/previews/PR1153/ForecastModel/DDH/index.html @@ -23,4 +23,4 @@ 'LHDMCI' => '.FALSE.,', 'LHDENT' => '.FALSE.,', }, -);
  • Description of the variables

    | Variable | Description |
    | LHDGLB | type of domain: global domain |
    | LHDZON | type of domain: zonal bands |
    | LHDDOP | type of domain: limited and isolated points |
    | LHDPRG | write global diagnostics on listing |
    | LHDPRZ | write zonal band diagnostics on listing |
    | LHDPRD | write limited domain diagnostics on listing |
    | LHDEFG | write global diagnostics on file |
    | LHDEFZ | write zonal band diagnostics on file |
    | LHDEFD | write limited domain diagnostics on file |
    | LHDHKS | budgets of mass, energy, momentum, RH, soil |
    | LHDMCI | budget of kinetic momentum |
    | LHDENT | budget of entropy |
    diff --git a/previews/PR1153/ForecastModel/Forecast/index.html b/previews/PR1153/ForecastModel/Forecast/index.html index 6675bdbfc..3a596b34f 100644 --- a/previews/PR1153/ForecastModel/Forecast/index.html +++ b/previews/PR1153/ForecastModel/Forecast/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Forecast

    scr/Forecast is the script that initiates the actual forecast run (ALADIN/AROME/ALARO, depending on FLAG and PHFLAG).

    • Input parameters: none.
    • Data: Boundary files (ELSCF*-files) and an initial file (fc_start). If data assimilation is used, fc_start is the analysis file; in case of dynamical adaptation, fc_start is the first boundary file. In case of AROME, a SURFEX initial file (SURFXINI.lfi) is also needed (see scr/Prep_ini_surfex).
    • Namelists: namelist templates nam/namelist_fcst${FLAG}_default are fetched based on FLAG and PHFLAG. The templates are completed in scr/Forecast based on the choices of NPROCX, NPROCY (see config-sh/submit.*), TFLAG, OUTINT, BDINT and REDUCELFI. In case of AROME also the namelists to control SURFEX-scheme nam/TEST.des and nam/EXSEG1.nam are needed.
    • Executables: as defined by MODEL.
    • Output: Forecast files (spectral files ICMSHALAD+*). In case of AROME, Surfex files containing the surface data (AROMOUT_*.lfi).

    Forecast namelists

    The current switches in the HARMONIE system (in ecf/config_exp.h) provide only limited control over the different aspects of the model. If you want more detailed control of specific schemes, you have to modify the various namelist options.

    In general, the different namelist options are documented in the source code modules (e.g. src/arp/module/*.F90). Information on some of the choices is listed below.

    NH-dynamics/advection/time stepping:

    • A detailed overview of such options has been given by Vivoda (2008).

    Upper air physics switches

    • Switches related to different schemes of ALADIN/ALARO physics, src/arp/module/yomphy.F90.
    • Switches related to physics schemes in AROME src/arp/module/yomarphy.F90.
    • Switches to tune different aspects of physics, src/arp/module/yomphy0.F90, src/arp/module/yomphy1.F90, src/arp/module/yomphy2.F90 and src/arp/module/yomphy3.F90
    • Switches related to HIRLAM physics, src/arp/module/yhloption.F90 and src/arp/setup/suhloption.F90.

    Initialization switch

    • Initialization is controlled by namelist NAMINI/NEINI, src/arp/module/yomini.F90.

    Horizontal diffusion switches

    • Horizontal diffusion is controlled by the namelist NAMDYN/RDAMP*, src/arp/module/yomdyn.F90#L55. The larger the coefficient, the less diffusion.

    MPP switches

    • The number of processors in HARMONIE is given in config-sh/submit.*. These values are transferred into src/arp/module/yomct0.F90#L276 and src/arp/module/yommp.F90.

    Surface SURFEX switches

    • The SURFEX scheme is controlled through namelist settings in nam/surfex_namelists.pm. The different options are described here.

    Archiving

    Archiving has a two-layer structure. First, all the needed analysis, forecast and field-extract files are stored in the ARCHIVE directory by scr/Archive_fc. This is where the postprocessing step expects to find the files.
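    The first layer can be sketched self-contained as below; the per-cycle directory layout and file name are illustrative only (the ICMSHALAD+* pattern is the spectral forecast output named earlier on this page).

    ```shell
    # Sketch of the first archiving layer: forecast output collected into $ARCHIVE.
    workdir=$(mktemp -d)
    ARCHIVE="$workdir/archive/2016/05/30/00"   # illustrative per-cycle directory
    mkdir -p "$ARCHIVE"
    : > "$workdir/ICMSHALAD+0003"              # stand-in spectral forecast file
    cp "$workdir"/ICMSHALAD+* "$ARCHIVE/"
    ls "$ARCHIVE"
    ```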

    At ECMWF, all the requested files are stored in ECFS, under the directory ECFSLOC, by the script scr/Archive_ECMWF.

    +

    Forecast

scr/Forecast is the script that initiates the actual forecast run (ALADIN/AROME/ALARO depending on FLAG and PHFLAG).

    • Input parameters: none.
• Data: Boundary files (ELSCF*-files) and the initial file (fc_start). If data assimilation is used, fc_start is the analysis file; in the case of dynamical adaptation, it is the first boundary file. In the case of AROME, a SURFEX initial file (SURFXINI.lfi), created by scr/Prep_ini_surfex, is also needed.
• Namelists: the namelist templates nam/namelist_fcst${FLAG}_default are fetched based on FLAG and PHFLAG. The templates are completed in scr/Forecast based on the choices of NPROCX, NPROCY (see config-sh/submit.*), TFLAG, OUTINT, BDINT and REDUCELFI. In the case of AROME, the namelists controlling the SURFEX scheme, nam/TEST.des and nam/EXSEG1.nam, are also needed.
    • Executables: as defined by MODEL.
• Output: Forecast files (spectral files ICMSHALAD+*). In the case of AROME, also SURFEX files containing the surface data (AROMOUT_*.lfi).

    Forecast namelists

The current switches in the HARMONIE system (in ecf/config_exp.h) provide only limited control over the different aspects of the model. Users who want more detailed control over specific schemes have to modify the various namelist options.

In general, the different namelist options are documented in the source code modules (e.g. src/arp/module/*.F90). Information on some of the choices is listed below.

    NH-dynamics/advection/time stepping:

• A detailed overview of such options has been given by Vivoda (2008).

    Upper air physics switches

    • Switches related to different schemes of ALADIN/ALARO physics, src/arp/module/yomphy.F90.
• Switches related to physics schemes in AROME, src/arp/module/yomarphy.F90.
• Switches to tune different aspects of physics, src/arp/module/yomphy0.F90, src/arp/module/yomphy1.F90, src/arp/module/yomphy2.F90 and src/arp/module/yomphy3.F90.
    • Switches related to HIRLAM physics, src/arp/module/yhloption.F90 and src/arp/setup/suhloption.F90.

    Initialization switch

    • Initialization is controlled by namelist NAMINI/NEINI, src/arp/module/yomini.F90.

    Horizontal diffusion switches

• Horizontal diffusion is controlled by namelist NAMDYN/RDAMP*, src/arp/module/yomdyn.F90#L55. The larger the coefficient, the less diffusion.

    MPP switches

• The number of processors in HARMONIE is given in config-sh/submit.*. These values are transferred into src/arp/module/yomct0.F90#L276 and src/arp/module/yommp.F90.

    Surface SURFEX switches

    • The SURFEX scheme is controlled through namelist settings in nam/surfex_namelists.pm. The different options are described here.

    Archiving

Archiving has a two-layer structure. First, all the needed analysis, forecast and field-extract files are stored in the ARCHIVE directory by scr/Archive_fc. This is where the post-processing step expects to find the files.

At ECMWF, all the requested files are stored in ECFS under the directory ECFSLOC by the script scr/Archive_ECMWF.


    Forecast Settings

This page gives some details and advice on appropriate settings for the HARMONIE-AROME forecast.

    Microphysics

    ICE-T

Switch ICE-T on by setting LICET=.TRUE. in harmonie_namelists.pm under &NAMPARAR in %arome. When using ICE-T (LICET), LOCND2 should be set to .TRUE., and LMODICEDEP preferably to .FALSE.. LICET overrides LKOGAN, so by default LKOGAN=F. Documentation: (Engdahl et al., 2020)

Description: ICE-T is a modified cloud microphysics scheme that builds upon ICE3 and OCN2D, with elements from the Thompson scheme from WRF. ICE-T was developed in cy40h1.1 for the purpose of better representation of supercooled liquid water, and downstream forecasts of atmospheric icing. The changes include stricter conditions for ice nucleation, less efficient collection of liquid water by snow and graupel, and a variable rain size distribution depending on the source of the rain. (Rain originating from melting snow or graupel has larger drops than rain originating from warm processes.)
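In nam/harmonie_namelists.pm the ICE-T settings above could look roughly as follows. This is a sketch only: the hash layout and quoting follow the usual harmonie_namelists.pm conventions, and only the three flag values are taken from the text above.

```perl
# Hypothetical excerpt from nam/harmonie_namelists.pm: enabling ICE-T
%arome = (
  'NAMPARAR' => {
    'LICET'      => '.TRUE.,',   # switch ICE-T on
    'LOCND2'     => '.TRUE.,',   # should be set together with ICE-T
    'LMODICEDEP' => '.FALSE.,',  # preferably off with ICE-T
  },
);
```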

    Shallow Convection

LSHALLOWMF activates (.TRUE.) or de-activates (.FALSE.) the DUAL (dry and moist) mass-flux shallow convection parameterisation. Note that with LSHALLOWMF=.FALSE. the mass-flux activity as a source term for TKE in the turbulence scheme (energy cascade) is also eliminated, as is the moist updraft transport contribution to the cloud scheme. For details of the convection scheme and its links to the cloud and turbulence schemes, see https://doi.org/10.5194/gmd-15-1513-2022.

The scale-aware convection scheme is activated by setting LSCAWAREMF=.TRUE.. This reduces the dry and moist (if present) mass flux using a hyperbolic tangent function scaled with the dry boundary-layer height $h$ for the dry updraft and the sub-cloud height plus cloud-layer depth $h+h_c$ for the moist updraft:

    \[f = \tanh\left(1.86 \frac{\Delta x}{h+h_c}\right)\]

    NOTE: this option can only be used when LSHALLOWMF=.TRUE..
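The reduction factor above can be sketched numerically. A minimal illustration follows; the function name and example values are ours, not from the model code.

```python
import math

def mf_scale_factor(dx, h, h_c=0.0):
    """Scale-aware mass-flux reduction factor f = tanh(1.86 * dx / (h + h_c)).

    dx  : horizontal grid spacing [m]
    h   : dry boundary-layer height (sub-cloud height for the moist updraft) [m]
    h_c : cloud-layer depth [m]; zero for the dry updraft
    """
    return math.tanh(1.86 * dx / (h + h_c))

# At coarse resolution f approaches 1 (little reduction); at high resolution
# f approaches 0, so the parameterised mass flux is scaled away as
# convection becomes resolved by the model grid.
```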

    To support the model when it is trying to build up convection itself, the setting LWTHRESH=.TRUE. can be used. Depending on the gridsize, a vertical velocity threshold is defined. If the absolute value of the vertical velocity in a grid column exceeds this threshold the shallow convection is shut down.

The LWTHRESH option has been updated so that the vertical velocity is now only diagnosed in the lowest 6 km, to prevent high vertical velocities not related to convection from being used.

The LWTHRESHMOIST option works like LWTHRESH, but only the parameterized moist convection is shut down when the threshold is met; the dry convection is not affected by this option (though it can be affected by LSCAWAREMF).

    Note

The LWTHRESH and LWTHRESHMOIST options can only be active when LSHALLOWMF=.TRUE..

    Turbulence scheme

    HARATU

HARATU (HArmonie with RAcmo TUrbulence scheme) is the default turbulence scheme in HARMONIE-AROME (HARATU=yes in config_exp.h), originally developed for RACMO (Regional Atmospheric Climate MOdel). The length scale of this turbulence scheme is described by Lenderink and Holtslag (2004). Note that HARATU has only been tested in combination with LSHALLOWMF=.TRUE. and CMF_UPDRAFT='DUAL'. The latter convection scheme provides input to the HARATU turbulence scheme to represent the important energy cascade (from large to small scales); see https://doi.org/10.5194/gmd-15-1513-2022.

'RDAMPVD' => '20.,',
'RDAMPVOR' => '20.,',

With a quadratic or cubic grid and non-zero VESL, these defaults have been found to be adequate. Without VESL, more diffusion, through lower RDAMP* values of 10 or even 1, is necessary.

    SLHD

Experiments at Météo France suggest not to use SLHD on hydrometeors; cf. the ASM 2020 presentation by Yann Seity.

    In ecf/config_exp.h

    LGRADSP=yes                             # Apply Wedi/Hortal vorticity dealiasing (yes|no)
LUNBC=yes                               # Apply upper nested boundary condition (yes|no)

    Sample configurations

    Coming soon...


    Near Real Time Aerosols

The model can be configured to use near-real-time aerosols from CAMS. This is done by setting USEAERO=camsnrt in ecf/config_exp.h, which leads to the retrieval of boundary files containing aerosol mass mixing ratio fields from CAMS. Other values of USEAERO are related to the use and generation of climatological (2D) aerosol. See scr/forecastmodelsettings.sh for further details.
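In ecf/config_exp.h this is a single setting (shown in the style of the other config_exp.h fragments in this documentation; the comment is ours):

```sh
USEAERO=camsnrt                         # Use CAMS near-real-time aerosol boundary fields
```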

    • Namelist NAMNRTAER contains switches related to n.r.t. aerosols in cloud microphysics under src/mpa.
• Namelist NAMPHY contains definitions for the src/arpifs switches LAEROSEA, LAEROLAN, LAEROSOO, LAERODES, LAEROVOL, LAEROSUL, LAEROMMR and LAERONRT. LAERONRT is set true when n.r.t. aerosols are used. The others are related to climatological aerosol and are set false when n.r.t. aerosols are used.
    • Aerosol fields in YAERO_NL are defined in namelist NAMGFL. Variable NAERO defines the number of available n.r.t. aerosol species (14).
• Namelist NAERAD contains the definition of NAER=1/0 to use or not use climatological aerosol for radiation. When LAERONRT is set true, NAER is set to 0.

    NAMNRTAER namelist

    The switches and some parameters can be set in NAMNRTAER (in nam/harmonie_namelists.pm)

• LCAMS_NRT: Switches on the use of CAMS aerosols in HARMONIE-AROME. The mass mixing ratio fields must be present in the first guess and the boundary conditions; the number and names of those fields are specified in the namelist NAMGFL.
• SSMINLO: Supersaturation at the surface level (default 0.05%). The supersaturation activates the condensation nuclei (CN) to obtain CCN.
• SSMINUP: Supersaturation above SSHEIGHT (default 0.08%).
• SSHEIGHT: Height above which the minimum SS is SSMINUP (default 100 m).
    • SSMAX: Maximum supersaturation (default 1.0%).
    • SSFACVV: Factor for dependence of SS with vertical velocity (0.0-1.0).
    • SSFACSS: Factor for dependence of SS with coarse sea salt (0.0-1.0).
• CCNMIN: Minimum number concentration of cloud condensation nuclei (CCN) inside the cloud; taken as 10E6 m-3 (10 cm-3). Other values can be considered, but probably not over 50 cm-3.
• CLDROPMIN: Minimum CDNC inside the cloud; practically the same as CCNMIN. Other values can be considered, but probably not over 50 cm-3.
    • IFNMINSIZE: Minimum radius of aerosol ice nucleating particles (default 0.01 micrometer).
• LMOCA_NRT: For obtaining the aerosol fields from MOCAGE (not yet in use).
• LAEIFN: Activates ice nuclei (mainly dust, hydrophobic organic matter and black carbon).
• LAERDRDEP: Activates aerosol dry deposition (FALSE by default).
• LAECCN2CLDR: FALSE by default, i.e. CDNC=CCN.
    • LAERSSEM: switch for sea salt emission (FALSE by default).
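Collected into namelist form, the defaults listed above would look roughly like this. This is a sketch only: the values are the documented defaults, while the group layout is illustrative.

```fortran
! Hypothetical NAMNRTAER excerpt with the documented defaults
&NAMNRTAER
  LCAMS_NRT=.TRUE.,    ! use CAMS n.r.t. aerosols
  SSMINLO=0.05,        ! surface-level supersaturation (%)
  SSMINUP=0.08,        ! supersaturation above SSHEIGHT (%)
  SSHEIGHT=100.,       ! height (m) above which the minimum SS is SSMINUP
  SSMAX=1.0,           ! maximum supersaturation (%)
  CCNMIN=10.E6,        ! minimum CCN concentration (m-3)
  IFNMINSIZE=0.01,     ! minimum radius of ice-nucleating particles (micrometres)
  LAERDRDEP=.FALSE.,   ! aerosol dry deposition off (default)
  LAERSSEM=.FALSE.,    ! sea salt emission off (default)
/
```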

    Documentation of OCND2 modification of ICE3/ICE4 microphysics in AROME

    Introduction

This option was implemented in the ICE3/ICE4 microphysics in 2014 to improve the performance of the HARMONIE-AROME model configuration in winter over the Arctic/subarctic region. The errors corrected were mainly missing low clouds in moderately cold conditions, an excess of ice clouds in severely cold weather, and an excess of cirrus clouds.

    Implementation in CY46 - switching on the parameterisation

To use the parameterisation, go to nam/harmonie_namelists.pm and set LOCND2 = .TRUE. in the NAMPARAR namelist.
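As a sketch, following the usual nam/harmonie_namelists.pm hash conventions (only the flag itself is from the text above; the surrounding layout is illustrative):

```perl
# Hypothetical excerpt from nam/harmonie_namelists.pm: switching on OCND2
%arome = (
  'NAMPARAR' => {
    'LOCND2' => '.TRUE.,',   # enable the OCND2 modification of ICE3/ICE4
  },
);
```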

    About the Code

In CY46, there are two coding versions of ICE3/ICE4 - rain_ice_old.F90 and rain_ice.F90. The variable CMICRO determines which is used: OLD3 for rain_ice_old.F90 and ICE3 for rain_ice.F90. The structure of the code differs between these versions, and since the time-stepping procedure is different, the results differ too. But the content of the OCND2 modification is the same in both versions. The modifications can be found by searching for

IF(OCND2) THEN
   --- new code ---
ENDIF

The main OCND2 modifications are:

1. Tuning factors for reducing the rate of deposition/evaporation of snow and graupel. See code block “1.2 COMPUTE SOME CONSTANT PARAMETERS” in rain_ice_old.F90 or in ice4_slow.F90. The tuning factors are then used later in rain_ice_old.F90; see code block “3.4.3 compute the deposition on rs: RVDEPS” for snow and “3.4.6 compute the deposition on rg: RVDEPG” for graupel. In the rain_ice.F90 framework it is all done in the routine ice4_slow.F90. More information about the tuning parameters is included later in this documentation.
2. Mask to limit computation: set by tuning parameters in code block “1.2 COMPUTE SOME CONSTANT PARAMETERS” in rain_ice_old.F90, or in aro_rain_ice.F90 within the rain_ice.F90 framework. For OCND2=FALSE the limits are hard-coded.
3. The cloud ice crystal concentration: modified with OCND2; see code block “3.1.1 compute the cloud ice concentration” in rain_ice_old.F90, or ice4_nucleation.F90 within the rain_ice.F90 framework.
4. Turn large cloud ice crystals into snow: see code block “3.4.5 B:” in rain_ice_old.F90, or ice4_fast_si.F90 within the rain_ice.F90 framework.
5. Omit collisions between snow and graupel, since the effect in nature is very small and it is thus better to omit them and speed up the computation a little. See code block “6.2.5” in rain_ice_old.F90, or ice4_fast_rg.F90.
6. Sub-grid-scale calculation of deposition/evaporation of cloud ice. See code block “7.2 Bergeron-Findeisen effect: RCBERI” in rain_ice_old.F90, or ice4_fast_si.F90 for the rain_ice.F90 setup.

There is also an important difference in condensation.F90: with OCND2, only liquid cloud water is handled within the statistical cloud scheme, not both ice and water as is the case with OCND2=F. With OCND2=F, the total cloud cover is calculated directly from the statistical cloud scheme. With OCND2=T, the total cloud cover is calculated as the sum of a liquid part, which is basically just the cloud cover from the statistical cloud scheme, and an ice part, which is based on the relative humidity with respect to ice and on the content of solid water species.

    There are two new routines for OCND2:

    1. icecloud.F90 is used for the sub grid-scale handling of relative humidity with respect to ice and thus for ice clouds. It is called from condensation.F90.
    2. ice4_fast_si.F90 is only used by the newer rain_ice.F90 routine. As already mentioned, it deals with deposition/evaporation of cloud ice.

    Tuning parameters

    The tuning parameters used specifically for OCND2 can be divided into three categories:

Parameters that only have an effect if OCND2 is set to TRUE, and that are used for SPP (April 2023):

Variable | Description
RFRMIN(21) | Tuning factor for ice clouds, such as cirrus. A larger value means a larger effect of the presence of solid water and thus more ice clouds. (The value is somewhat dependent on what kind of measurement one compares with, and on how thin a cirrus cloud should be to be counted as a cloud. A range of 0.5 to 3 should be enough.)

Parameters that only have an effect if OCND2 is set to TRUE but are currently (April 2023) not used in SPP:

Variable | Description
RFRMIN(12) | Threshold supersaturation with respect to ice in the supersaturated part of the grid box for treatment in the microphysics computation. A larger value gives more supersaturation and a somewhat faster computation. Values that are too large are physically unrealistic, but there seems to be no consensus about the best value.
RFRMIN(13) | Threshold mixing ratio for different non-vapour water species treated in the microphysics computation. Larger values result in faster computation, but possibly important processes, active when only small mixing ratios of water species are present, may be missed.
RFRMIN(15) | Ice crystal diameter (m) for conversion from cloud ice to snow. Larger values lead to more ice and less snow.
RFRMIN(27) | Experimental! Minimum temperature (K) used for the Meyers ice number concentration. Larger values give less ice for temperatures below RFRMIN(27).
RFRMIN(39) | Speed factor for the deposition/evaporation rate of graupel. Larger values give faster deposition/evaporation.
RFRMIN(40) | Speed factor for the deposition/evaporation rate of snow. Larger values give faster deposition/evaporation.

Parameters that have an effect even when OCND2 is not used, but were designed for OCND2:

Variable | Description
RFRMIN(1), RFRMIN(2), RFRMIN(3) and RFRMIN(4) | Different thresholds for snow, ice, graupel and graupel again, respectively, leading to conversion of supercooled rain into graupel. A higher value gives more supercooled rain, but may be less physically realistic.
RFRMIN(7) | Tuning factor for the collisions between rain and snow. Higher values give less supercooled rain and more snow. Zero means that those collisions are disregarded (probably OK).

Full list of RFRMIN variables (included here for completeness; not all are OCND2-related)

Variable | Value | Description
RFRMIN(1) | 1.0E-5 | Higher value means more supercooled rain and somewhat less graupel.
RFRMIN(2) | 1.0E-8 | As above.
RFRMIN(3) | 3.0E-7 | As above.
RFRMIN(4) | 3.0E-7 | As above.
RFRMIN(5) | 1.0E-7 | Higher value means less graupel and more snow. Experimental.
RFRMIN(6) | 0.15 | Higher value means more graupel and less snow. Experimental.
RFRMIN(7) | 0. | Higher value means less supercooled rain and somewhat more snow.
RFRMIN(8) | 1. | > 1 increases melt of graupel, < 1 decreases it. Experimental.
RFRMIN(9) | 1. | > 1 increases the IN concentration, < 1 decreases it.
RFRMIN(10) | 10. | > 10 means faster Kogan autoconversion, < 10 slower; only active for LKOGAN=T. This originates from the fact that the formula was based on an LES model with a higher horizontal resolution. It is easy to show that with a coarser resolution and an inhomogeneous cloud liquid field one has to add a compensating factor in order to retain the original mean autoconversion. Tests show that a lower value, e.g. 3, would be better and more in line with what ECMWF is using. The value 10 is, to some extent, a way of decreasing fog, but there are now many other ways to reduce fog.
RFRMIN(11) | 1. | Setting e.g. 0.01 means that the subgrid-scale fraction of cloud water is used, with minimum cloud fraction 0.01. Only active for LKOGAN=T.
RFRMIN(12) | 0. | The level of supersaturation in the ice-supersaturated part of the grid box needed to be treated in the ice microphysics. (Greg Thompson recommends a higher value, 0.05-0.25; in MetCoOp 0.05 is used.) A higher value means faster computations, but also that any ice deposition in clear sky is neglected for ice supersaturation between zero and RFRMIN(12). Only used with OCND2.
RFRMIN(13) | 1.0E-15 | The mixing ratio of any water species needed to be treated in the ice microphysics. The value 1.0E-15 is taken from old HIRLAM. Only used with OCND2.
RFRMIN(14) | 120. | Time scale for conversion of large ice crystals to snow. Only used with LMODICEDEP (experimental).
RFRMIN(15) | 1.0E-4 | Diameter for conversion of ice crystals into snow. A larger value gives more ice and less snow.
RFRMIN(16) | 0. | "C" parameter for the size distribution of snow (constant for number concentration, N=Cλ^x). Only active if non-zero. Experimental.
RFRMIN(17) | 0. | "x" parameter for the size distribution of snow (slope for number concentration, N=Cλ^x). Only active if RFRMIN(16) is non-zero. Experimental.
RFRMIN(18) | 0. | With RFRMIN(18)=1, snow and graupel melt are based on the wet-bulb temperature instead of the temperature, which leads to slower melting. Experimental.
RFRMIN(19) | 0. | Threshold cloud thickness for the StCu/Cu transition [m]. Only active for the EDMF scheme and if non-zero, but with a very small effect.
RFRMIN(20) | 0. | Threshold cloud thickness used in the shallow/deep decision [m]. Only active for the EDMF scheme and if non-zero; a higher value gives more shallow convection and less deep model-resolved convection.
RFRMIN(21) | 1. | Tuning parameter for ice clouds. A larger value gives more cirrus and other ice clouds.
RFRMIN(22) | 1. | Tuning parameter for CDNC at the lowest model level. A lower value gives lower CDNC; RFRMIN(22)=0.5 means CDNC = old CDNC x 0.5.
RFRMIN(23) | 0.5 | Tuning parameter, only active with LHGT_QS. The lower limit for reduction of VSIGQSAT.
RFRMIN(24) | 1.5 | Tuning parameter, only active with LHGT_QS. The upper limit for increase of VSIGQSAT.
RFRMIN(25) | 30. | Tuning parameter, only active with LHGT_QS. The level thickness for which VSIGQSAT is unchanged with LHGT_QS.
RFRMIN(26) | 0. | If > 0.01, it replaces the default CDNC everywhere. RFRMIN(26)=50E6 (beware that it is in m-3!) gives CDNC = 50 cm-3 at the reference level (1000 hPa) and RFRMIN(26) x pressure / reference pressure elsewhere.
RFRMIN(27) | 0. | Minimum assumed temperature with respect to the Meyers IN concentration (K). Gives less IN concentration for temperatures below the value set. Experimental!
RFRMIN(28) | 0. | Currently not used.
RFRMIN(29) | 0. | If > 0 and RFRMIN(22) > 0, it gives the upper limit in metres for which the reduction of CDNC has an effect. A linear decrease from the lowest level to RFRMIN(29) metres is assumed.
RFRMIN(30) | 1. | If not unity, CDNC is reduced/increased over sea by a factor RFRMIN(30) at the lowest model level, linearly reaching "no change" at RFRMIN(29) m height. If RFRMIN(29) is unset, RFRMIN(30) only affects the lowest model level.
RFRMIN(31:38) | 0. | Currently not used.
RFRMIN(39) | 0.25 | Reduction factor for deposition/evaporation of graupel. Only used when OCND2=T and LMODICEDEP=F.
RFRMIN(40) | 0.15 | Reduction factor for deposition/evaporation of snow. Only used when OCND2=T and LMODICEDEP=F.
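The height-dependent CDNC scaling described for RFRMIN(29) and RFRMIN(30) can be sketched as follows. The function name and argument handling are ours; this illustrates the description above, not the model code.

```python
def cdnc_sea_factor(z, rfrmin29, rfrmin30, lowest_level=False):
    """Multiplicative CDNC factor over sea at height z (metres).

    RFRMIN(30) applies at the lowest model level and relaxes linearly to
    1.0 ("no change") at RFRMIN(29) metres. If RFRMIN(29) is unset (0.),
    only the lowest model level is affected.
    """
    if rfrmin30 == 1.0:
        return 1.0                      # unity: no scaling anywhere
    if rfrmin29 <= 0.0:
        return rfrmin30 if lowest_level else 1.0
    if z >= rfrmin29:
        return 1.0                      # at or above RFRMIN(29): no change
    # linear ramp between RFRMIN(30) at z = 0 and 1.0 at z = RFRMIN(29)
    return rfrmin30 + (1.0 - rfrmin30) * (z / rfrmin29)
```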
    +ENDIF

    The main OCDN2 modifications are

    1. Tuning factors for reducing the rate of deposition/evaporation of snow and graupel. See code block “1.2 COMPUTE SOME CONSTANT PARAMETERS” in rain_ice_old.F90 or in ice4_slow.F90. The tuning factors are then used later in rain_ice_old.F90, see code block “3.4.3 compute the deposition on rs: RVDEPS" for snow and in “3.4.6 compute the deposition on rg: RVDEPG” for graupel. In the rain_ice.F90 framework it is all done in the routine ice4_slow.F90. More information about the tuning parameters is included later in this documentation.
    2. Mask to limit computation: Set by tuning parameters in the code block “1.2 COMPUTE SOME CONSTANT PARAMETERS“ in rain_ice_old.F90 or in aro_rain_ice.F90 within the rain_ice.F90 framework. For OCND2=FALSE the limits are hard-coded.
    3. The cloud ice crystal concentration: Modified with OCND2, see code block “3.1.1 compute the cloud ice concentration” in rain_ice_old or ice4_nucleation.F90 within the rain_ice.F90 framework.
    4. Turn large cloud ice crystals into snow: See code block “3.4.5 B:” in rain_ice_old.F90 or ice4_fast_si.F90 within the rain_ice.F90 framework.
    5. Omit collision between snow and graupel since the effect in nature is very small and thus better to omit and speed up the computation a little. See code block “6.2.5” in rain_ice_old.F90 or ice4_fast_rg.F90 respectively
    6. Sub grid-scale calculation of deposition/evaporation of cloud ice. See code block “7.2 Bergeron-Findeisen effect: RCBERI” in rain_ice_old.F90 or ice4_fast_si.F90 for the rain_ice.F90 set up.

    There is also an important difference in condensation.F90: With OCND2, only liquid cloud water is handled within the statistical cloud scheme, not both ice and water as is the case with OCND2=F. With OCND2=F, the total cloud cover is calculated directly from the statistical cloud scheme. With OCND2=T, the total cloud cover is calculated as a sum of a liquid part, which is basically just the cloud cover from the statistical cloud scheme and an ice part which is based on the relative humidity with respect to ice and on the content of solid water species.

    There are two new routines for OCND2:

    1. icecloud.F90 is used for the sub grid-scale handling of relative humidity with respect to ice and thus for ice clouds. It is called from condensation.F90.
    2. ice4_fast_si.F90 is only used by the newer rain_ice.F90 routine. As already mentioned, it deals with deposition/evaporation of cloud ice.

    Tuning parameters

    The tuning parameters used specifically for OCND2 can be divided into three categories:

    Only having an effect if OCND2 is set to TRUE and used for SPP (April 2023).

    VariableDescription
    RFMIN(21)Tuning factor for ice clouds, such as cirrus. A larger value means a larger effect of the presence of solid water and thus more ice clouds. (The value is somewhat dependent on what kind of measurement one compares with, and how thin a cirrus cloud should be to be counted as a cloud. A range of 0.5 to 3 should be enough.)

    Only having effect if OCND2 is set to TRUE but currently (April 2023) not used in SPP.

    | Variable | Description |
    | --- | --- |
    | RFRMIN(12) | Threshold supersaturation with respect to ice in the supersaturated part of the grid-box for treatment in the microphysics computation. A larger value gives more supersaturation and a somewhat faster computation. Values that are too large are physically unrealistic, but there seems to be no consensus about the best value. |
    | RFRMIN(13) | Threshold mixing ratio for the different non-vapour water species treated in the microphysics computation. Larger values result in faster computation, but possibly important processes, active when only small mixing ratios of water species are present, may be missed. |
    | RFRMIN(15) | Ice crystal diameter (m) for conversion from cloud ice to snow. Larger values lead to more ice and less snow. |
    | RFRMIN(27) | Experimental! Minimum temperature (K) used for the Meyers ice number concentration. Larger values give less ice for temperatures below RFRMIN(27). |
    | RFRMIN(39) | Speed factor for the deposition/evaporation rate of graupel. Larger values give faster deposition/evaporation. |
    | RFRMIN(40) | Speed factor for the deposition/evaporation rate of snow. Larger values give faster deposition/evaporation. |

    Parameters that have an effect even when OCND2 is not used, but were designed for OCND2:

    | Variable | Description |
    | --- | --- |
    | RFRMIN(1), RFRMIN(2), RFRMIN(3) and RFRMIN(4) | Different thresholds for snow, ice, graupel and graupel again, respectively, leading to conversion of supercooled rain into graupel. A higher value gives more supercooled rain, but may be less physically realistic. |
    | RFRMIN(7) | Tuning factor for the collisions between rain and snow. Higher values give less supercooled rain and more snow. Zero means that those collisions are disregarded (probably OK). |

    Full list of RFRMIN variables (included here for completeness, not all OCND2-related)

    | Variable | Value | Description |
    | --- | --- | --- |
    | RFRMIN(1) | 1.0E-5 | Higher value means more supercooled rain and somewhat less graupel. |
    | RFRMIN(2) | 1.0E-8 | "" |
    | RFRMIN(3) | 3.0E-7 | "" |
    | RFRMIN(4) | 3.0E-7 | "" |
    | RFRMIN(5) | 1.0E-7 | Higher value means less graupel and more snow. Experimental. |
    | RFRMIN(6) | 0.15 | Higher value means more graupel and less snow. Experimental. |
    | RFRMIN(7) | 0. | Higher value means less supercooled rain and somewhat more snow. |
    | RFRMIN(8) | 1. | > 1 increases melt of graupel, < 1 decreases it. Experimental. |
    | RFRMIN(9) | 1. | > 1 means increased IN concentration, < 1 decreased. |
    | RFRMIN(10) | 10. | > 10 means faster Kogan autoconversion, < 10 slower; only active for LKOGAN=T. This originates from the fact that the formula was based on an LES model with a higher horizontal resolution. It is easy to show that, with a coarser resolution and an inhomogeneous cloud liquid field, one has to add a compensating factor in order to retain the original mean autoconversion. Tests show that a lower value, e.g. 3, would be better and more in line with what ECMWF is using. The value 10 is, to some extent, a way of decreasing fog, but there are now many other ways to reduce fog. |
    | RFRMIN(11) | 1. | Setting e.g. 0.01 means that the subgrid-scale fraction of cloud water is used, with a minimum cloud fraction of 0.01. Only active for LKOGAN=T. |
    | RFRMIN(12) | 0. | The level of supersaturation in the ice-supersaturated part of the grid-box needed to be treated in ice microphysics. (Greg Thompson recommends a higher value, 0.05-0.25; in MetCoOp 0.05 is used.) A higher value means faster computations, but also that any ice deposition in clear sky is neglected for ice supersaturation between zero and RFRMIN(12). Only used with OCND2. |
    | RFRMIN(13) | 1.0E-15 | The mixing ratio of any water species needed to be treated in ice microphysics. The value 1.0E-15 is taken from old HIRLAM. Only used with OCND2. |
    | RFRMIN(14) | 120. | Time scale for conversion of large ice crystals to snow. Only used with LMODICEDEP (experimental). |
    | RFRMIN(15) | 1.0E-4 | Diameter for conversion of ice crystals into snow. A larger value gives more ice and less snow. |
    | RFRMIN(16) | 0. | “C” parameter for the size distribution of snow (constant for number concentration, N=Cλ^x). Only active if non-zero. Experimental. |
    | RFRMIN(17) | 0. | “x” parameter for the size distribution of snow (slope for number concentration, N=Cλ^x). Only active if RFRMIN(16) is non-zero. Experimental. |
    | RFRMIN(18) | 0. | With RFRMIN(18)=1, snow and graupel melt are based on wet-bulb temperature instead of temperature, which leads to slower melting. Experimental. |
    | RFRMIN(19) | 0. | Threshold cloud thickness for the StCu/Cu transition [m]. Only active for the EDMF scheme and if non-zero, but very small effect. |
    | RFRMIN(20) | 0. | Threshold cloud thickness used in the shallow/deep decision [m]. Only active for the EDMF scheme and if non-zero; a higher value gives more shallow convection and less deep model-resolved convection. |
    | RFRMIN(21) | 1. | Tuning parameter for ice clouds. A larger value gives more cirrus and other ice clouds. |
    | RFRMIN(22) | 1. | Tuning parameter for CDNC at the lowest model level. A lower value gives lower CDNC; RFRMIN(22)=0.5 means CDNC = old CDNC x 0.5. |
    | RFRMIN(23) | 0.5 | Tuning parameter only active with LHGT_QS. The lower limit for reduction of VSIGQSAT. |
    | RFRMIN(24) | 1.5 | Tuning parameter only active with LHGT_QS. The upper limit for increase of VSIGQSAT. |
    | RFRMIN(25) | 30. | Tuning parameter only active with LHGT_QS. The level thickness for which VSIGQSAT is unchanged with LHGT_QS. |
    | RFRMIN(26) | 0. | If > 0.01, it replaces the default CDNC everywhere. So RFRMIN(26)=50E6 (note that it is in m-3!) gives CDNC = 50 cm-3 at the reference level (1000 hPa) and RFRMIN(26) x pressure / reference pressure elsewhere. |
    | RFRMIN(27) | 0. | Minimum assumed temperature with respect to the Meyers IN concentration (K). Gives less IN concentration for temperatures below the value set. Experimental! |
    | RFRMIN(28) | 0. | Currently not used. |
    | RFRMIN(29) | 0. | If > 0 and RFRMIN(22) > 0, it gives the upper limit in metres for which the reduction of CDNC has an effect. A linear decrease from the lowest level to RFRMIN(29) metres is assumed. |
    | RFRMIN(30) | 1. | If not unity, CDNC is reduced/increased over sea by a factor RFRMIN(30) for the lowest model level, linearly reaching "no change" at RFRMIN(29) m height. If RFRMIN(29) is unset, RFRMIN(30) only affects the lowest model level. |
    | RFRMIN(31:38) | 0. | Currently not used. |
    | RFRMIN(39) | 0.25 | Reduction factor for deposition/evaporation of graupel. Only used when OCND2=T and LMODICEDEP=F. |
    | RFRMIN(40) | 0.15 | Reduction factor for deposition/evaporation of snow. Only used when OCND2=T and LMODICEDEP=F. |
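
    The RFRMIN values above are plain Fortran namelist entries. As a hedged sketch (the namelist group name is an assumption here; check which group carries RFRMIN in your own cycle before use), overriding a few values could look like:

```fortran
! Hypothetical namelist fragment: the group name NAMPARAR is an assumption,
! the values are illustrative only.
&NAMPARAR
  RFRMIN(7)=0.1,    ! allow slightly fewer rain-snow collisions
  RFRMIN(21)=2.0,   ! more cirrus / ice cloud
  RFRMIN(39)=0.3,   ! slightly faster graupel deposition/evaporation
/
```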

    Parameter list and GRIB definitions

    HARMONIE system output

    The HARMONIE system writes its primary output, in FA format, to the upper air history files ICMSHHARM+llll and the SURFEX history files ICMSHHARM+llll.sfx, where HARM is the four-character experiment identifier set in the configuration file config_exp.h, and llll is normally the current forecast length in hours. These files are designed to be complete snapshots of the respective model state for a particular point in time. In addition, more model output, including post-processing/diagnostic fields such as model diagnostics or pressure level diagnostics, can be written out during the forecast model integration, also in FA format, as PFHARMDOMAIN+llll. The FA files can be considered internal format files. All of them can be converted to GRIB files during the run for external usage.
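
    The naming scheme above can be illustrated with a small sketch. The helper functions are illustrative, not HARMONIE code, and the zero-padding of llll to four digits is an assumption:

```python
# Hypothetical sketch of HARMONIE output file naming; the four-digit
# zero-padding of the lead time is an assumption, not verified code.
def history_file(exp: str, lead_h: int, surfex: bool = False) -> str:
    """Upper-air (or SURFEX) history file name, e.g. ICMSHHARM+0012."""
    assert len(exp) == 4, "experiment identifier is four characters"
    name = f"ICMSH{exp}+{lead_h:04d}"
    return name + ".sfx" if surfex else name

def fullpos_file(exp: str, domain: str, lead_h: int) -> str:
    """Post-processed (fullpos) file name, e.g. PFHARMDOMAIN+0012."""
    return f"PF{exp}{domain}+{lead_h:04d}"

assert history_file("HARM", 12) == "ICMSHHARM+0012"
assert history_file("HARM", 12, surfex=True) == "ICMSHHARM+0012.sfx"
assert fullpos_file("HARM", "DOMAIN", 6) == "PFHARMDOMAIN+0006"
```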

    GRIB1 table 2 version in HARMONIE

    To avoid conflicts with archived HIRLAM data, HARMONIE uses version 253 of table 2. The table is based on the standard WMO version 3 of table 2, and positions 000-127 are kept the same as in the WMO standard. Note that accumulated and instantaneous versions of the same parameter differ only by the time range indicator. It is thus not sufficient to specify parameter, type and level when you refer to an accumulated parameter; the time range indicator has to be included as well.
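
    The point about the time range indicator can be shown with a minimal GRIB1 Product Definition Section (PDS) sketch: indicatorOfParameter (octet 9), level type (octet 10) and level (octets 11-12) can be identical for an instantaneous and an accumulated field, and only timeRangeIndicator (octet 21) separates them. The helper functions below are illustrative, not part of HARMONIE:

```python
# Minimal GRIB1 PDS sketch (octet layout per the GRIB edition 1 spec);
# only the fields relevant to the discussion are set.
def pds_key(pds: bytes) -> tuple:
    """Extract (parameter, levelType, level, timeRangeIndicator) from a PDS."""
    param = pds[8]                      # octet 9:  indicatorOfParameter
    level_type = pds[9]                 # octet 10: indicatorOfTypeOfLevel
    level = (pds[10] << 8) | pds[11]    # octets 11-12: level
    tri = pds[20]                       # octet 21: timeRangeIndicator
    return param, level_type, level, tri

def make_pds(param: int, level_type: int, level: int, tri: int) -> bytes:
    """Build a minimal 28-octet PDS with only the fields we care about set."""
    pds = bytearray(28)
    pds[8] = param
    pds[9] = level_type
    pds[10:12] = level.to_bytes(2, "big")
    pds[20] = tri
    return bytes(pds)

# Instantaneous rain (tri=0) vs accumulated rain (tri=4): same parameter 181,
# same level type 105 (heightAboveGround), same level 0 -- only tri differs.
inst = pds_key(make_pds(181, 105, 0, 0))
acc = pds_key(make_pds(181, 105, 0, 4))
assert inst[:3] == acc[:3]
assert inst[3] != acc[3]
```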

    The translation of SURFEX files to GRIB1 is still incomplete and contains several WMO violations. This is not changed in the current release but will be revised later. However, the upper air history file also includes the most common surface parameters and should be sufficient for most users.

    The current table 2 version 253 definition files for gribapi can be found in `util/glgrib_api/definitions/`. These local definition files assume centre=233 (Dublin) and should be copied to your own GRIB-API installation. You are strongly recommended to set your own generating centre code for operational usage of the data.

    GRIB2 in HARMONIE

    The possibility to convert to GRIB2 was introduced in release-43h2. So far the conversion is restricted to atmospheric history and fullpos files. To get the output in GRIB2, set ARCHIVE_FORMAT=GRIB2 in ecf/config_exp.h. Please note that even if ARCHIVE_FORMAT=GRIB2 is selected, SURFEX files will still be converted to GRIB1 (for the time being). To convert from GRIB1 to GRIB2 using grib_filter, we have to tell EcCodes how to translate the parameters. This is done by using the internal HARMONIE tables and setting

    export ECCODES_DEFINITION_PATH=$SOME_PATH_TO_GL/gl/definitions:$SOME_PATH_TO_ECCODES/share/eccodes/definitions

    Note that there are a few parameters that are not translated to GRIB2, and those have to be excluded explicitly.
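
    As a hedged illustration of such an exclusion, a grib_filter rules file might look like the following (the indicator values 251 and 252 are placeholders, not the actual untranslated parameters, and the output file name is arbitrary):

```
# rules.filter -- sketch: skip parameters without a GRIB2 translation
# (placeholder indicators) and re-encode everything else as edition 2.
if (indicatorOfParameter != 251 && indicatorOfParameter != 252) {
  set edition = 2;
  write "output.grib2";
}
```

    With ECCODES_DEFINITION_PATH exported as above, this would be run as `grib_filter rules.filter <GRIB1 file>`.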

    List of parameters

    Header abbreviations used in the tables:

    | abbr. | description | see table |
    | --- | --- | --- |
    | lvT | levelType | level types |
    | iOP | indicatorOfParameter | indicator of parameter |
    | d | discipline | |
    | pC | parameterCategory | |
    | pN | parameterNumber | |
    | lev | level | |
    | sT | stepType | time range indicator |

    3D model state variables on model levels (1-NLEV), levelType=hybrid

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SNNNHUMI.SPECIFIqhushy510101inskg/kgSpecific humidity
    SNNNLIQUID_WATERcwat_condclwhy7601831inskg/kgSpecific cloud liquid water content
    SNNNSOLID_WATERciwc_condclihy5801841inskg/kgSpecific cloud ice water content
    SNNNSNOWsnow_cond#hy18401861inskg/kgSpecific snow water content
    SNNNRAINrain_cond#hy18101851inskg/kgSpecific rain water content
    SNNNGRAUPELgrpl_cond#hy20101321inskg/kgSpecific graupel
    SNNNTKEtketkehy200019111insJ/kgTurbulent Kinetic Energy
    SNNNCLOUD_FRACTItccclthy71061921ins0-1Total cloud cover
    SNNNPRESS.DEPARTpdep#hy2120381insPaPressure departure
    SNNNTEMPERATUREttahy110001insKTemperature
    SNNNVERTIC.DIVERvdiv#hy213021921inss-1Vertical Divergence
    SNNNWIND.U.PHYSuuahy330221insm/su-component of wind
    SNNNWIND.V.PHYSvvahy340231insm/sv-component of wind

    2D Surface, prognostic/diagnostic near-surface and soil variables, levelType=heightAboveGround

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SURFPRESSIONprespshag10300insPaSurface pressure
    SURFTEMPERATUREtts_radhag110000insKSurface temperature
    CLSTEMPERATUREttashag110002insKTemperature at 2m
    CLSMAXI.TEMPERATtmaxtasmaxhag150002maxKMaximum temperature (FREQ_RESET_TEMP)
    CLSMINI.TEMPERATtmintasminhag160002minKMinimum temperature (FREQ_RESET_TEMP)
    CLSVENT.ZONALuuashag3302210insm/su-component of wind at 10m, relative to model coordinates
    CLSVENT.MERIDIENvvashag3402310insm/sv-component of wind at 10m, relative to model coordinates
    CLSHUMI.SPECIFIQqhusshag510102inskg/kgSpecific humidity at 2m
    CLSHUMI.RELATIVErhurshag52011922ins0-1Relative humidity at 2m
    SURFRESERV.NEIGEsdwesnwhag6501600inskg/m2Snow depth water equivalent
    CLPMHAUT.MOD.XFUmldzmlahag6701930insmHeight (in meters) of the PBL out of the model
    SURFNEBUL.TOTALEtccclt_inshag71061920ins0-1Total cloud cover
    SURFNEBUL.CONVECcccclc_inshag72061930ins0-1Convective cloud cover
    SURFNEBUL.BASSElcccll_inshag73061940ins0-1Low cloud cover
    SURFNEBUL.MOYENNmccclm_inshag74061950ins0-1Medium cloud cover
    SURFNEBUL.HAUTEhccclh_inshag75061960ins0-1High cloud cover
    SURFRAYT.SOLAIREswavr#hag1160470insW/m2Instantaneous surface solar radiation (SW down global) Parameter identifier was 116, again is???
    SURFRAYT.TERRESTlwavr#hag1150540insW/m2Instantaneous longwave radiation flux
    SURFCAPE.MOD.XFUcapecapehag1600760insJ/kgModel output CAPE (not calculated by AROME physics)
    SURFDIAGHAILxhail#hag161012030ins0-1AROME hail diagnostic, LXXDIAGH = .TRUE.
    CLSU.RAF.MOD.XFUugstugshag162022310maxm/sU-momentum of gusts from the model. LXXGST = .TRUE. in NAMXFU. gives gust between current and previous output time step (FREQ_RESET_GUST)
    CLSV.RAF.MOD.XFUvgstvgshag163022410maxm/sV-momentum of gusts from the model. LXXGST = .TRUE. in NAMXFU. gives gust between current and previous output time step (FREQ_RESET_GUST)
    SURFINSPLUIErain#hag18101650inskg/m2Instantaneous rain
    SURFINSNEIGEsnow#hag18401530inskg/m2Instantaneous snow
    SURFINSGRAUPELgrpl#hag20101750inskg/m2Instantaneous graupel
    CLSMINI.HUMI.RELrmn2m#hag2410112min0-1Minimum relative moisture at 2m over 3h
    CLSMAXI.HUMI.RELrmx2m#hag2420112max0-1Maximum relative moisture at 2m over 3h
    CLSRAFALES.POSfgwsgsmaxhag228022210maxm/sGust wind speed

    2D Surface, accumulated near-surface and soil variables

    Note that all these are coded with stepType=accum

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    S065RAYT SOL CLcssw#hy130041165accJ/m2SW net clear sky rad
    S065RAYT THER CLcslw#hy13105665accJ/m2LW net clear sky rad
    SURFACCGRAUPELgrplprgrplhag20101750acckg/m2Accumulated graupel
    SURFACCNEIGEsnowprsnhag18401530acckg/m2Accumulated snowfall
    SURFACCPLUIErainprrainhag18101650acckg/m2Accumulated rain
    SURFDIR NORM IRRdneridshag1403630accJ/m2Direct normal exposure
    SURFFLU.CHA.SENSshfhfsshag12200110accJ/m2Sensible heat flux
    SURFFLU.LAT.MEVAlhehfls_evahag132011930accJ/m2Latent heat flux through evaporation
    SURFFLU.LAT.MSUBlhsubhfls_sblhag244012020accJ/kgLatent Heat Sublimation
    SURFFLU.MEVAP.EAwevapevspsblhag2450160acckg/m2Water evaporation
    SURFFLU.MSUBL.NEsnsubsbl_snowhag24601620acckg/m2Snow sublimation
    SURFFLU.RAY.SOLAnswrsrsnshag1110490accJ/m2Net shortwave radiation flux (surface)
    SURFFLU.RAY.THERnlwrsrlnshag1120550accJ/m2Net longwave radiation flux (surface)
    SURFRAYT DIR SURswavrrsdsdirhag1160470accJ/m2Shortwave radiation flux
    SURFRAYT SOLA DEgradrsdshag1170430accJ/m2Global radiation flux
    SURFRAYT THER DElwavrrldshag1150540accJ/m2Longwave radiation flux
    SURFTENS.TURB.MEvflxtauvhag125021990accN/m2Momentum flux, v-component
    SURFTENS.TURB.ZOuflxtauuhag124021980accN/m2Momentum flux, u-component

    2D TOA, diagnostic and accumulated variables, levelType=nominalTop

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SOMMFLU.RAY.SOLAnswrtrsntnt1130490accJ/m2Net shortwave radiation flux(atmosph.top)
    SOMMFLU.RAY.THERnlwrtrlntnt1140550accJ/m2Net longwave radiation flux(atmosph.top)
    SOMMRAYT.SOLAIREnswrt#nt1130490insW/m2Net shortwave radiation flux(atmosph.top)
    SOMMRAYT.TERRESTnlwrt#nt1140550insW/m2Net longwave radiation flux(atmosph.top)
    SOMMRAYT SOL CLcsswrsntcsnt13004110accJ/m2TOA Net shortwave clear sky radiation(atmosph.top)
    SOMMRAYT THER CLcslwrlntcsnt1310560accJ/m2TOA Net longwave clear sky radiation(atmosph.top)
    TOPRAYT DIR SOMswavrrsdtnt1160470accJ/m2TOA Accumulated SW down radiation Parameter identifier was 117
    SOMMTBOZCLEARbtozcs#nt170-1-1-10-KBrightness temperature OZ clear
    SOMMTBOZCLOUDbtozcl#nt171-1-1-10-KBrightness temperature OZ cloud
    SOMMTBIRCLEARbtircs#nt172-1-1-10-KBrightness temperature IR clear
    SOMMTBIRCLOUDbtircl#nt173-1-1-10-KBrightness temperature IR cloud
    SOMMTBWVCLEARbtwvcs#nt174-1-1-10-KBrightness temperature WV clear
    SOMMTBWVCLOUDbtwvcl#nt175-1-1-10-KBrightness temperature WV cloud

    2D Surface, Postprocessed variables (fullpos)

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SURFCAPE.POS.F00capecapehag1600760insJ/kgConvective available potential energy (CAPE)
    SURFCIEN.POS.F00cincinhag1650770insJ/kgConvective inhibition (CIN)
    SURFLIFTCONDLEVlcl#ac1670360insmLifting condensation level (LCL)
    SURFFREECONVLEVlfc#lfc1680360insmLevel of free convection (LFC)
    SURFEQUILIBRLEVlnb#lnb1690360insmLevel of neutral buoyancy (LNB)

    2D Surface, constant near-surface and soil variables

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SPECSURFGEOPOTENzphis_shag60340insm2/s2Geopotential relative to mean sea level. "... contains a GRID POINT orography which is the interpolation of the departure orography"
    SURFIND.TERREMERlsmlsmhag812000ins0-1Land-sea mask
    SURFAEROS.SEAaers#hag2510131920inskg/kgSurface aerosol sea (Marine aerosols, locally defined GRIB)
    SURFAEROS.LANDaerl#hag2520131930inskg/kgSurface aerosol land (Continental aerosols, locally defined GRIB)
    SURFAEROS.SOOTaerc#hag2530131940inskg/kgSurface carbon aerosol (Carbone aerosols, locally defined GRIB)
    SURFAEROS.DESERTaerd#hag2540131950inskg/kgSurface aerosol desert (Desert aerosols, locally defined GRIB)
    SURFAEROS.VOLCAN##hag197-1-1-1-1Surface aerosol volcan (Stratospheric ash, to be locally defined GRIB)
    SURFAEROS.SULFAT##hag198-1-1-1-1Surface aerosol sulfate (Stratospheric sulfate, to be locally defined GRIB)
    SURFA.OF.OZONEao#hag2480141920inskg/kgA Ozone, First ozone profile (A), locally defined GRIB
    SURFB.OF.OZONEbo#hag2490141930inskg/kgB Ozone, Second ozone profile (B), locally defined GRIB
    SURFC.OF.OZONEco#hag2500141940inskg/kgC Ozone, Third ozone profile (C), locally defined GRIB
    PROFTEMPERATUREslt#dbl8523180insKSoil Temperature
    PROFRESERV.EAUsm#dbl8623200inskg/m2Deep Soil Wetness
    PROFPROP.RMAX.EAswv#dbl23823250inskg/m2Climate relaxed deep soil wetness
    PROFRESERV.GLACEwsoice#dbl19323220inskg/m2Deep soil ice

    2D variables on special surfaces

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    KT273ISOT_ALTITh#isot803627315insmAltitude of 0-degree isotherm
    KT263ISOT_ALTITh#isot803626315insmAltitude of -10-degree isotherm
    SURFISOTPW0.MALTh#isot0wb80360insmAltitude of iso-tprimw=0
    SURFTOT.WAT.VAPOwvintprwea5401640inskg/m2Total column integral water vapour
    WFPOWERINSwfpower_inswfpower_insea21102390insMWWind power production, instantaneous (LWINDFARM=.TRUE. in NAMPHY)
    WFPOWERACCwfpower_accwfpower_accea21102390accMJWind power production, accumulated (LWINDFARM=.TRUE. in NAMPHY)

    Postprocessed variables on different surface types

    Through the postprocessing software fullpos, HARMONIE offers a number of variables postprocessed on different surface types. For the current choice of variables, surfaces and levels, please see scr/Select_postp.pl.

    State variables and diagnostics on pressure levels, leveltype=isobaricInhPa

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    PNNNNNWIND.U.PHYuuapl33022NNNNNinsm/su-component of wind
    PNNNNNWIND.V.PHYvvapl34023NNNNNinsm/sv-component of wind
    PNNNNNTEMPERATURttapl11000NNNNNinsKTemperature
    PNNNNNHUMI.SPECIqhuspl51010NNNNNinskg/kgSpecific humidity
    PNNNNNLIQUID_WATcwat_condclwpl760183NNNNNinskg/kgSpecific cloud liquid water content
    PNNNNNSOLID_WATEciwc_condclipl580184NNNNNinskg/kgSpecific cloud ice water content
    PNNNNNCLOUD_FRACtcc#pl7106192NNNNNins0-1Total cloud cover
    PNNNNNSNOWsnow_cond#pl1840186NNNNNinskg/kgSpecific snow water content
    PNNNNNRAINrain_cond#pl1810185NNNNNinskg/kgSpecific rain water content
    PNNNNNGRAUPELgrpl_cond#pl2010132NNNNNinskg/kgSpecific graupel
    PNNNNNGEOPOTENTIzphipl6034NNNNNinsm2/s2Geopotential
    PNNNNNHUMI_RELATrhurpl5201192NNNNNins0-1Relative humidity
    PNNNNNTHETA_PRIMpaptthetaEpl14003NNNNNinsKPseudo-adiabatic potential temperature
    PNNNNNTHETA_VIRTvptmp#pl1760015NNNNNinsKVirtual potential temperature
    PNNNNNVERT.VELOCwwapl40029NNNNNinsm/sGeometrical vertical velocity
    PNNNNNPOT_VORTICpvpvpl40214NNNNNinsK m2/kg/sPotential vorticity
    PNNNNNABS_VORTICabsv#pl410210NNNNNinss-1Absolute vorticity
    PNNNNNDIVERGENCEd#pl440213NNNNNinss-1Relative divergence

    State variables and diagnostics on height levels, levelType=heightAboveGround

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    HNNNNNWIND.U.PHYuuahag33022NNNNNinsm/su-component of wind
    HNNNNNWIND.V.PHYvvahag34023NNNNNinsm/sv-component of wind
    HNNNNNTEMPERATURttahag11000NNNNNinsKTemperature
    HNNNNNLIQUID_WATcwat_condclwhag760183NNNNNinskg/kgSpecific cloud liquid water content
    HNNNNNSOLID_WATEciwc_condclihag580184NNNNNinskg/kgSpecific cloud ice water content
    HNNNNNCLOUD_FRACtccclthag7106192NNNNNins0-1Total cloud cover
    HNNNNNSNOWsnow_cond#hag1840186NNNNNinskg/kgSpecific snow water content
    HNNNNNRAINrain_cond#hag1810185NNNNNinskg/kgSpecific rain water content
    HNNNNNGRAUPELgrpl_cond#hag2010132NNNNNinskg/kgSpecific graupel
    HNNNNNHUMI_RELATrhurhag5201192NNNNNins0-1Relative humidity
    HNNNNNPRESSUREpresphag1030NNNNNinsPaPressure

    State variables and diagnostics on PV levels, GRIB1 level type 117, levelType=potentialVorticity

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    VNNNGEOPOTENTIELz#pv6034NNNinsm2/s2Geopotential
    VNNNTEMPERATUREt#pv11000NNNinsKTemperature
    VNNNPRESSUREpres#pv1030NNNinsPaPressure
    VNNNHUMI_RELATIVr#pv5201192NNNins0-1Relative humidity
    VNNNHUMI.SPECIFIq#pv51010NNNinskg/kgSpecific humidity
    VNNNWIND.U.PHYSu#pv33022NNNinsm/su-component of wind
    VNNNWIND.V.PHYSv#pv34023NNNinsm/sv-component of wind
    VNNNVITESSE_VERTomega#pv39028NNNinsPa/sPressure vertical velocity (DYNAMICS=h)
    VNNNVERT.VELOCITw#pv40029NNNinsm/sGeometrical vertical velocity (DYNAMICS=nh)
    VNNNTEMPE_POTENTpt#pv13002NNNinsKPotential temperature
    VNNNABS_VORTICITabsv#pv410210NNNinss-1Absolute vorticity
    VNNNDIVERGENCEd#pv440213NNNinss-1Relative divergence
    VNNNTHETAPRIMWpapt#pv14003NNNinsKPseudo-adiabatic potential temperature

    State variables and diagnostics on Theta levels, GRIB1 level type 113, levelType=theta

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    TNNNGEOPOTENTIELz#th6034NNNinsm2/s2Geopotential
    TNNNTEMPERATUREt#th11000NNNinsKTemperature
    TNNNPRESSUREpres#th1030NNNinsPaPressure
    TNNNHUMI_RELATIVr#th5201192NNNins0-1Relative humidity
    TNNNHUMI.SPECIFIq#th51010NNNinskg/kgSpecific humidity
    TNNNWIND.U.PHYSu#th33022NNNinsm/su-component of wind
    TNNNWIND.V.PHYSv#th34023NNNinsm/sv-component of wind
    TNNNVITESSE_VERTomega#th39028NNNinsPa/sPressure vertical velocity (DYNAMICS=h)
    TNNNVERT.VELOCITw#th40029NNNinsm/sGeometrical vertical velocity (DYNAMICS=nh)
    TNNNABS_VORTICITabsv#th410210NNNinss-1Absolute vorticity
    TNNNPOT_VORTICITpv#th40214NNNinsK m2/kg/sPotential vorticity
    TNNNDIVERGENCEd#th440213NNNinss-1Relative divergence

    FA fields without any default GRIB1 translation

    Some very special fields are left without any default translation. Please see the gl documentation on how to add your own translation.

    FA nameUnitComment
    CUF1PRESSURECoupling error field.
    THETAPWP_FLUXK m-4 s-1Instantaneous thetaprimwprim surface flux
    CLPMOCON.MOD.XFUkg kg-1 s-1MOCON model output
    ATMONEBUL.TOTALEAccumulated Total cloud cover.
    ATMONEBUL.CONVECAccumulated Convective cloud cover.
    ATMONEBUL.BASSEAccumulated Low cloud cover.
    ATMONEBUL.MOYENNAccumulated Medium cloud cover.
    ATMONEBUL.HAUTEAccumulated High cloud cover.
    SURFCFU.Q.TURBULAccumulated contribution of Turbulence to Q.
    SURFCFU.CT.TURBULAccumulated contribution of Turbulence to CpT
    SUNSHI. DURATIONSunshine duration.
    SURFFL.U TURBULContribution of Turbulence to U.
    SURFFL.V TURBULContribution of Turbulence to V.
    SURFFL.Q TURBULContribution of Turbulence to Q.
    SURFFL.CT TURBULContribution of Turbulence to CpT
    SNNNSRCSecond order flux.

    Variables postprocessed by gl

    The following fields can be generated by gl from a history file and are thus not necessarily available as FA fields in HARMONIE's FA output. When calculating these post-processed fields, make sure the fields required to derive them are present in the input files! For details, check util/gl/grb/postprocess.f90 and the routines called therein.

    Single level fields

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    MSLPRESSUREprespslhas10300insPaMSLP. gl calculates MSLP independently of AROME/FullPos
    #tdtdhag170062insKDew point temperature
    #vis#hag2001900insmVisibility
    #wdir#ttt31020lllinsDeg. trueWind direction. gl calculates based on u[33,ttt,lll] and v[34,ttt,lll] wind components
    #ws#ttt32021lllinsm/sWind speed. gl calculates based on u[33,ttt,lll] and v[34,ttt,lll] wind components
    TOT.WATER.PRECIPtpprhag610180acckg/m2Total precipitation, gl calculates TP[61,105,0]=rain[181,105,0]+snow[184,105,0]+graupel[201,105,0]+hail[204,105,0]
    TOT.SOLID.PRECIPtpsolidprsolidhag185012000acckg/m2Total solid precipitation, gl calculates [185,105,0]=snow[184,105,0]+graupel[201,105,0]+hail[204,105,0]
    #mldzmlahag6701930insmMixed layer depth/boundary layer height
    #tcc#hag71061922ins0-1Fog, cloud fraction of lowest model level
    #icei#hag1350ins-Icing index
    #atmiceg#hy??01205insm/sIcing index, Atmospheric ice growth rate
    #icei2#hag/?134011940ins-Icing index version 2
    #psct#hag/ct?1360400insKPseudo satellite image, cloud top temperature (infrared)
    #pstb#hag137041980insKPseudo satellite image, water vapour brightness temperature
    #pstbc#hag138041990insKPseudo satellite image, water vapour br. temp. + correction for clouds
    #pscw#hag139042000ins-Pseudo satellite image, cloud water reflectivity (visible)
    #prtp#hag14401190inscodePrecipitation type, 0:drizzle, 1:rain, 2:sleet, 3:snow, 4:freezing drizzle, 5:freezing rain, 6:graupel, 7:hail
    #fg#ttt2280222lllmaxm/sGust wind speed, calculated from ugst & vgst on corresponding level & levelType
    #hti#hag1480171930ins-Helicopter Triggered lightning Index
    #transmit#hag149061990ins-Transmittance
    #cat#hag145019220ins-|%CAT (clear air turbulence) index
    #bvf#hag1590192020inss-1Brunt Vaisala frequency
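
    The gl derivations of total and total solid precipitation listed above are point-wise sums of the component fields. A minimal sketch with invented accumulated amounts (kg/m2) on four grid points; the bracketed numbers refer to the GRIB1 parameter indicators:

```python
# Invented accumulated precipitation components (kg/m2) on four grid points.
rain    = [0.5, 1.0, 0.0, 2.5]     # rain    [181]
snow    = [0.25, 0.0, 0.5, 0.0]    # snow    [184]
graupel = [0.0, 0.25, 0.125, 0.0]  # graupel [201]
hail    = [0.0, 0.0, 0.0, 0.0]     # hail    [204]

# Total precipitation [61] = rain + snow + graupel + hail, point-wise
tp = [r + s + g + h for r, s, g, h in zip(rain, snow, graupel, hail)]

# Total solid precipitation [185] = snow + graupel + hail, point-wise
tp_solid = [s + g + h for s, g, h in zip(snow, graupel, hail)]

assert tp == [0.75, 1.25, 0.625, 2.5]
assert tp_solid == [0.25, 0.25, 0.625, 0.0]
```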

    Integrated quantities

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    TOT.NEB.ICEciwc_vintcliviea5801700inskg/m2Vertical integral of cloud ice
    TOT.NEB.WATcwat_vintclqviea7601690inskg/m2Vertical integral of cloud liquid water
    #msca#ea133061970ins0-1Mask of significant cloud amount
    #cape#hag1600760insJ/kgConvective Available Potential Energy, comes in two flavours, cape_version=1|2, where the second is compatible with the ECMWF version
    #cin#hag1650770insJ/kgConvective inhibition, comes in two flavours, cape_version=1|2, where the second is compatible with the ECMWF version
    #rain_vintclrviea18101450inskg/m2Vertical integral of rain
    #snow_vintclsviea18401460inskg/m2Vertical integral of snow
    #grpl_vintclgviea20101740inskg/m2Vertical integral of graupel
    #cb#ea18606110insmCloud base
    #ct#ea18706120insmCloud top
    #cb38#hag?189061983insmCloud base >3/8
    #lgt#ea2090171920insflash/hLightning intensity
    #lmxws#ea/mw?1420360insmLevel of max wind speed
    #maxucol#ea1640220insm/sMax u-component in column
    #maxvcol#ea1770230insm/sMax v-component in column
    #lmxice#ea143011990insmIcing index, Level of max icing
    #mxicegr#ea141012040ins1Icing index, Max icing growth index (0-4)
    #blice#ea14601960insmIcing index, Bottom level of icing
    #tlice#ea14701950insmIcing index, Top level of icing
    #cat_maxlev#ea1500192010insmLevel of max CAT index
    #cat_max#ea1510191970ins-Max CAT index
    #cat_b#ea1520191980insmBottom level of CAT
    #cat_t#ea1530191990insmTop level of CAT

    GRIB encoding information

    Time units, WMO code table 4

    The following time units are used to encode GRIB edition 1 data:

    | Code | Unit |
    | --- | --- |
    | 0 | Minute |
    | 1 | Hour |
    | 13 | 15 minutes |
    | 14 | 30 minutes |

    Time range indicator, WMO code table 5

    | Code | abbr | Definition |
    | --- | --- | --- |
    | 0 | ins | Forecast product valid for reference time + P1 (P1 > 0), or uninitialized analysis product for reference time (P1 = 0) |
    | 2 | min/max | Product with a valid time ranging between reference time + P1 and reference time + P2. Used for min/max values |
    | 3 | avg | Average (reference time + P1 to reference time + P2) |
    | 4 | acc | Accumulation (reference time + P1 to reference time + P2), product considered valid at reference time + P2 |

    Note that fields available as both instantaneous and accumulated values, e.g. rain, have the same parameter values and can only be distinguished by the time range indicator.

    Level types, WMO Code table 3

    | level type | name | abbr | WMO/HIRLAM type definition | Units | notes |
    | --- | --- | --- | --- | --- | --- |
    | 001 | surface | sfc | Ground or water surface | | WMO |
    | 002 | cloudBase | cb | Cloud base level | | WMO |
    | 003 | cloudTop | ct | Level of cloud tops | | WMO |
    | 004 | isothermZero | isot0 | Level of 0°C isotherm | | WMO |
    | 005 | adiabaticCondensation | ac | Level of Adiabatic Condensation Lifted from the Surface | | WMO |
    | 006 | maxWind | mw | Maximum wind level | | WMO |
    | 007 | tropopause | tp | Tropopause | | WMO |
    | 008 | nominalTop | nt | Top-of-atmosphere | | WMO |
    | 020 | isothermal | isot | Isothermal level | Temperature in 1/100 K | WMO |
    | 100 | isobaricInhPa | pl | Isobaric level | hPa | WMO |
    | 102 | meanSea | ms | At mean sea level | | |
    | 103 | heightAboveSea | has | Specified altitude above mean sea level | Altitude in m | WMO |
    | 105 | heightAboveGround | hag | Specified height above ground | Altitude in m | WMO |
    | 107 | sigma | | Sigma level | Sigma value in 1/10000 | WMO |
    | 109 | hybrid | hy | Hybrid level | | WMO |
    | 112 | depthBelowLandLayer | dbl | | | |
    | 113 | theta | th | Isentropic (theta) level | Potential temperature in K | WMO |
    | 117 | potentialVorticity | pv | Potential vorticity surface | 10-9 K m2 kg-1 s-1 | WMO |
    | 192 | isothermZeroWetBulb | isot0wb | | | |
    | 200 | entireAtmosphere | ea | Entire atmosphere (considered as a single layer) | | WMO, vertically integrated |
    | | levelFreeConvection | lfc | as heightAboveGround in GRIB1 | | |
    | | levelNeutralBuoyancy | lnb | as heightAboveGround in GRIB1 | | |

    Harmonie GRIB1 code table 2 version 253 - Indicator of parameter

    Below is the indicator of parameter code table for the HARMONIE model. It is based on WMO code table 2 version 3, with local parameters added. Parameter indicators 128-254 are reserved for originating centre use; parameter indicators 000-127 should not be altered. In HARMONIE, radiation fluxes are assumed positive downwards (against the recommendation of the WMO).

    | Par | Description | SI Units |
    | --- | --- | --- |
    | 000 | Reserved | n/a |
    | 001 | Pressure | Pa |
    | 002 | Pressure reduced to MSL | Pa |
    | 003 | Pressure tendency | Pa s-1 |
    | 004 | Potential vorticity | K m2 kg-1 s-1 |
    | 005 | ICAO Standard Atmosphere reference height | m |
    | 006 | Geopotential | m2 s-2 |
    | 007 | Geopotential height | gpm |
    | 008 | Geometrical height | m |
    | 009 | Standard deviation of height | m |
    | 010 | Total ozone | Dobson |
    | 011 | Temperature | K |
    | 012 | Virtual temperature | K |
    | 013 | Potential temperature | K |
    | 014 | Pseudo-adiabatic potential temperature | K |
    | 015 | Maximum temperature | K |
    | 016 | Minimum temperature | K |
    | 017 | Dew-point temperature | K |
    | 018 | Dew-point depression (or deficit) | K |
    | 019 | Lapse rate | K m-1 |
    | 020 | Visibility | m |
    | 021 | Radar spectra (1) | - |
    | 022 | Radar spectra (2) | - |
    | 023 | Radar spectra (3) | - |
    | 024 | Parcel lifted index (to 500 hPa) | K |
    | 025 | Temperature anomaly | K |
    | 026 | Pressure anomaly | Pa |
    | 027 | Geopotential height anomaly | gpm |
    | 028 | Wave spectra (1) | - |
    | 029 | Wave spectra (2) | - |
    | 030 | Wave spectra (3) | - |
    | 031 | Wind direction | Degree true |
    | 032 | Wind speed | m s-1 |
    | 033 | u-component of wind | m s-1 |
    | 034 | v-component of wind | m s-1 |
    | 035 | Stream function | m2 s-1 |
    | 036 | Velocity potential | m2 s-1 |
    | 037 | Montgomery stream function | m2 s-1 |
    | 038 | Sigma coordinate vertical velocity | s-1 |
    | 039 | Vertical velocity | Pa s-1 |
    | 040 | Vertical velocity | m s-1 |
    | 041 | Absolute vorticity | s-1 |
    | 042 | Absolute divergence | s-1 |
    | 043 | Relative vorticity | s-1 |
    | 044 | Relative divergence | s-1 |
    | 045 | Vertical u-component shear | s-1 |
    | 046 | Vertical v-component shear | s-1 |
    | 047 | Direction of current | Degree true |
    | 048 | Speed of current | m s-1 |
    | 049 | u-component of current | m s-1 |
    | 050 | v-component of current | m s-1 |
    | 051 | Specific humidity | kg kg-1 |
    | 052 | Relative humidity | % |
    | 053 | Humidity mixing ratio | kg kg-1 |
    | 054 | Precipitable water | kg m-2 |
    | 055 | Vapor pressure | Pa |
    | 056 | Saturation deficit | Pa |
    | 057 | Evaporation | kg m-2 |
    | 058 | Cloud ice | kg m-2 |
    | 059 | Precipitation rate | kg m-2 s-1 |
    | 060 | Thunderstorm probability | % |
    | 061 | Total precipitation | kg m-2 |
    | 062 | Large scale precipitation | kg m-2 |
    | 063 | Convective precipitation | kg m-2 |
    | 064 | Snowfall rate water equivalent | kg m-2 s-1 |
    | 065 | Water equivalent of accumulated snow depth | kg m-2 |
    | 066 | Snow depth | m |
    | 067 | Mixed layer depth | m |
    | 068 | Transient thermocline depth | m |
    | 069 | Main thermocline depth | m |
    | 070 | Main thermocline anomaly | m |
    | 071 | Total cloud cover | % |
    | 072 | Convective cloud cover | % |
    | 073 | Low cloud cover | % |
    | 074 | Medium cloud cover | % |
    | 075 | High cloud cover | % |
    | 076 | Cloud water | kg m-2 |
    | 077 | Best lifted index (to 500 hPa) | K |
    | 078 | Convective snow | kg m-2 |
    | 079 | Large scale snow | kg m-2 |
    | 080 | Water temperature | K |
    | 081 | Land cover (1 = land, 0 = sea) | Proportion |
    | 082 | Deviation of sea level from mean | m |
    | 083 | Surface roughness | m |
    | 084 | Albedo | % |
    | 085 | Soil temperature | K |
    | 086 | Soil moisture content | kg m-2 |
    | 087 | Vegetation | % |
    | 088 | Salinity | kg kg-1 |
    | 089 | Density | kg m-3 |
    | 090 | Water run-off | kg m-2 |
    | 091 | Ice cover (1 = ice, 0 = no ice) | Proportion |
    | 092 | Ice thickness | m |
    | 093 | Direction of ice drift | Degree true |
    | 094 | Speed of ice drift | m s-1 |
    | 095 | u-component of ice drift | m s-1 |
    | 096 | v-component of ice drift | m s-1 |
    | 097 | Ice growth rate | m s-1 |
    | 098 | Ice divergence | s-1 |
    | 099 | Snow melt | kg m-2 |
    | 100 | Significant height of combined wind waves and swell | m |
    | 101 | Direction of wind waves | Degree true |
    | 102 | Significant height of wind waves | m |
    | 103 | Mean period of wind waves | s |
    | 104 | Direction of swell waves | Degree true |
    | 105 | Significant height of swell waves | m |
    | 106 | Mean period of swell waves | s |
    | 107 | Primary wave direction | Degree true |
    | 108 | Primary wave mean period | s |
    | 109 | Secondary wave direction | Degree true |
    | 110 | Secondary wave mean period | s |
    | 111 | Net short-wave radiation flux (surface) | W m-2 |
    | 112 | Net long-wave radiation flux (surface) | W m-2 |
    | 113 | Net short-wave radiation flux (top of atmosphere) | W m-2 |
    | 114 | Net long-wave radiation flux (top of atmosphere) | W m-2 |
    | 115 | Long-wave radiation flux | W m-2 |
    | 116 | Short-wave radiation flux | W m-2 |
    | 117 | Global radiation flux | W m-2 |
    | 118 | Brightness temperature | K |
    | 119 | Radiance (with respect to wave number) | W m-1 sr-1 |
    | 120 | Radiance (with respect to wave length) | W m-3 sr-1 |
    | 121 | Latent heat flux | W m-2 |
    | 122 | Sensible heat flux | W m-2 |
    | 123 | Boundary layer dissipation | W m-2 |
    124Momentum flux, u-componentN m-2
    125Momentum flux, v-componentN m-2
    126Wind mixing energyJ
    127Image data-
    128Analysed RMS of PHI (CANARI)m2 s-2
    129Forecasted RMS of PHI (CANARI)m2 s-2
    130SW net clear sky radW m-2
    131LW net clear sky radW m-2
    132Latent heat flux through evaporationW m-2
    133Mask of significant cloud amount0-1
    134Icing index version 2-
    135Icing indexCode table
    136Pseudo satellite image, cloud top temperature (infrared)K
    137Pseudo satellite image, water vapour brightness temperatureK
    138Pseudo satellite image, water vapour br. temp. + correction for cloudsK
    139Pseudo satellite image, cloud water reflectivity (visible)?
    140Direct normal irradianceJ m-2
    141Max icing growth index-
    142Level of max wind speedm
    143Level of max icingm
    144Precipitation TypeCode table
    145CAT index- / %
    146Bottom level of icingm
    147Top level of icingm
    148Helicopter Triggered lightning Index-
    149Transmittance-
    150Level of max CAT indexm
    151Max CAT index-
    152Bottom level of CATm
    153Top level of CATm
    154Max Wind speedm s-1
    155Available#
    156Available#
    157Available#
    158Surface downward moon radiationW m-2
    159Brunt Vaisala frequencys-1
    160CAPEJ kg-1
    161AROME hail diagnostic%
    162U-momentum of gusts out of the modelm s-1
    163V-momentum of gusts out of the modelm s-1
    164Max u-component in columnm s-1
    165Convective inhibition (CIN)J kg-1
    166MOCON out of the modelkg/kg s-1
    167Lifting condensation level (LCL)m
    168Level of free convection (LFC)m
    169Level of neutral buoyancy (LNB)m
    170Brightness temperature OZ clearK
    171Brightness temperature OZ cloudK
    172Brightness temperature IR clearK
    173Brightness temperature IR cloudK
    174Brightness temperature WV clearK
    175Brightness temperature WV cloudK
    176Virtual potential temperatureK
    177Max v-component in columnm s-1
    178Available#
    179Available#
    180Available#
    181Rainkg m-2
    182Stratiform Rainkg m-2
    183Convective Rainkg m-2
    184Snowkg m-2
    185Total solid precipitationkg m-2
    186Cloud basem
    187Cloud topm
    188Fraction of urban landProportion
    189Cloud base >3/8m
    190Snow AlbedoProportion
    191Snow densitykg/m3
    192Water on canopykg/m2
    193Soil icekg/m2
    194Available#
    195Gravity wave stress U-compN/m2
    196Gravity wave stress V-compN/m2
    197Available#
    198Available#
    199Vegetation type-
    200TKEm2 s-2
    201Graupelkg m-2
    202Stratiform Graupelkg m-2
    203Convective Graupelkg m-2
    204Hailkg m-2
    205Stratiform Hailkg m-2
    206Convective Hailkg m-2
    207Available#
    208Available#
    209Lightningflash h-1
    210Simulated reflectivitydBz
    211Wind power productionMW or MJ
    212Pressure departurePa
    213Vertical divergences-1
    214UD_OMEGAms-1?
    215DD_OMEGAms-1?
    216UDMESHFRAC-
    217DDMESHFRAC-
    218PSHICONVCL-
    219Surface albedo for non snow covered areasProportion
    220Standard deviation of orography * gm2 s-2
    221Anisotropy coeff of topography-
    222Direction of main axis of topographyrad
    223Roughness length of bare surface * gm2 s-2
    224Roughness length for vegetation * gm2 s-2
    225Fraction of clay within soilProportion
    226Fraction of sand within soilProportion
    227Maximum proportion of vegetationProportion
    228Gust wind speedm s-1
    229Albedo of bare groundProportion
    230Albedo of vegetationProportion
    231Stomatal minimum resistances/m
    232Leaf area indexm2/m2
    233Thetaprimwprim surface fluxKm/s
    234Dominant vegetation index-
    235Surface emissivity-
    236Maximum soil depthm
    237Soil depthm
    238Soil wetnesskg/m2
    239Thermal roughness length * gm2 s-2
    240Resistance to evapotranspirations/m
    241Minimum relative moisture at 2 meters%
    242Maximum relative moisture at 2 meters%
    243Duration of total precipitationss
    244Latent Heat SublimationW/m2
    245Water evaporationkg/m2
    246Snow sublimationkg/m2
    247Snow history???
    248A OZONEkg kg-1
    249B OZONEkg kg-1
    250C OZONEkg kg-1
    251Surface aerosol seakg kg-1
    252Surface aerosol landkg kg-1
    253Surface aerosol sootkg kg-1
    254Surface aerosol desertkg kg-1
    255Missing valuen/a

    SURFEX output Harmonie GRIB1 code table 2 version 001

    Levels are used in the conversion of SURFEX output to GRIB to indicate tile/patch/type/level:

    level  description
    300    Extra yet unknown SURFEX variables
    301    Fraction of each vegetation types on PATCH 1
    302    Fraction of each vegetation types on PATCH 2
    303    Fraction of each vegetation types cy43 (ECOCLIMAP-SG)
    600    Physiography fields?
    720    Sea ice
    730    Sea ice (TICE_LL)
    755    Precip
    760    Sea
    770    in addition to FLake (or instead of it)
    780    Flake
    790    Patch (*_P fields)
    800    ISBA
    810    Gridpoint average
    820    Surface boundary multi layer fields
    830    ISBA - patch 1 (X001*, open land)
    840    ISBA - patch 2 (X002*, forest)
    950    Town energy balance model (TEB)

    A small selection of fields available in the SURFEX output files is shown below.

    FA nameshortNameNCnamelvTiOPlevsTunitsdescription
    FRAC_SEA#sftofhag32300ins0-1Fraction of sea
    FRAC_WATER#sftlafhag33300ins0-1Fraction of water
    FRAC_NATURE#sftnfhag34300ins0-1Fraction of nature
    FRAC_TOWN#sfturfhag35300ins0-1Fraction of town
    COVER001#lsm10insLAND SEA MASK
    COVER002-COVER243##002-2430insECOCLIMAP I cover types
    COVER255##2550insECOCLIMAP I MY_COVER type
    COVER301-COVER573##001-254 & 001-0190insECOCLIMAP II cover types
    ZS#oroghag80insmOro hgt.
    SST#tosms110insKSST
    SIC#siconcams910ins0-1SIC
    T2M_SEA#tas_seahag11760insKT2m sea
    Q2M_SEA#huss_seahag51760inskg kg-1Q2m sea
    MER10M_SEA#vas_seahag34760insm s-1V10m sea
    ZON10M_SEA#uas_seahag33760insm s-1U10m sea
    T2M_WAT#tas_waterhag11772insKT2m water
    Q2M_WAT#huss_waterhag51770inskg kg-1Q2m water
    MER10M_WAT#vas_waterhag34770insm s-1V10m water
    ZON10M_WAT#uas_waterhag33770insm s-1U10m water
    DSNTISBA#sndhag660insmSnow depth
    WSNTISBA#snwhag130inskg m-2Total snow reservoir
    T2M_ISBA#tas_naturehag11802insKT2m isba
    Q2M_ISBA#huss_naturehag51802inskg kg-1Q2m isba
    X001T2M_P#tashag11832insKT2m of patch 1
    X002T2M_P#tashag11842insKT2m of patch 2
    T2M_TEB#tas_townhag11950insKT2m town
    T2MMAX_TEB#tasmax_townhag15950maxKMax Temp for town
    T2MMIN_TEB#tasmin_townhag16950minKMin Temp for town
    TGL#tg_LLLhag11800+insKTemperature of soil layer L(isba)
    WGL#wsa_LLLhag86800+insm3 m-3Liquid volumetric water content of soil layer L
    WGIL#isa_LLLhag193800+insm3 m-3Frozen volumetric water content of soil layer L
    WR#wrhag12800inskg m-2Liquid water retained by foliage (isba)
    DGL#dsoil_LLLhag23300insmSoil depth of soil layer L

    Harmonie GRIB1 code table 2 version 210

    Used for aerosol fields


    Parameter list and GRIB definitions

    HARMONIE system output

    The HARMONIE system writes its primary output, in FA format, to the upper air history files ICMSHHARM+llll and the SURFEX history files ICMSHHARM+llll.sfx, where HARM is the four-character experiment identifier set in the configuration file config_exp.h, and llll is normally the current time step in hours. The files are designed to be complete snapshots of the respective model state at a particular point in time. In addition, further model output, including post-processed/diagnostic fields such as pressure level diagnostics, can be written out during the forecast model integration, also in FA format, as PFHARMDOMAIN+llll. The FA files can be considered internal format files. All of them can be converted to GRIB files during the run for external use. The name convention is as follows:
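    The naming pattern above can be sketched in a few lines of Python. This is an illustration only: the zero-padding of llll to four digits and the helper function name are assumptions, not part of the HARMONIE scripts, and DOMAIN is kept as the literal placeholder used above.

```python
# Sketch of the HARMONIE FA output file naming described above.
# Assumed: llll is the forecast length in hours, zero-padded to 4 digits.
def history_file_names(exp: str, hour: int) -> dict:
    """Build the FA output file names for one output time.

    exp  -- four-character experiment identifier from config_exp.h
    hour -- forecast length in hours (the +llll suffix)
    """
    suffix = f"+{hour:04d}"
    return {
        "atmosphere": f"ICMSH{exp}{suffix}",   # upper air history file
        "surfex": f"ICMSH{exp}{suffix}.sfx",   # SURFEX history file
        "fullpos": f"PF{exp}DOMAIN{suffix}",   # post-processed (fullpos) file
    }

print(history_file_names("HARM", 12)["atmosphere"])  # ICMSHHARM+0012
```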

    GRIB1 table 2 version in HARMONIE

    To avoid conflicts with archived HIRLAM data, HARMONIE uses version 253 of table 2. The table is based on the standard WMO version 3 of table 2, and positions 000-127 are kept the same as in the WMO standard. Note that accumulated and instantaneous versions of the same parameter differ only by the time range indicator. It is thus not sufficient to specify parameter, type and level when referring to an accumulated parameter; the time range indicator has to be included as well.

    The translation of SURFEX files to GRIB1 is still incomplete and contains several WMO violations. This has not been changed in the current release but will be revised later. However, the upper air history file also includes the most common surface parameters and should be sufficient for most users.

    The current table 2 version 253 definition files for gribapi can be found in `util/glgrib_api/definitions/`. These local definition files assume centre=233 (Dublin) and should be copied to your own GRIB-API installation. You are strongly recommended to set your own centre code when generating data for operational use.

    GRIB2 in HARMONIE

    The possibility to convert to GRIB2 was introduced in release-43h2. So far the conversion is restricted to atmospheric history and fullpos files only. To get the output in GRIB2, set ARCHIVE_FORMAT=GRIB2 in ecf/config_exp.h. Please note that even if ARCHIVE_FORMAT=GRIB2 is selected, SURFEX files will still be converted to GRIB1 (for the time being). To convert from GRIB1 to GRIB2 using grib_filter we have to tell ecCodes how to translate the parameters. This is done by using the internal HARMONIE tables and setting

    export ECCODES_DEFINITION_PATH=$SOME_PATH_TO_GL/gl/definitions:$SOME_PATH_TO_ECCODES/share/eccodes/definitions

    Note that there are a few parameters that are not translated to GRIB2, and those have to be excluded explicitly.
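    As a minimal sketch, a grib_filter rules file for such a conversion needs little more than an edition switch; ecCodes then uses the definition tables found via ECCODES_DEFINITION_PATH to translate each parameter. The exact rules used operationally may differ.

```
# rules.filter: re-encode every GRIB1 message as GRIB2
set edition = 2;
write;
```

    This could then be run as grib_filter -o output.grib2 rules.filter input.grib1, after excluding the untranslatable parameters noted above.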

    List of parameters

    Header abbreviations used in the tables:

    abbr.  description           see table
    lvT    levelType             level types
    iOP    indicatorOfParameter  indicator of parameter
    d      discipline
    pC     parameterCategory
    pN     parameterNumber
    lev    level
    sT     stepType              time range indicator

    3D model state variables on model levels (1-NLEV), levelType=hybrid

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SNNNHUMI.SPECIFIqhushy510101inskg/kgSpecific humidity
    SNNNLIQUID_WATERcwat_condclwhy7601831inskg/kgSpecific cloud liquid water content
    SNNNSOLID_WATERciwc_condclihy5801841inskg/kgSpecific cloud ice water content
    SNNNSNOWsnow_cond#hy18401861inskg/kgSpecific snow water content
    SNNNRAINrain_cond#hy18101851inskg/kgSpecific rain water content
    SNNNGRAUPELgrpl_cond#hy20101321inskg/kgSpecific graupel
    SNNNTKEtketkehy200019111insJ/kgTurbulent Kinetic Energy
    SNNNCLOUD_FRACTItccclthy71061921ins0-1Total cloud cover
    SNNNPRESS.DEPARTpdep#hy2120381insPaPressure departure
    SNNNTEMPERATUREttahy110001insKTemperature
    SNNNVERTIC.DIVERvdiv#hy213021921inss-1Vertical Divergence
    SNNNWIND.U.PHYSuuahy330221insm/su-component of wind
    SNNNWIND.V.PHYSvvahy340231insm/sv-component of wind

    2D Surface, prognostic/diagnostic near-surface and soil variables, levelType=heightAboveGround

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SURFPRESSIONprespshag10300insPaSurface pressure
    SURFTEMPERATUREtts_radhag110000insKSurface temperature
    CLSTEMPERATUREttashag110002insKTemperature at 2m
    CLSMAXI.TEMPERATtmaxtasmaxhag150002maxKMaximum temperature (FREQ_RESET_TEMP)
    CLSMINI.TEMPERATtmintasminhag160002minKMinimum temperature (FREQ_RESET_TEMP)
    CLSVENT.ZONALuuashag3302210insm/su-component of wind at 10m, relative to model coordinates
    CLSVENT.MERIDIENvvashag3402310insm/sv-component of wind at 10m, relative to model coordinates
    CLSHUMI.SPECIFIQqhusshag510102inskg/kgSpecific humidity at 2m
    CLSHUMI.RELATIVErhurshag52011922ins0-1Relative humidity at 2m
    SURFRESERV.NEIGEsdwesnwhag6501600inskg/m2Snow depth water equivalent
    CLPMHAUT.MOD.XFUmldzmlahag6701930insmHeight (in meters) of the PBL out of the model
    SURFNEBUL.TOTALEtccclt_inshag71061920ins0-1Total cloud cover
    SURFNEBUL.CONVECcccclc_inshag72061930ins0-1Convective cloud cover
    SURFNEBUL.BASSElcccll_inshag73061940ins0-1Low cloud cover
    SURFNEBUL.MOYENNmccclm_inshag74061950ins0-1Medium cloud cover
    SURFNEBUL.HAUTEhccclh_inshag75061960ins0-1High cloud cover
    SURFRAYT.SOLAIREswavr#hag1160470insW/m2Instantaneous surface solar radiation (SW down global) Parameter identifier was 116, again is???
    SURFRAYT.TERRESTlwavr#hag1150540insW/m2Instantaneous longwave radiation flux
    SURFCAPE.MOD.XFUcapecapehag1600760insJ/kgModel output CAPE (not calculated by AROME physics)
    SURFDIAGHAILxhail#hag161012030ins0-1AROME hail diagnostic, LXXDIAGH = .TRUE.
    CLSU.RAF.MOD.XFUugstugshag162022310maxm/sU-momentum of gusts from the model. LXXGST = .TRUE. in NAMXFU. gives gust between current and previous output time step (FREQ_RESET_GUST)
    CLSV.RAF.MOD.XFUvgstvgshag163022410maxm/sV-momentum of gusts from the model. LXXGST = .TRUE. in NAMXFU. gives gust between current and previous output time step (FREQ_RESET_GUST)
    SURFINSPLUIErain#hag18101650inskg/m2Instantaneous rain
    SURFINSNEIGEsnow#hag18401530inskg/m2Instantaneous snow
    SURFINSGRAUPELgrpl#hag20101750inskg/m2Instantaneous graupel
    CLSMINI.HUMI.RELrmn2m#hag2410112min0-1Minimum relative moisture at 2m over 3h
    CLSMAXI.HUMI.RELrmx2m#hag2420112max0-1Maximum relative moisture at 2m over 3h
    CLSRAFALES.POSfgwsgsmaxhag228022210maxm/sGust wind speed

    2D Surface, accumulated near-surface and soil variables

    Note that all these are coded with stepType=accum

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    S065RAYT SOL CLcssw#hy130041165accJ/m2SW net clear sky rad
    S065RAYT THER CLcslw#hy13105665accJ/m2LW net clear sky rad
    SURFACCGRAUPELgrplprgrplhag20101750acckg/m2Accumulated graupel
    SURFACCNEIGEsnowprsnhag18401530acckg/m2Accumulated snowfall
    SURFACCPLUIErainprrainhag18101650acckg/m2Accumulated rain
    SURFDIR NORM IRRdneridshag1403630accJ/m2Direct normal exposure
    SURFFLU.CHA.SENSshfhfsshag12200110accJ/m2Sensible heat flux
    SURFFLU.LAT.MEVAlhehfls_evahag132011930accJ/m2Latent heat flux through evaporation
    SURFFLU.LAT.MSUBlhsubhfls_sblhag244012020accJ/kgLatent Heat Sublimation
    SURFFLU.MEVAP.EAwevapevspsblhag2450160acckg/m2Water evaporation
    SURFFLU.MSUBL.NEsnsubsbl_snowhag24601620acckg/m2Snow sublimation
    SURFFLU.RAY.SOLAnswrsrsnshag1110490accJ/m2Net shortwave radiation flux (surface)
    SURFFLU.RAY.THERnlwrsrlnshag1120550accJ/m2Net longwave radiation flux (surface)
    SURFRAYT DIR SURswavrrsdsdirhag1160470accJ/m2Shortwave radiation flux
    SURFRAYT SOLA DEgradrsdshag1170430accJ/m2Global radiation flux
    SURFRAYT THER DElwavrrldshag1150540accJ/m2Longwave radiation flux
    SURFTENS.TURB.MEvflxtauvhag125021990accN/m2Momentum flux, v-component
    SURFTENS.TURB.ZOuflxtauuhag124021980accN/m2Momentum flux, u-component

    2D TOA, diagnostic and accumulated variables, levelType=nominalTop

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SOMMFLU.RAY.SOLAnswrtrsntnt1130490accJ/m2Net shortwave radiation flux(atmosph.top)
    SOMMFLU.RAY.THERnlwrtrlntnt1140550accJ/m2Net longwave radiation flux(atmosph.top)
    SOMMRAYT.SOLAIREnswrt#nt1130490insW/m2Net shortwave radiation flux(atmosph.top)
    SOMMRAYT.TERRESTnlwrt#nt1140550insW/m2Net longwave radiation flux(atmosph.top)
    SOMMRAYT SOL CLcsswrsntcsnt13004110accJ/m2TOA Net shortwave clear sky radiation(atmosph.top)
    SOMMRAYT THER CLcslwrlntcsnt1310560accJ/m2TOA Net longwave clear sky radiation(atmosph.top)
    TOPRAYT DIR SOMswavrrsdtnt1160470accJ/m2TOA Accumulated SW down radiation Parameter identifier was 117
    SOMMTBOZCLEARbtozcs#nt170-1-1-10-KBrightness temperature OZ clear
    SOMMTBOZCLOUDbtozcl#nt171-1-1-10-KBrightness temperature OZ cloud
    SOMMTBIRCLEARbtircs#nt172-1-1-10-KBrightness temperature IR clear
    SOMMTBIRCLOUDbtircl#nt173-1-1-10-KBrightness temperature IR cloud
    SOMMTBWVCLEARbtwvcs#nt174-1-1-10-KBrightness temperature WV clear
    SOMMTBWVCLOUDbtwvcl#nt175-1-1-10-KBrightness temperature WV cloud

    2D Surface, Postprocessed variables (fullpos)

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SURFCAPE.POS.F00capecapehag1600760insJ/kgConvective available potential energy (CAPE)
    SURFCIEN.POS.F00cincinhag1650770insJ/kgConvective inhibition (CIN)
    SURFLIFTCONDLEVlcl#ac1670360insmLifting condensation level (LCL)
    SURFFREECONVLEVlfc#lfc1680360insmLevel of free convection (LFC)
    SURFEQUILIBRLEVlnb#lnb1690360insmLevel of neutral buoyancy (LNB)

    2D Surface, constant near-surface and soil variables

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    SPECSURFGEOPOTENzphis_shag60340insm2/s2Geopotential relative to mean sea level. "... contains a GRID POINT orography which is the interpolation of the departure orography"
    SURFIND.TERREMERlsmlsmhag812000ins0-1Land-sea mask
    SURFAEROS.SEAaers#hag2510131920inskg/kgSurface aerosol sea (Marine aerosols, locally defined GRIB)
    SURFAEROS.LANDaerl#hag2520131930inskg/kgSurface aerosol land (Continental aerosols, locally defined GRIB)
    SURFAEROS.SOOTaerc#hag2530131940inskg/kgSurface carbon aerosol (Carbone aerosols, locally defined GRIB)
    SURFAEROS.DESERTaerd#hag2540131950inskg/kgSurface aerosol desert (Desert aerosols, locally defined GRIB)
    SURFAEROS.VOLCAN##hag197-1-1-1-1Surface aerosol volcan (Stratospheric ash, to be locally defined GRIB)
    SURFAEROS.SULFAT##hag198-1-1-1-1Surface aerosol sulfate (Stratospheric sulfate, to be locally defined GRIB)
    SURFA.OF.OZONEao#hag2480141920inskg/kgA Ozone, First ozone profile (A), locally defined GRIB
    SURFB.OF.OZONEbo#hag2490141930inskg/kgB Ozone, Second ozone profile (B), locally defined GRIB
    SURFC.OF.OZONEco#hag2500141940inskg/kgC Ozone, Third ozone profile (C), locally defined GRIB
    PROFTEMPERATUREslt#dbl8523180insKSoil Temperature
    PROFRESERV.EAUsm#dbl8623200inskg/m2Deep Soil Wetness
    PROFPROP.RMAX.EAswv#dbl23823250inskg/m2Climate relaxed deep soil wetness
    PROFRESERV.GLACEwsoice#dbl19323220inskg/m2Deep soil ice

    2D variables on special surfaces

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    KT273ISOT_ALTITh#isot803627315insmAltitude of 0-degree isotherm
    KT263ISOT_ALTITh#isot803626315insmAltitude of -10-degree isotherm
    SURFISOTPW0.MALTh#isot0wb80360insmAltitude of iso-tprimw=0
    SURFTOT.WAT.VAPOwvintprwea5401640inskg/m2Total column integral water vapour
    WFPOWERINSwfpower_inswfpower_insea21102390insMWWind power production, instantaneous (LWINDFARM=.TRUE. in NAMPHY)
    WFPOWERACCwfpower_accwfpower_accea21102390accMJWind power production, accumulated (LWINDFARM=.TRUE. in NAMPHY)

    Postprocessed variables on different surface types

    Through the postprocessing software fullpos, HARMONIE offers a number of variables postprocessed on different surface types. For the current choice of variables, surfaces and levels, please see scr/Select_postp.pl.

    State variables and diagnostics on pressure levels, leveltype=isobaricInhPa

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    PNNNNNWIND.U.PHYuuapl33022NNNNNinsm/su-component of wind
    PNNNNNWIND.V.PHYvvapl34023NNNNNinsm/sv-component of wind
    PNNNNNTEMPERATURttapl11000NNNNNinsKTemperature
    PNNNNNHUMI.SPECIqhuspl51010NNNNNinskg/kgSpecific humidity
    PNNNNNLIQUID_WATcwat_condclwpl760183NNNNNinskg/kgSpecific cloud liquid water content
    PNNNNNSOLID_WATEciwc_condclipl580184NNNNNinskg/kgSpecific cloud ice water content
    PNNNNNCLOUD_FRACtcc#pl7106192NNNNNins0-1Total cloud cover
    PNNNNNSNOWsnow_cond#pl1840186NNNNNinskg/kgSpecific snow water content
    PNNNNNRAINrain_cond#pl1810185NNNNNinskg/kgSpecific rain water content
    PNNNNNGRAUPELgrpl_cond#pl2010132NNNNNinskg/kgSpecific graupel
    PNNNNNGEOPOTENTIzphipl6034NNNNNinsm2/s2Geopotential
    PNNNNNHUMI_RELATrhurpl5201192NNNNNins0-1Relative humidity
    PNNNNNTHETA_PRIMpaptthetaEpl14003NNNNNinsKPseudo-adiabatic potential temperature
    PNNNNNTHETA_VIRTvptmp#pl1760015NNNNNinsKVirtual potential temperature
    PNNNNNVERT.VELOCwwapl40029NNNNNinsm/sGeometrical vertical velocity
    PNNNNNPOT_VORTICpvpvpl40214NNNNNinsK m2/kg/sPotential vorticity
    PNNNNNABS_VORTICabsv#pl410210NNNNNinss-1Absolute vorticity
    PNNNNNDIVERGENCEd#pl440213NNNNNinss-1Relative divergence

    State variables and diagnostics on height levels, levelType=heightAboveGround

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    HNNNNNWIND.U.PHYuuahag33022NNNNNinsm/su-component of wind
    HNNNNNWIND.V.PHYvvahag34023NNNNNinsm/sv-component of wind
    HNNNNNTEMPERATURttahag11000NNNNNinsKTemperature
    HNNNNNLIQUID_WATcwat_condclwhag760183NNNNNinskg/kgSpecific cloud liquid water content
    HNNNNNSOLID_WATEciwc_condclihag580184NNNNNinskg/kgSpecific cloud ice water content
    HNNNNNCLOUD_FRACtccclthag7106192NNNNNins0-1Total cloud cover
    HNNNNNSNOWsnow_cond#hag1840186NNNNNinskg/kgSpecific snow water content
    HNNNNNRAINrain_cond#hag1810185NNNNNinskg/kgSpecific rain water content
    HNNNNNGRAUPELgrpl_cond#hag2010132NNNNNinskg/kgSpecific graupel
    HNNNNNHUMI_RELATrhurhag5201192NNNNNins0-1Relative humidity
    HNNNNNPRESSUREpresphag1030NNNNNinsPaPressure

    State variables and diagnostics on PV levels, GRIB1 level type 117, levelType=potentialVorticity

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    VNNNGEOPOTENTIELz#pv6034NNNinsm2/s2Geopotential
    VNNNTEMPERATUREt#pv11000NNNinsKTemperature
    VNNNPRESSUREpres#pv1030NNNinsPaPressure
    VNNNHUMI_RELATIVr#pv5201192NNNins0-1Relative humidity
    VNNNHUMI.SPECIFIq#pv51010NNNinskg/kgSpecific humidity
    VNNNWIND.U.PHYSu#pv33022NNNinsm/su-component of wind
    VNNNWIND.V.PHYSv#pv34023NNNinsm/sv-component of wind
    VNNNVITESSE_VERTomega#pv39028NNNinsPa/sPressure vertical velocity (DYNAMICS=h)
    VNNNVERT.VELOCITw#pv40029NNNinsm/sGeometrical vertical velocity (DYNAMICS=nh)
    VNNNTEMPE_POTENTpt#pv13002NNNinsKPotential temperature
    VNNNABS_VORTICITabsv#pv410210NNNinss-1Absolute vorticity
    VNNNDIVERGENCEd#pv440213NNNinss-1Relative divergence
    VNNNTHETAPRIMWpapt#pv14003NNNinsKPseudo-adiabatic potential temperature

    State variables and diagnostics on Theta levels, GRIB1 level type 113, levelType=theta

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    TNNNGEOPOTENTIELz#th6034NNNinsm2/s2Geopotential
    TNNNTEMPERATUREt#th11000NNNinsKTemperature
    TNNNPRESSUREpres#th1030NNNinsPaPressure
    TNNNHUMI_RELATIVr#th5201192NNNins0-1Relative humidity
    TNNNHUMI.SPECIFIq#th51010NNNinskg/kgSpecific humidity
    TNNNWIND.U.PHYSu#th33022NNNinsm/su-component of wind
    TNNNWIND.V.PHYSv#th34023NNNinsm/sv-component of wind
    TNNNVITESSE_VERTomega#th39028NNNinsPa/sPressure vertical velocity (DYNAMICS=h)
    TNNNVERT.VELOCITw#th40029NNNinsm/sGeometrical vertical velocity (DYNAMICS=nh)
    TNNNABS_VORTICITabsv#th410210NNNinss-1Absolute vorticity
    TNNNPOT_VORTICITpv#th40214NNNinsK m2/kg/sPotential vorticity
    TNNNDIVERGENCEd#th440213NNNinss-1Relative divergence

    FA fields without any default GRIB1 translation

    Some very special fields are left without any default translation. Please see the gl documentation on how to add your own translation.

    FA nameUnitComment
    CUF1PRESSURECoupling error field.
    THETAPWP_FLUXK m-4 s-1Instantaneous thetaprimwprim surface flux
    CLPMOCON.MOD.XFUkg kg-1 s-1MOCON model output
    ATMONEBUL.TOTALEAccumulated Total cloud cover.
    ATMONEBUL.CONVECAccumulated Convective cloud cover.
    ATMONEBUL.BASSEAccumulated Low cloud cover.
    ATMONEBUL.MOYENNAccumulated Medium cloud cover.
    ATMONEBUL.HAUTEAccumulated High cloud cover.
    SURFCFU.Q.TURBULAccumulated contribution of Turbulence to Q.
    SURFCFU.CT.TURBULAccumulated contribution of Turbulence to CpT
    SUNSHI. DURATIONSunshine duration.
    SURFFL.U TURBULContribution of Turbulence to U.
    SURFFL.V TURBULContribution of Turbulence to V.
    SURFFL.Q TURBULContribution of Turbulence to Q.
    SURFFL.CT TURBULContribution of Turbulence to CpT
    SNNNSRCSecond order flux.

    Variables postprocessed by gl

    The following fields can be generated by gl from a history file and are thus not necessarily available as FA fields in Harmonie's FA output. When calculating these post-processed fields, make sure the fields required to derive them are present in the input files! For details, check util/gl/grb/postprocess.f90 and the routines called therein.

    Single level fields

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    MSLPRESSUREprespslhas10300insPaMSLP. gl calculates MSLP independent of AROME/!FullPos
    #tdtdhag170062insKDew point temperature
    #vis#hag2001900insmVisibility
    #wdir#ttt31020lllinsDeg. trueWind direction. gl calculates based on u[33,ttt,lll] and v[34,ttt,lll] wind components
    #ws#ttt32021lllinsm/sWind speed. gl calculates based on u[33,ttt,lll] and v[34,ttt,lll] wind components
    TOT.WATER.PRECIPtpprhag610180acckg/m2Total precipitation, gl calculates TP![61,105,0]=rain![181,105,0]+snow![184,105,0]+graupel![201,105,0]+hail![204,105,0]
    TOT.SOLID.PRECIPtpsolidprsolidhag185012000acckg/m2Total solid precipitation, gl calculates ![185,105,0]=snow![184,105,0]+graupel![201,105,0]+hail![204,105,0]
    #mldzmlahag6701930insmMixed layer depth/boundary layer height
    #tcc#hag71061922ins0-1Fog, cloud fraction of lowest model level
    #icei#hag1350ins-Icing index
    #atmiceg#hy??01205insm/sIcing index, Atmospheric ice growth rate
    #icei2#hag/?134011940ins-Icing index version 2
    #psct#hag/ct?1360400insKPseudo satellite image, cloud top temperature (infrared)
    #pstb#hag137041980insKPseudo satellite image, water vapour brightness temperature
    #pstbc#hag138041990insKPseudo satellite image, water vapour br. temp. + correction for clouds
    #pscw#hag139042000ins-Pseudo satellite image, cloud water reflectivity (visible)
    #prtp#hag14401190inscodePrecipitation type, 0:drizzle, 1:rain, 2:sleet, 3:snow, 4:freezing drizzle, 5:freezing rain, 6:graupel, 7:hail
    #fg#ttt2280222lllmaxm/sGust wind speed, calculated from ugst & vgst on corresponding level & levelType
    #hti#hag1480171930ins-Helicopter Triggered lightning Index
    #transmit#hag149061990ins-Transmittance
    #cat#hag145019220ins-|%CAT (clear air turbulence) index
    #bvf#hag1590192020inss-1Brunt Vaisala frequency
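    Two of the gl derivations listed above can be sketched compactly: wind speed and direction from the u/v components (parameters 33 and 34), and total precipitation as the sum of its components. This is a sketch of the standard formulas, not the actual code in util/gl/grb/postprocess.f90.

```python
import math

def wind_speed_direction(u: float, v: float) -> tuple:
    """Wind speed (m/s) and direction (degrees true, the direction the
    wind blows FROM) derived from the u/v components."""
    ws = math.hypot(u, v)
    wdir = (270.0 - math.degrees(math.atan2(v, u))) % 360.0
    return ws, wdir

def total_precipitation(rain: float, snow: float, graupel: float, hail: float) -> float:
    """TP = rain + snow + graupel + hail, as in the table entry above."""
    return rain + snow + graupel + hail

ws, wdir = wind_speed_direction(0.0, -10.0)  # 10 m/s wind from the north
print(ws, wdir)  # 10.0 0.0
```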

    Integrated quantities

    FA nameshortNameNCnamelvTiOPdpCpNlevsTunitsdescription
    TOT.NEB.ICEciwc_vintcliviea5801700inskg/m2Vertical integral of cloud ice
    TOT.NEB.WATcwat_vintclqviea7601690inskg/m2Vertical integral of cloud liquid water
    #msca#ea133061970ins0-1Mask of significant cloud amount
    #cape#hag1600760insJ/kgConvective Available Potential Energy, comes in two flavours, cape_version=1|2, where the second is compatible with the ECMWF version
    #cin#hag1650770insJ/kgConvective inhibition, comes in two flavours, cape_version=1|2, where the second is compatible with the ECMWF version
    #rain_vintclrviea18101450inskg/m2Vertical integral of rain
    #snow_vintclsviea18401460inskg/m2Vertical integral of snow
    #grpl_vintclgviea20101740inskg/m2Vertical integral of graupel
    #cb#ea18606110insmCloud base
    #ct#ea18706120insmCloud top
    #cb38#hag?189061983insmCloud base >3/8
    #lgt#ea2090171920insflash/hLightning intensity
    #lmxws#ea/mw?1420360insmLevel of max wind speed
    #maxucol#ea1640220insm/sMax u-component in column
    #maxvcol#ea1770230insm/sMax v-component in column
    #lmxice#ea143011990insmIcing index, Level of max icing
    #mxicegr#ea141012040ins1Icing index, Max icing growth index (0-4)
    #blice#ea14601960insmIcing index, Bottom level of icing
    #tlice#ea14701950insmIcing index, Top level of icing
    #cat_maxlev#ea1500192010insmLevel of max CAT index
    #cat_max#ea1510191970ins-Max CAT index
    #cat_b#ea1520191980insmBottom level of CAT
    #cat_t#ea1530191990insmTop level of CAT

    GRIB encoding information

    Time units, WMO code table 4

    The following time units are used to encode GRIB edition 1 data

    Code  Unit
    0     Minute
    1     Hour
    13    15 minutes
    14    30 minutes
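    A small sketch of how these codes could be applied when decoding a time step; the mapping follows the table above, but the function name is illustrative.

```python
# GRIB1 time unit codes used by HARMONIE (subset of WMO code table 4)
UNIT_MINUTES = {0: 1, 1: 60, 13: 15, 14: 30}

def step_in_minutes(time_unit: int, p: int) -> int:
    """Convert a GRIB1 step (unitOfTimeRange code, P1 or P2 value) to minutes."""
    return UNIT_MINUTES[time_unit] * p

print(step_in_minutes(13, 4))  # 4 x 15 min = 60
```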

    Time range indicator, WMO code TABLE 5

    Code  abbr     Definition
    0     ins      Forecast product valid for reference time + P1 (P1 > 0), or Uninitialized analysis product for reference time (P1 = 0)
    2     min/max  Product with a valid time ranging between reference time + P1 and reference time + P2. Used for min/max values
    3     avg      Average (reference time + P1 to reference time + P2)
    4     acc      Accumulation (reference time + P1 to reference time + P2), product considered valid at reference time + P2

    Note that fields available as both instantaneous and accumulated values, such as rain, have the same parameter values and can only be distinguished by the time range indicator.
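    This can be illustrated with the rain parameter from the tables above (181 on levelType 105): a lookup key without the time range indicator cannot distinguish the two fields. The dictionary layout here is illustrative, not an ecCodes API.

```python
# Rain exists both instantaneous (TRI 0) and accumulated (TRI 4).
rain_ins = {"iOP": 181, "levelType": 105, "level": 0, "tri": 0}
rain_acc = {"iOP": 181, "levelType": 105, "level": 0, "tri": 4}

def short_key(msg):
    # parameter, level type and level only -- ambiguous
    return (msg["iOP"], msg["levelType"], msg["level"])

def full_key(msg):
    # including the time range indicator resolves the ambiguity
    return short_key(msg) + (msg["tri"],)

assert short_key(rain_ins) == short_key(rain_acc)
assert full_key(rain_ins) != full_key(rain_acc)
```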

    Level types, WMO Code table 3

    level type  name                  abbr     WMO/HIRLAM type definition                        Units                   notes
    001         surface               sfc      Ground or water surface                                                   WMO
    002         cloudBase             cb       Cloud base level                                                          WMO
    003         cloudTop              ct       Level of cloud tops                                                       WMO
    004         isothermZero          isot0    Level of 0°C isotherm                                                     WMO
    005         adiabaticCondensation ac       Level of Adiabatic Condensation Lifted from the Surface                   WMO
    006         maxWind               mw       Maximum wind level                                                        WMO
    007         tropopause            tp       Tropopause                                                                WMO
    008         nominalTop            nt       Top-of-atmosphere                                                         WMO
    020         isothermal            isot     Isothermal level                                  Temperature in 1/100 K  WMO
    100         isobaricInhPa         pl       Isobaric level                                    hPa                     WMO
    102         meanSea               ms       At mean sea level
    103         heightAboveSea        has      Specified altitude above mean sea level           Altitude in m           WMO
    105         heightAboveGround     hag      Specified height above ground                     Altitude in m           WMO
    107         sigma                          Sigma level                                       Sigma value in 1/10000  WMO
    109         hybrid                hy       Hybrid level                                                              WMO
    112         depthBelowLandLayer   dbl
    113         theta                 th       Isentropic (theta) level                          Potential temperature in K  WMO
    117         potentialVorticity    pv       Potential vorticity surface                       10-9 K m2 kg-1 s-1      WMO
    192         isothermZeroWetBulb   isot0wb
    200         entireAtmosphere      ea       Entire atmosphere (considered as a single layer)                          WMO, vertically integrated
                levelFreeConvection   lfc      as heightAboveGround in GRIB1
                levelNeutralBuoyancy  lnb      as heightAboveGround in GRIB1

    Harmonie GRIB1 code table 2 version 253 - Indicator of parameter

    Below the indicator of parameter code table for the Harmonie model. It is based on the WMO code table 2 version 3 with local parameters added. Parameter indicators 128-254 are reserved for originating center use. Parameter indicators 000-127 should not be altered. In HARMONIE, radiation fluxes are assumed positive downwards (against the recommendation by WMO).

    ParDescriptionSI Units
    000Reservedn/a
    001PressurePa
    002Pressure reduced to MSLPa
    003Pressure tendencyPa s-1
    004Potential vorticityK m2 kg-1 s-1
    005ICAO Standard Atmosphere reference heightm
    006Geopotentialm2 s-2
    007Geopotential heightgpm
    008Geometrical heightm
    009Standard deviation of heightm
    010Total ozoneDobson
    011TemperatureK
    012Virtual temperatureK
    013Potential temperatureK
    014Pseudo-adiabatic potential temperatureK
    015Maximum temperatureK
    016Minimum temperatureK
    017Dew-point temperatureK
    018Dew-point depression (or deficit)K
    019Lapse rateK m-1
    020Visibilitym
    021Radar spectra (1)-
    022Radar spectra (2)-
    023Radar spectra (3)-
    024Parcel lifted index (to 500 hPa)K
    025Temperature anomalyK
    026Pressure anomalyPa
    027Geopotential height anomalygpm
    028Wave spectra (1)-
    029Wave spectra (2)-
    030Wave spectra (3)-
    031Wind directionDegree true
    032Wind speedm s-1
    033u-component of windm s-1
    034v-component of windm s-1
    035Stream functionm2 s-1
    036Velocity potentialm2 s-1
    037Montgomery stream functionm2 s-1
    038Sigma coordinate vertical velocitys-1
    039Vertical velocityPa s-1
    040Vertical velocitym s-1
    041Absolute vorticitys-1
    042Absolute divergences-1
    043Relative vorticitys-1
    044Relative divergences-1
    045Vertical u-component shears-1
    046Vertical v-component shears-1
    047Direction of currentDegree true
    048Speed of currentm s-1
    049u-component of currentm s-1
    050v-component of currentm s-1
    051Specific humiditykg kg-1
    052Relative humidity%
    053Humidity mixing ratiokg kg-1
    054Precipitable waterkg m-2
    055Vapor pressurePa
    056Saturation deficitPa
    057Evaporationkg m-2
    058Cloud icekg m-2
    059Precipitation ratekg m-2 s-1
    060Thunderstorm probability%
    061Total precipitationkg m-2
    062Large scale precipitationkg m-2
    063Convective precipitationkg m-2
    064Snowfall rate water equivalentkg m-2 s-1
    065Water equivalent of accumulated snow depthkg m-2
    066Snow depthm
    067Mixed layer depthm
    068Transient thermocline depthm
    069Main thermocline depthm
    070Main thermocline anomalym
    071Total cloud cover%
    072Convective cloud cover%
    073Low cloud cover%
    074Medium cloud cover%
    075High cloud cover%
    076Cloud waterkg m-2
    077Best lifted index (to 500 hPa)K
    078Convective snowkg m-2
    079Large scale snowkg m-2
    080Water temperatureK
    081Land cover (1 = land, 0 = sea)Proportion
    082Deviation of sea level from meanm
    083Surface roughnessm
    084Albedo%
    085Soil temperatureK
    086Soil moisture contentkg m-2
    087Vegetation%
    088Salinitykg kg-1
    089Densitykg m-3
    090Water run-offkg m-2
    091Ice cover (1 = ice, 0 = no ice)Proportion
    092Ice thicknessm
    093Direction of ice driftDegree true
    094Speed of ice driftm s-1
    095u-component of ice driftm s-1
    096v-component of ice driftm s-1
    097Ice growth ratem s-1
    098Ice divergences-1
    099Snow meltkg m-2
    100Significant height of combined wind waves and swellm
    101Direction of wind wavesDegree true
    102Significant height of wind wavesm
    103Mean period of wind wavess
    104Direction of swell wavesDegree true
    105Significant height of swell wavesm
    106Mean period of swell wavess
    107Primary wave directionDegree true
    108Primary wave mean periods
    109Secondary wave directionDegree true
    110Secondary wave mean periods
    111Net short-wave radiation flux (surface)W m-2
    112Net long-wave radiation flux (surface)W m-2
    113Net short-wave radiation flux (top of atmosphere)W m-2
    114Net long-wave radiation flux (top of atmosphere)W m-2
    115Long-wave radiation fluxW m-2
    116Short-wave radiation fluxW m-2
    117Global radiation fluxW m-2
    118Brightness temperatureK
    119Radiance (with respect to wave number)W m-1 sr-1
    120Radiance (with respect to wave length)W m-3 sr-1
    121Latent heat fluxW m-2
    122Sensible heat fluxW m-2
    123Boundary layer dissipationW m-2
    124Momentum flux, u-componentN m-2
    125Momentum flux, v-componentN m-2
    126Wind mixing energyJ
    127Image data-
    128Analysed RMS of PHI (CANARI)m2 s-2
    129Forecasted RMS of PHI (CANARI)m2 s-2
    130SW net clear sky radW m-2
    131LW net clear sky radW m-2
    132Latent heat flux through evaporationW m-2
    133Mask of significant cloud amount0-1
    134Icing index version 2-
    135Icing indexCode table
    136Pseudo satellite image, cloud top temperature (infrared)K
    137Pseudo satellite image, water vapour brightness temperatureK
    138Pseudo satellite image, water vapour br. temp. + correction for cloudsK
    139Pseudo satellite image, cloud water reflectivity (visible)?
    140Direct normal irradianceJ m-2
    141Max icing growth index-
    142Level of max wind speedm
    143Level of max icingm
    144Precipitation TypeCode table
    145CAT index- / %
    146Bottom level of icingm
    147Top level of icingm
    148Helicopter Triggered Lightning Index-
    149Transmittance-
    150Level of max CAT indexm
    151Max CAT index-
    152Bottom level of CATm
    153Top level of CATm
    154Max Wind speedm s-1
    155Available#
    156Available#
    157Available#
    158Surface downward moon radiationW m-2
    159Brunt-Vaisala frequencys-1
    160CAPEJ kg-1
    161AROME hail diagnostic%
    162U-momentum of gusts out of the modelm s-1
    163V-momentum of gusts out of the modelm s-1
    164Max u-component in columnm s-1
    165Convective inhibition (CIN)J kg-1
    166MOCON out of the modelkg/kg s-1
    167Lifting condensation level (LCL)m
    168Level of free convection (LFC)m
    169Level of neutral buoyancy (LNB)m
    170Brightness temperature OZ clearK
    171Brightness temperature OZ cloudK
    172Brightness temperature IR clearK
    173Brightness temperature IR cloudK
    174Brightness temperature WV clearK
    175Brightness temperature WV cloudK
    176Virtual potential temperatureK
    177Max v-component in columnm s-1
    178Available#
    179Available#
    180Available#
    181Rainkg m-2
    182Stratiform Rainkg m-2
    183Convective Rainkg m-2
    184Snowkg m-2
    185Total solid precipitationkg m-2
    186Cloud basem
    187Cloud topm
    188Fraction of urban landProportion
    189Cloud base >3/8m
    190Snow AlbedoProportion
    191Snow densitykg/m3
    192Water on canopykg/m2
    193Soil icekg/m2
    194Available#
    195Gravity wave stress U-compN/m2
    196Gravity wave stress V-compN/m2
    197Available#
    198Available#
    199Vegetation type-
    200TKEm2 s-2
    201Graupelkg m-2
    202Stratiform Graupelkg m-2
    203Convective Graupelkg m-2
    204Hailkg m-2
    205Stratiform Hailkg m-2
    206Convective Hailkg m-2
    207Available#
    208Available#
    209Lightningflash h-1
    210Simulated reflectivitydBz
    211Wind power productionMW or MJ
    212Pressure departurePa
    213Vertical divergences-1
    214UD_OMEGAms-1?
    215DD_OMEGAms-1?
    216UDMESHFRAC-
    217DDMESHFRAC-
    218PSHICONVCL-
    219Surface albedo for non snow covered areasProportion
    220Standard deviation of orography * gm2 s-2
    221Anisotropy coeff of topography-
    222Direction of main axis of topographyrad
    223Roughness length of bare surface * gm2 s-2
    224Roughness length for vegetation * gm2 s-2
    225Fraction of clay within soilProportion
    226Fraction of sand within soilProportion
    227Maximum proportion of vegetationProportion
    228Gust wind speedm s-1
    229Albedo of bare groundProportion
    230Albedo of vegetationProportion
    231Stomatal minimum resistances/m
    232Leaf area indexm2/m2
    233Thetaprimwprim surface fluxKm/s
    234Dominant vegetation index-
    235Surface emissivity-
    236Maximum soil depthm
    237Soil depthm
    238Soil wetnesskg/m2
    239Thermal roughness length * gm2 s-2
    240Resistance to evapotranspirations/m
    241Minimum relative moisture at 2 meters%
    242Maximum relative moisture at 2 meters%
    243Duration of total precipitationss
    244Latent Heat SublimationW/m2
    245Water evaporationkg/m2
    246Snow sublimationkg/m2
    247Snow history???
    248A OZONEkg kg-1
    249B OZONEkg kg-1
    250C OZONEkg kg-1
    251Surface aerosol seakg kg-1
    252Surface aerosol landkg kg-1
    253Surface aerosol sootkg kg-1
    254Surface aerosol desertkg kg-1
    255Missing valuen/a
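    The numbering convention described above (WMO range left untouched, local range for originating-centre use) can be summarised as a small helper; this is an illustrative sketch, not part of any HARMONIE tool.

```python
# Classify a GRIB1 parameter indicator according to the convention above:
# 000 reserved, 001-127 WMO standard (must not be altered),
# 128-254 reserved for the originating centre, 255 missing value.

def parameter_class(par: int) -> str:
    if par == 0:
        return "reserved"
    if par == 255:
        return "missing value"
    if 1 <= par <= 127:
        return "WMO standard"
    if 128 <= par <= 254:
        return "local (originating centre)"
    raise ValueError(f"invalid GRIB1 parameter indicator: {par}")

print(parameter_class(11))   # WMO standard (Temperature)
print(parameter_class(181))  # local (originating centre) (Rain)
```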

    SURFEX output Harmonie GRIB1 code table 2 version 001

    Levels are used in the conversion of SURFEX output to GRIB to indicate tile/patch/type/level:

    leveldescription
    300Extra yet unknown SURFEX variables
    301Fraction of each vegetation types on PATCH 1
    302Fraction of each vegetation types on PATCH 2
    303Fraction of each vegetation types cy43 (ECOCLIMAP-SG)
    600Physiography fields?
    720Sea ice
    730Sea ice (TICE_LL)
    755Precip
    760Sea
    770in addition to FLake (or instead of it)
    780Flake
    790Patch (*_P fields)
    800ISBA
    810Gridpoint average
    820Surface boundary multi layer fields
    830ISBA - patch 1 (X001*, open land)
    840ISBA - patch 2 (X002*, forest)
    950Town energy balance model (TEB)
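    When post-processing SURFEX GRIB output, these level codes can serve as a tile lookup. A minimal sketch (the dictionary covers only a subset of the table above, and the names are ours):

```python
# Lookup for a subset of the SURFEX GRIB level codes listed above
# (Harmonie GRIB1 code table 2 version 001).

SURFEX_LEVEL_TILE = {
    300: "Extra yet unknown SURFEX variables",
    600: "Physiography fields",
    720: "Sea ice",
    760: "Sea",
    780: "FLake",
    800: "ISBA",
    810: "Gridpoint average",
    830: "ISBA - patch 1 (open land)",
    840: "ISBA - patch 2 (forest)",
    950: "Town energy balance model (TEB)",
}

def tile_for_level(level: int) -> str:
    return SURFEX_LEVEL_TILE.get(level, f"unknown SURFEX level {level}")

print(tile_for_level(800))  # ISBA
```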

    A small selection of fields available in the SURFEX output files is shown below.

    FA nameshortNameNCnamelvTiOPlevsTunitsdescription
    FRAC_SEA#sftofhag32300ins0-1Fraction of sea
    FRAC_WATER#sftlafhag33300ins0-1Fraction of water
    FRAC_NATURE#sftnfhag34300ins0-1Fraction of nature
    FRAC_TOWN#sfturfhag35300ins0-1Fraction of town
    COVER001#lsm10insLAND SEA MASK
    COVER002-COVER243##002-2430insECOCLIMAP I cover types
    COVER255##2550insECOCLIMAP I MY_COVER type
    COVER301-COVER573##001-254 & 001-0190insECOCLIMAP II cover types
    ZS#oroghag80insmOro hgt.
    SST#tosms110insKSST
    SIC#siconcams910ins0-1SIC
    T2M_SEA#tas_seahag11760insKT2m sea
    Q2M_SEA#huss_seahag51760inskg kg-1Q2m sea
    MER10M_SEA#vas_seahag34760insm s-1V10m sea
    ZON10M_SEA#uas_seahag33760insm s-1U10m sea
    T2M_WAT#tas_waterhag11772insKT2m water
    Q2M_WAT#huss_waterhag51770inskg kg-1Q2m water
    MER10M_WAT#vas_waterhag34770insm s-1V10m water
    ZON10M_WAT#uas_waterhag33770insm s-1U10m water
    DSNTISBA#sndhag660insmSnow depth
    WSNTISBA#snwhag130inskg m-2Total snow reservoir
    T2M_ISBA#tas_naturehag11802insKT2m isba
    Q2M_ISBA#huss_naturehag51802inskg kg-1Q2m isba
    X001T2M_P#tashag11832insKT2m of patch 1
    X002T2M_P#tashag11842insKT2m of patch 2
    T2M_TEB#tas_townhag11950insKT2m town
    T2MMAX_TEB#tasmax_townhag15950maxKMax Temp for town
    T2MMIN_TEB#tasmin_townhag16950minKMin Temp for town
    TGL#tg_LLLhag11800+insKTemperature of soil layer L(isba)
    WGL#wsa_LLLhag86800+insm3 m-3Liquid volumetric water content of soil layer L
    WGIL#isa_LLLhag193800+insm3 m-3Frozen volumetric water content of soil layer L
    WR#wrhag12800inskg m-2Liquid water retained by foliage (isba)
    DGL#dsoil_LLLhag23300insmSoil depth of soil layer L

    Harmonie GRIB1 code table 2 version 210

    Used for aerosol fields

    GRIB

    NetCDF

    diff --git a/previews/PR1153/ForecastModel/SingleColumnModel/Forcing/index.html b/previews/PR1153/ForecastModel/SingleColumnModel/Forcing/index.html index 81dc3cddc..eeb9853e8 100644 --- a/previews/PR1153/ForecastModel/SingleColumnModel/Forcing/index.html +++ b/previews/PR1153/ForecastModel/SingleColumnModel/Forcing/index.html @@ -48,4 +48,4 @@ NL_T_NUDG_TIME(3) = 43200 NL_T_NUDG_TIME(4) = 64800 NL_T_NUDG_TIME(5) = 86400 -/

    and now you cannot run MUSC for more than 1 day. If the interval between the forcing profiles is constant, you can use *_FREQ instead of TIME.

    diff --git a/previews/PR1153/ForecastModel/SingleColumnModel/MUSC/index.html b/previews/PR1153/ForecastModel/SingleColumnModel/MUSC/index.html index 51ef947fb..81a53eac5 100644 --- a/previews/PR1153/ForecastModel/SingleColumnModel/MUSC/index.html +++ b/previews/PR1153/ForecastModel/SingleColumnModel/MUSC/index.html @@ -70,4 +70,4 @@ IF(ABS(ZVBH(JFLEV)-PVBH(JFLEV)) > PEPS) THEN WRITE(KULOUT,*) ' VERTICAL FUNCTION *B* MISMATCH ON ',&

    Then you are ready to compile:

    When starting the MUSC run, add the PATH to mpirun and the libraries:

    export PATH=$PATH:/usr/lib64/openmpi/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib
    ./musc_run.sh [...]

    MUSC FAQ

    1. If there is an error, what files do I look in? NODE.001_01 and lola in your output directory.

    2. How do I handle the output files? The output files are of the form Out.XXX.XXXX and appear in your output directory. They are in lfa format and can be handled using the ddh tools. See the bash script musc_plot1Dts.sh for ideas. There are also ICM*lfa output files that are handy for plotting profiles - use musc_convertICM2ascii.sh to convert these files to ASCII and musc_plot_profiles_ICMfiles.sh to plot some profiles, e.g. TKE, cloud liquid etc.

    3. I ran a different idealised case but did not get different results. Why? The likely reason is that you did not delete the namelists from your experiment directory. If the namelists are there, the musc_run.sh script neither creates them nor copies them from the repository.

    4. How do I create a new idealised case? This is not straightforward, but the following was used to create the ASTEX cases in cy43 using info from cy38: https://www.overleaf.com/7513443985ckqvfdcphnng

    5. How can I access a list of MUSC output parameters? Ensure you have the ddhtoolbox compiled. Then use lfaminm $file on any of your output files and it will show what is there. To look at a particular variable, try lfac $file $parameter, e.g. lfac $file PTS (for surface temperature). You can redirect the values to an ASCII file for ease of use (e.g. lfac $file PTS > $ASCIIfile).

    6. Is MUSC similar to the full 3D model version - is the physics the same? Yes; if you check out develop, then you have MUSC up to date with that branch.

    7. Do I need to recompile the model if I modify code? Yes. If you modify code in a single file, you must recompile but should not delete the previously compiled model first; this recompiles relatively quickly. If you modify code in multiple files and change which variables are passed between them, you must delete the previously compiled model and recompile from scratch, which takes longer.

    MUSC variable names

    A list of variable names found in the MUSC lfa output files can be found here. Please note that this is not yet a complete list of MUSC output parameters. The variables in regular ICMSH... fa output are documented here

    Outstanding Issues

    1. ARMCU and Jenny's cases run without surface physics, radiation, etc., and hence return NaNs in apl_arome. To circumvent this on ECMWF, we needed to compile less strictly. This needs to be investigated further.
    2. The ASTEX cases currently do not run on ECMWF but work perfectly at Met Eireann - debugging needed.

    MUSC using EMS

    These instructions have moved to MUSC EMS


    diff --git a/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_EMS/index.html b/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_EMS/index.html index 779eff712..ee68193ca 100644 --- a/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_EMS/index.html +++ b/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_EMS/index.html @@ -55,4 +55,4 @@ mkdir config cp $HOME/SCM-atlas_git/ewhelan/hirlam/examples/config/config_HARM.py config/ ### edit config/config_HARM.py -run_atlas1d.py -config config/config_HARM.py
    diff --git a/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_vars/index.html b/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_vars/index.html index aedf46d52..4ef664d7a 100644 --- a/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_vars/index.html +++ b/previews/PR1153/ForecastModel/SingleColumnModel/MUSC_vars/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Variable names for MUSC output

    List of parameters copied from variable_list.csv

    short namevariable namelong nameunit
    lwdwPFRTHDSlong wave downward radiation at surfaceW/m2
    lwupPFRTHlong wave upward radiation at surfaceW/m2
    swdwPFRSODSshort wave downward radiation at surfaceW/m2
    swupPFRSOshort wave upward radiation at surfaceW/m2
    shfPFCSsensible heat fluxW/m2
    lhfPFCLNlatent heat fluxW/m2
    evapZLH_fluxevaporation+sublimation fluxmm/day
    evap2PFEVLevaporation+sublimation fluxmm/day
    ustarZUSTARfriction velocitym/s
    rainPREC_TOTprecipitation (liq+sol) ratemm/day
    psurfPAPRSsurface PressurePa
    hpblPCLPHboundary layer heightm
    hpbl2KCLPHboundary layer heightm
    tsurfPTSsurface temperatureK
    t2mPTCLS2 m temperatureK
    q2mPQCLS2 m specific humidityKg/Kg
    rh2mPRHCLS2 m relative humidity[0-100]
    u10mPUCLS10m u-componentm/s
    v10mPVCLS10m v-componentm/s
    t3mPT_03temperature at 3.30 meter above the surfaceK
    q3mPQ_03specific humidity at 3.30 meterKg/Kg
    rh3mPRH_03relative humidity at 3.30 meter[0-100]
    u3mPU_03u-component at 3.30 meterm/s
    v3mPV_03v-component at 3.30 meterm/s
    etc
    t42mPT_42temperature at 41.90 meter above the surfaceK
    q42mPQ_42specific humidity at 41.90 meterKg/Kg
    rh42mPRH_42relative humidity at 41.90 meter[0-100]
    u42mPU_42u-component at 41.90 meterm/s
    v42mPV_42v-component at 41.90 meterm/s
    ccPCLCTtotal cloud cover fraction0 1
    tsurfPTSSurface temperatureK
    albPALBHAlbedo[0-1]
    alb_surfTALB_ISBAsurface albedo-
    z0mPGZ0Momentum roughness lengthm
    z0hPGZ0HHeat roughness lengthm
    emisPEMISsurface emissivity[0-1]
    emisEMISsurface emissivity[0-1]
    zfPAPHIFAltitude of layer mid-points at t=0 (full-level)m
    pfPAPRSFPressure of layer mid-points at t=0 (full-level)Pa
    tPTtemperatureK
    thTHETApotential temperatureK
    qPQspecific humiditykg/kg
    uPUzonal wind componentm/s
    vPVmeridional wind componentm/s
    ugeoZFUGEOu-component geostrophic windm/s
    vgeoZFVGEOv-component geostrophic windm/s
    dudt_lsZFUu-component advectionm/s/s
    dvdt_lsZFVv-component advectionm/s/s
    dtdt_lsZFTtemperature advectionK/s
    dqdt_lsZFQmoisture advectionKg/Kg/s
    wZWvertical movementm/s
    zhhPAPHIheight of half levelm
    phhPAPRSpressure of half levelPa
    kmZKMEddy diffusivity momentumm2/s
    khZKHEddy diffusivity heatm2/s
    mfZMF_shalmassfluxKg/m2/s
    dT_dt_radZDTRADtemperature tendency from radiationK/d
    TKEPECTturbulent kinetic energy$m^2/s^2$
    shearZPRDYshear production$m^2/s^3$
    buoyZPRTHbuoyancy production$m^2/s^3$
    transZDIFFtotal transport$m^2/s^3$
    dissiZDISSdissipation$m^2/s^3$
    diff --git a/previews/PR1153/ForecastModel/WindFarms/index.html b/previews/PR1153/ForecastModel/WindFarms/index.html index cd87fcd59..030cd7940 100644 --- a/previews/PR1153/ForecastModel/WindFarms/index.html +++ b/previews/PR1153/ForecastModel/WindFarms/index.html @@ -141,4 +141,4 @@ editionNumber = 2 ; interpretationOfNumberOfPoints = 0 ; subCentre = 255 ; - }

    For both GRIB 1 and GRIB 2:

    1. Wind power production, accumulated:

    2. Wind power production, accumulated:


    diff --git a/previews/PR1153/Observations/Aeolus/index.html b/previews/PR1153/Observations/Aeolus/index.html index c07067554..19ae6f701 100644 --- a/previews/PR1153/Observations/Aeolus/index.html +++ b/previews/PR1153/Observations/Aeolus/index.html @@ -6,4 +6,4 @@

    Aeolus, HLOS wind

    short overview

    Aeolus was an ESA Earth Explorer mission, carrying a Doppler wind lidar that measured the vertical profile of winds. Aeolus was launched in August 2018 and safely re-entered over Antarctica in July 2023. The period of usable data is from 31 August 2018 to 30 April 2023.

    Aeolus winds come in two different versions, Mie and Rayleigh. The Mie winds are measured by observing the scattering by cloud droplets and aerosols and are only available in optically thin and medium-thin clouds. The horizontal resolution of Mie profiles is 10 km. Rayleigh winds are obtained by measuring the scattering by air molecules in clear air, and have a lower horizontal resolution of 80 km.

    Since Aeolus was a non-operational mission, the data need to be downloaded manually from, e.g. ESA's Earth Observation portal, https://aeolus-ds.eo.esa.int/oads/access/ (a registration is needed to download the data).

    The data from Aeolus are processed by the Aeolus DISC team, and the processing has been continuously improved throughout the mission lifetime. A final version, covering the full Aeolus data set, will be released in 2028 (using baseline 18; the operational baseline at the time of the satellite's reentry was baseline 13). More details can be found here.

    Harmonie changes

    To use Aeolus winds, activate them in scr/include.ass by setting LIDAR_OBS to 1

    export LIDAR_OBS=1             # LIDAR aeolus hlos wind
     [[  $LIDAR_OBS -eq 1  ]] && types_BASE="$types_BASE lidar"

    The optimal settings for the observation errors of Aeolus data are still an open question. The error estimates are reported in the .bufr file which contains the L2B winds, and the limit on when to allow them can be adjusted in src/odb/pandor/module/bator_decodbufr_mod.F90

    The main ones to be careful with are the upper error limits. The recommended values at the time of writing are:

    REAL, PARAMETER    :: error_est_threshold_Mie = 4.5  ! m/s
    REAL, PARAMETER    :: error_est_threshold_Ray = 8.   ! m/s
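    In Python terms, the screening these parameters imply could be sketched as follows (a hedged sketch under stated assumptions: the function name and dictionary are ours; the thresholds, in m/s, come from the Fortran above):

```python
# Reject an HLOS wind if its reported error estimate exceeds the
# channel-dependent threshold (values in m/s, as in the Fortran above).

ERROR_EST_THRESHOLD = {"mie": 4.5, "rayleigh": 8.0}

def accept_hlos_wind(channel: str, error_estimate: float) -> bool:
    """Return True if the wind passes the upper error limit for its channel."""
    return error_estimate <= ERROR_EST_THRESHOLD[channel]

print(accept_hlos_wind("mie", 3.2))       # True
print(accept_hlos_wind("rayleigh", 9.1))  # False
```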

    Future updates

    When the follow-on mission, Aeolus-2 (ESA's name) or EPS-Aeolus (EUMETSAT's name), launches in 2032, these settings will probably have to be revised. The Aeolus follow-on mission will carry a revised version of the previous instrument, providing observations with higher resolution.


    diff --git a/previews/PR1153/Observations/Amv/index.html b/previews/PR1153/Observations/Amv/index.html index 9b9998229..25b23e26f 100644 --- a/previews/PR1153/Observations/Amv/index.html +++ b/previews/PR1153/Observations/Amv/index.html @@ -36,4 +36,4 @@ values 24 008012 LAND/SEA QUALIFIER values 25 007024 SATELLITE ZENITH ANGLE values 211 033007 % CONFIDENCE -END geowind

    Please be reminded that the processing of data from MARS has not yet been tested. From 43h2.1, we have all the necessary content of the param file for processing both GEOW and POLW in const/bator_param/param_bator.cfg.geow.${GEOW_SOURCE/POLW_SOURCE}

    BATOR namelist

    Depending on the satellite and channel you may have to add entries to the NADIRS namelist in the Bator script like the following:

    TS_GEOWIND(isatid)%T_SELECT%LCANAL(ichanal)=.TRUE.,

    Source code

    The reading of BUFR AMVs is taken care of by src/odb/pandor/module/bator_decodbufr_mod.F90. This subroutine reads the following parameters defined in the param.cfg file:

    NameDescription
    Date and timederived from the tconfig(004001) - assumes month, day, hour and minute are in consecutive entries in the values array
    Locationlatitude and longitude are read from tconfig(005001) and tconfig(006001)
    Satellitethe satellite identifier is read from tconfig(001007)
    Origin. centerthe originating center (of the AMV) is read from tconfig(001031)
    Compu. methodthe wind computation method (type of channel + cloudy/clear if WV) is read from tconfig(002023)
    Derivation methodthe height assignment method is read from tconfig(002163) and the tracking method from tconfig(002164)
    Channel frequencythe centre frequency of the satellite channel is read from tconfig(002153)
    Height (pressure)the height of the AMV observation is read from tconfig(007004)
    Windthe wind speed and direction are read from tconfig(011002) and tconfig(011001)
    Temperaturethe coldest cluster temperature is read from tconfig(012071)
    FG QIThe QI (including FG consistency) for MSG AMVs is read from the first location where descriptor 033007 appears
    noFG-QIThe FG-independent QI for MSG AMVs is read from the first location where 033007 appears + offset(1)=24
    Sat zenith anglethe satellite zenith angle is read from tconfig(007024)
    Land/sea/coasta land/sea/coast qualifier is read from tconfig(008012)
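    For reference, the descriptor-to-quantity relationship in the table above can be summarised as a lookup (illustrative only; the dictionary name is ours, and the Fortran code indexes tconfig with these descriptors directly):

```python
# BUFR descriptors read for AMVs by bator_decodbufr_mod.F90,
# as listed in the table above.

AMV_DESCRIPTORS = {
    "001007": "satellite identifier",
    "001031": "originating centre",
    "002023": "wind computation method",
    "002153": "satellite channel centre frequency",
    "002163": "height assignment method",
    "002164": "tracking method",
    "005001": "latitude",
    "006001": "longitude",
    "007004": "pressure (height of AMV)",
    "007024": "satellite zenith angle",
    "008012": "land/sea/coast qualifier",
    "011001": "wind direction",
    "011002": "wind speed",
    "012071": "coldest cluster temperature",
    "033007": "per cent confidence (QI)",
}

print(AMV_DESCRIPTORS["011002"])  # wind speed
```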

    The geowind routine was adapted to handle MSG AMVs from MARS, and its module src/odb/pandor/module/bator_decodbufr_mod.F90 was uploaded to the trunk (Mar 2017).

    Blacklist

    The selection/blacklist of AMVs according to channel, underlying sea/land, QI, etc. is done in src/blacklist/mf_blacklist.b, section - SATOB CONSTANT DATA SELECTION -.

    +END geowind

    Please be reminded that the processing of data from MARS was not yet tested. From 43h2.1, we have the all necessary content of the param file for processing of both GEOW and POLW in const/bator_param/param_bator.cfg.geow.${GEOW_SOURCE/POLW_SOURCE}

    BATOR namelist

    Depending on the satellite and channel you may have to add entries to the NADIRS namelist in the Bator script like the following:

    TS_GEOWIND(isatid)%T_SELECT%LCANAL(ichanal)=.TRUE.,

    Source code

    The reading of BUFR AMVs is taken care of by src/odb/pandor/module/bator_decodbufr_mod.F90. This subroutine reads the following parameters defined in the param.cfg file:

    Name | Description
    Date and time | derived from tconfig(004001); assumes month, day, hour and minute are in consecutive entries in the values array
    Location | latitude and longitude are read from tconfig(005001) and tconfig(006001)
    Satellite | the satellite identifier is read from tconfig(001007)
    Origin. center | the originating center (of the AMV) is read from tconfig(001031)
    Compu. method | the wind computation method (type of channel + cloudy/clear if WV) is read from tconfig(002023)
    Derivation method | the height assignment method is read from tconfig(002163) and the tracking method from tconfig(002164)
    Channel frequency | the centre frequency of the satellite channel is read from tconfig(002153)
    Height (pressure) | the height of the AMV observation is read from tconfig(007004)
    Wind | the wind speed and direction are read from tconfig(011002) and tconfig(011001)
    Temperature | the coldest cluster temperature is read from tconfig(012071)
    FG QI | the QI (including FG consistency) for MSG AMVs is read from the first location where descriptor 033007 appears
    noFG QI | the FG-independent QI for MSG AMVs is read from the first location where 033007 appears + offset(1)=24
    Sat zenith angle | the satellite zenith angle is read from tconfig(007024)
    Land/sea/coast | a land/sea/coast qualifier is read from tconfig(008012)

    The geowind routine was adapted to handle MSG AMVs from MARS, and its module src/odb/pandor/module/bator_decodbufr_mod.F90 was uploaded to the trunk (March 2017).

    Blacklist

    The selection/blacklist of AMVs according to channel, underlying sea/land, QI, etc. is done in src/blacklist/mf_blacklist.b, section - SATOB CONSTANT DATA SELECTION -.

    diff --git a/previews/PR1153/Observations/Ascat/index.html b/previews/PR1153/Observations/Ascat/index.html
    diff --git a/previews/PR1153/Observations/Atovs/index.html b/previews/PR1153/Observations/Atovs/index.html

      cp $HM_LIB/const/bias_corr/${DOMAIN}/VARBC.cycle.$HH ${DLOCVARBC}/VARBC.cycle || \
        { echo "Could not find cold start VARBC data VARBC.cycle.$EMONTH.$HH" ; exit 1 ; }
      ls -lrt ${DLOCVARBC}
      fi

    The only difference is that all the VarBC files are now stored under a ${DOMAIN} directory. This keeps our system up to date and ready for all known model domains. Please send your VarBC files to the system administrators.

    For operational implementation

    The setup is much easier. Name the VARBC.cycle files VARBC.cycle.${HH} and put them in $ARCHIVE_ROOT/VARBC_latest, which you need to create.
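    A minimal sketch of this naming step, assuming four analysis hours; all paths and file contents below are illustrative stand-ins:

```shell
# Sketch: stage VarBC files as VARBC.cycle.${HH} under $ARCHIVE_ROOT/VARBC_latest.
# ARCHIVE_ROOT and the source directory are illustrative, not the real locations.
ARCHIVE_ROOT=$(mktemp -d)   # in a real setup this is your archive root
SRCDIR=$(mktemp -d)         # stands in for wherever your VarBC files come from
mkdir -p "$ARCHIVE_ROOT/VARBC_latest"
for HH in 00 06 12 18; do
  # dummy stand-in files; normally these come from your assimilation runs
  echo "varbc data for $HH" > "$SRCDIR/VARBC.cycle.$HH"
  cp "$SRCDIR/VARBC.cycle.$HH" "$ARCHIVE_ROOT/VARBC_latest/VARBC.cycle.$HH"
done
ls "$ARCHIVE_ROOT/VARBC_latest"
```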

    To check that you have done things right:

    If the test passes, your ATOVS implementation is ready. Congratulations!


    diff --git a/previews/PR1153/Observations/Bator/index.html b/previews/PR1153/Observations/Bator/index.html

      #-- create IOASSIGN file for the given sub-base
      cd ${d_DB}/ECMA.${base}
      export ODB_IOASSIGN_MAXPROC=${NPOOLS}
      $HM_LIB/scr/create_ioassign -l "ECMA" -n ${BATOR_NBPOOL}

    where $base is the ODB sub-base; for example, $base can be conv (conventional data), amsu (ATOVS AMSU-A, AMSU-B/MHS), sev (SEVIRI), iasi (IASI) or radarv (radar). Important: if you would like more bases, remember to take this into account when generating the "batormap" file, which tells BATOR which observations to put in each base.

    Blacklisting

    To avoid model forecast degradation, two files can be used to blacklist or exclude data from the analysis. They are also used to blacklist observations that the model cannot deal with because they are not representative (orography, breeze effects, ...). The reason for this method of 'blacklisting', built into Bator, existing alongside hirlam_blacklist.b (built into Screening), is to allow simple and quick changes in the operational suite, in particular without rebuilding the binary.

    The selection of an observation to be 'blacklisted' can be done using multiple criteria (SID/STATID, obstype, codetype, varno, channel/level, production centre, producing sub-centre, affected network(s), cycle (prod/assim), ...).

    LISTE_LOC

    The LISTE_LOC file can be used to blacklist satellite data and also for other data by type and / or subtype for a given parameter (described by varno or not). The contents of the LISTE_LOC are as follows:

    Column | Description | Format
    1 | Type of action: N: blacklist, E: exclude | a1
    2 | The observation type (obstype@hdr) | i3
    3 | The observation code-type (codetype@hdr) | i4
    4 | The satellite ID with leading zeros (satid@sat) | a9
    5 | The centre that produced the satellite data | i4
    6 | The parameter ID (varno@body) or the satellite sensor ID (sensor@hdr) | i4
    7 | Optional keywords: ZONx4, TOVSn, PPPPn, PROFn |

    TOVSn C1 C2 ... Cn

    PPPPn P1 P2 ... Pn

    PROFn P1a P2 ... Pn-1 I1 I2 ... In-1

    ZONx4 latmin latmax lonmin lonmax
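    For illustration only, a LISTE_LOC entry following the column layout above might look like this; all values and the exact fixed-column widths are hypothetical and must be checked against a real LISTE_LOC file:

```
N   7  210 000000004    0    3 TOVS3 7 8 9
```

    i.e. blacklist (N) obstype 7, codetype 210, satellite ID 000000004, any producing centre, sensor ID 3, restricted to channels 7, 8 and 9 via the TOVS3 keyword.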

    LISTE_NOIRE_DIAP

    The LISTE_NOIRE_DIAP (const/bator_liste) can be used to blacklist conventional observations by station identifier. The contents of the LISTE_NOIRE_DIAP are as follows:

    Column | Description | Format
    1 | Observation type (obstype@hdr) | i2
    2 | Observation name | a10
    3 | Observation codetype (codetype@hdr) | i3
    4 | Parameter ID (varno@body) | i3
    5 | Station ID (statid@hdr) | a8
    6 | Start date of blacklisting (yyyymmdd) | a8
    7 | Optional layer blacklisting (PROFn) | a180

    PROFn P1a P2 ... Pn-1 I1 I2 ... In
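    For illustration only, an entry following the column layout above might look like this; the station ID, codetype and date are hypothetical, and the exact fixed-column widths must be checked against a real LISTE_NOIRE_DIAP file:

```
 1 SYNOP       14  39 06180    20120101
```

    i.e. blacklist the 2 m temperature (varno 39) from SYNOP station 06180, codetype 14, from 1 January 2012 onwards.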

    Particularities: the blacklisting of certain parameters involves the automatic blacklisting of other parameters, as summarized in the table below:

    obstype | specified parameter | blacklisted parameters
    SYNOP | 39 (t2) | 39 (t2), 58 (rh2), 7 (q)
    SYNOP | 58 (rh2) | 58 (rh2), 7 (q)
    TEMP | 1 (z) | 1 (z), 29 (rh), 2 (t), 59 (td), 7 (q)
    TEMP | 2 (t) | 2 (t), 29 (rh), 7 (q)
    TEMP | 29 (rh) | 29 (rh), 7 (q)
    diff --git a/previews/PR1153/Observations/Cope/index.html b/previews/PR1153/Observations/Cope/index.html

      make install

    COPE in HARMONIE system

    The use of COPE in HARMONIE relies on ODB-API, b2o and COPE itself.

    export COPE_DEFINITIONS_PATH=${COPE_DIR}/share/cope
     export ODB_SCHEMA_FILE=${B2O_DIR}/share/b2o/ECMA.sch
     export ODB_CODE_MAPPINGS=${B2O_DIR}/share/b2o/odb_code_mappings.dat
     export ODBCODEMAPPINGS=${B2O_DIR}/share/b2o/odb_code_mappings.dat

    diff --git a/previews/PR1153/Observations/GNSS/index.html b/previews/PR1153/Observations/GNSS/index.html

    GNSS ZTD observations

    Introduction

    The NRT GNSS delay data contain information about the amount of water vapour above the GNSS sites. The aim of the European E-GVAP programme is to provide its EUMETNET members with European GNSS delay and water vapour estimates for operational meteorology in near real time. Currently, the E-GVAP network consists of more than 1500 GNSS sites.

    • E-GVAP Programme here

    GNSS ZTD data

    Raw data from GNSS sites are collected by a number of GNSS analysis centres, which process the data to estimate the Zenith Total Delays (ZTD) and other parameters. The ZTDs are then forwarded to a data server for distribution to meteorological institutes. The observations are currently distributed by the Met Office in two different formats: BUFR, distributed via the GTS to the meteorological centres, and ASCII, which may be downloaded via FTP.

    Preprocessing the GNSS ZTD data

    The preprocessing of these data is done locally and depends on whether you want them in BUFR or ASCII format. The ASCII option needs a local script to fetch the files from the Met Office server and transform them from the COST format (E-GVAP) into OBSOUL format. (For this case there is an optional script, GNSStoOBSOUL, in the Harmonie scr directory that transforms the ASCII files into OBSOUL format.)

    Apart from the preprocessing, a white list of the sites to be assimilated in your domain is needed. Each entry contains the values:

       statid lat lon alt dts bias sd obserr

    where statid is the name of the site (NNNNPPPP: NNNN = site, PPPP = processing centre), dts is the interval in minutes between observations, sd is the standard deviation of that station and obserr is the observation error. These values are expected to have been calculated before launching the experiment.
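    An illustrative white-list entry with this layout might look as follows; the site name, coordinates, bias, standard deviation and observation error are all hypothetical, with the ZTD-related values in metres:

```
SODAMETO   67.37   26.63  179.0  15  0.005  0.010  0.012
```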

    Harmonie changes to assimilate GNSS ZTD data

    scr/

    • Bator and Fetch_assim_data contain the white list path.
    • Oulan: contains the white list and GNSS observation file paths, and concatenates the GNSS observations to the rest of the conventional observation file.
    • include.ass: this script has two options for GNSS bias correction: static bias correction (LSTATIC_BIAS) or variational bias correction (LVARBC_GNSS). In the first case, a fixed bias value for each site is read from the white list and subtracted from the corresponding observation value. In the second case (VarBC), the cold start option also needs to be set in this script.
      export GNSS_OBS=1            #GNSS
       export LSTATIC_BIAS=F        #Swich for bias correction or not,(T|F)
       export LVARBC_GNSS=T         #Swich for GNSS varbc
     export VARBC_COLD_START=yes  #yes/no

    nam/: the white list should be placed here, called list.gpssol.201512 for example.

    /src/arpifs/obs_preproc/

    • redgps.F90: this routine is where the horizontal thinning is done (CY40), so the thinning distance can be selected here.

    /src/blacklist/

    • mf_blacklist.b: here it is possible to blacklist the GNSS observations in order to calculate the VarBC coefficients. This can be done by setting the apdss variable to experimental.
    diff --git a/previews/PR1153/Observations/Iasi/index.html b/previews/PR1153/Observations/Iasi/index.html

      /

    Here we specify a list of 145 channels to be included in "band 1" of the cloud detection, i.e., in the main cloud detection channel band. The setup of the cloud detection involves not just the channel list but several additional tuning parameters that can be modified to make the screening more or less conservative. The default settings are specified in src/arpifs/obs_preproc/cloud_detect_setup.F90. A comprehensive description of the cloud detection scheme, including explanations of the various tuning parameter values, is given at the NWPSAF web site https://nwp-saf.eumetsat.int/site/software/aerosol-and-cloud-detection/documentation/.

    The log file of the Screening task will indicate whether the formatting of the namelist file is appropriate:

     READING CLOUD DETECTION FILE FOR IASI
      IASI  CLOUD DETECTION FILE READ OK

    In case of an error, the following is printed instead:

     READING CLOUD DETECTION FILE FOR IASI
      PROBLEM READING IASI CLOUD DETECTION FILE: Using Default Values

    The third possibility is that the namelist file does not appear in the working directory, in which case the printout is:

     READING CLOUD DETECTION FILE FOR IASI
     NO IASI  CLOUD DETECTION FILE : Using Default Values

    Please note that the use of the "Default Values" is generally not a desired outcome. This is because many of the cloud detection channels in the default list (see src/arpifs/obs_preproc/cloud_detect_setup.F90) are sensitive to the higher stratosphere and may therefore be severely affected by the relatively low model top of limited-area HARMONIE systems.

    References:

    McNally, AP, and PD Watts, 2003: A cloud detection algorithm for high-spectral-resolution infrared sounders. Quarterly Journal of the Royal Meteorological Society, 129, 3411-3423, doi:10.1256/qj.02.208.

    diff --git a/previews/PR1153/Observations/Modes/index.html b/previews/PR1153/Observations/Modes/index.html

      END

    Processing using Oulan

    The processing of Mode-S EHS BUFR using Oulan is controlled by the following namelist entry in scr/Oulan:

    LMODES=.FALSE.

    Thinning of Mode-S

    Thinning of a bufr file

    A collection of Python scripts that directly thin the Mode-S CSV and BUFR files is available at https://gitlab.com/haandes/emaddc-public.

    For example, the emaddcc_thinning4.py script works on the large Mode-S_EMADDC_KNMI_oper_${DTG}.bufr file and thins the data in four dimensions: horizontally, vertically, and in observation time (keeping the observations closest to analysis time). emaddcc_thinning4.py currently assumes valid temperature and wind observations at the same time, and fixed vertical thinning intervals of:

    [300, 300, 600, 1000] m 

    which correspond to the heights of the lowest, second-lowest and third-lowest boxes and of all boxes above. The horizontal box width is variable; in the following example it is 40 km.
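    Under one plausible reading of that interval list (boxes of 300 m, 300 m and 600 m for the three lowest layers, then 1000 m slabs above), the mapping from observation height to vertical box index can be sketched as follows; the function name and the treatment of the upper boxes are assumptions, not taken from emaddcc_thinning4.py itself:

```shell
# Sketch: map an observation height (m) to a vertical thinning box index,
# assuming the fixed interval list [300, 300, 600, 1000] m means
# box 0: 0-300 m, box 1: 300-600 m, box 2: 600-1200 m,
# and 1000 m slabs for every box above 1200 m (an assumed interpretation).
box_index() {
  awk -v h="$1" 'BEGIN {
    if (h < 300)       print 0
    else if (h < 600)  print 1
    else if (h < 1200) print 2
    else               print 3 + int((h - 1200) / 1000)
  }'
}
box_index 250     # lowest box
box_index 5000    # a slab well above the third box
```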

    The .py script is triggered in scr/Prepare_ob, with:

      nMsgs=`bufr_count $OBDIR/Mode-S_EMADDC_KNMI_oper_${DTG}.bufr`
       time python3 $HM_LIB/scr/emaddcc_thinning4.py --infile $OBDIR/Mode-S_EMADDC_KNMI_oper_${DTG}.bufr --box_width 40 --DTG $DTG --nMsgs $nMsgs --outfile emaddcc_thinned.bufr
      cat emaddcc_thinned.bufr  /dev/null >> $BUFRFILE

    It takes about 1:35 min on Atos-Bologna and reduces the Mode-S data volume by a factor of 4-5.

    diff --git a/previews/PR1153/Observations/ObservationData/index.html b/previews/PR1153/Observations/ObservationData/index.html

      EASTEC=$( tail -1 foo | head -1 | sed 's/ //g' )
      NORTHEC=$( tail -2 foo | head -1 | sed 's/ //g' )
      WESTEC=$( tail -3 foo | head -1 | sed 's/ //g' )
      SOUTHEC=$( tail -4 foo | head -1 | sed 's/ //g' )

    LOCAL

    Otherwise, this step consists of fetching (or waiting for) the observations stored in $OBDIR, defined in ecf/config_exp.h. In that case one can use the "cat" command to merge different observations into one BUFR file, ob${DTG}. In general, HIRLAM services are adopting SAPP, ECMWF's scalable acquisition and pre-processing system, to process (conventional) GTS reports and other observational data for use in operational NWP. SAPP produces BUFR encoded in the same way as the observational BUFR data available in the MARS archive.
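    A minimal sketch of the "cat" merging step, with dummy stand-in files (real BUFR files from $OBDIR would be used instead):

```shell
# Sketch: merge per-source observation files into a single ob${DTG} file.
# OBDIR, the DTG value and the input file names are illustrative stand-ins.
DTG=2024112700
OBDIR=$(mktemp -d)
printf 'BUFRsynop' > "$OBDIR/synop.bufr"   # dummy contents, not real BUFR
printf 'BUFRtemp'  > "$OBDIR/temp.bufr"
cat "$OBDIR/synop.bufr" "$OBDIR/temp.bufr" > "$OBDIR/ob$DTG"
ls -l "$OBDIR/ob$DTG"
```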

    diff --git a/previews/PR1153/Observations/ObservationPreprocessing/index.html b/previews/PR1153/Observations/ObservationPreprocessing/index.html

    HARMONIE Observation Preprocessing

    Introduction

    The following figure shows different schematic steps in the HARMONIE data assimilation system. It is worth mentioning some differences between the observation pre-processing systems used by ECMWF, Météo France, and HIRLAM. Some of these differences are listed below:

     | AROME/HARMONIE-AROME | IFS
    data format/content | BUFR, but sometimes with own table | BUFR with WMO code
    creation of ODB database | Bator converts BUFR to ODB | b2o/bufr2odb converts BUFR to ODB
    blacklisting technique | Bator (LISTE_LOC, LISTE_NOIRE_DIAP), Screening (hirlam_blacklist.b) & Minim (NOTVAR namelist) | Screening only

    Observation file preparation

    Preprocessing Software

    • Bator: Bator - reads BUFR/HDF5/OBSOUL observation data and writes ODBs used by data assimilation

    Other possibilities include:

    • Oulan: Oulan - Converts conventional BUFR data to OBSOUL file that is read by BATOR
    • Cope: Cope - preparation of ODBs used by data assimilation (in development)
    diff --git a/previews/PR1153/Observations/Oulan/index.html b/previews/PR1153/Observations/Oulan/index.html

      -e "s/SLNEWSHIPBUFR/$SLNEWSHIPBUFR/" \
      -e "s/SLNEWBUOYBUFR/$SLNEWBUOYBUFR/" \
      -e "s/SLNEWTEMPBUFR/$SLNEWTEMPBUFR/" \
      ${NAMELIST} >NAMELIST
  • run oulan

    $BINDIR/oulan
  • process GNSS data. If $GNSS_OBS is set to 1 then GNSS observations are added to the OBSOUL file and whitelisting is carried out using PREGPSSOL

  • New BUFR templates

    Valid for HARMONIE 40h1 and later

    The use of new-format (GTS WMO) BUFR is controlled in scr/include.ass by LNEWSYNOPBUFR, LNEWSHIPBUFR, LNEWBUOYBUFR and LNEWTEMPBUFR (set to 0 or 1). These environment variables control namelist settings in the Oulan script. GTS and ECMWF BUFR were used to guide the code changes, so Oulan handles either "flavour" of BUFR. Local changes may be required if your locally produced BUFR, in particular the section 1 data sub-type settings, does not follow WMO and/or ECMWF practices.
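    As a sketch, the corresponding switches in scr/include.ass might be set like this; the particular 0/1 combination below is illustrative, and the exact variable list should be checked against your own include.ass:

```shell
# Illustrative settings: use new-format BUFR for SYNOP, SHIP and BUOY,
# but keep old-style TEMP BUFR in this example.
export LNEWSYNOPBUFR=1
export LNEWSHIPBUFR=1
export LNEWBUOYBUFR=1
export LNEWTEMPBUFR=0
```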

    The ECMWF wiki contains updates regarding the quality of the new high-resolution BUFR observations. See the following ECMWF wiki pages for further information.

    diff --git a/previews/PR1153/Observations/RadarData/index.html b/previews/PR1153/Observations/RadarData/index.html

    Assimilation of Radar Data

    This documentation outlines how to retrieve, process and assimilate HDF5 radar data.

    HARMONIE compilation

    HIRLAM have made code changes to BATOR to allow the direct reading of HDF5 radar data and its conversion to ODB suitable for use in the HARMONIE data assimilation system. If you wish to use these changes you must compile HARMONIE with support for HDF5. This requires adding -DUSE_HDF5 to the FDEFS in your makeup config file as well as adding hdf5 to EXTMODS. util/makeup/config.ECMWF.atos.gnu is an example of a makeup config file.

    Format

    The BATOR code assumes the HDF5 radar data being read uses the OPERA Data Information Model (ODIM). See https://www.eumetnet.eu/wp-content/uploads/2021/07/ODIMH5v2.4.pdf for further information.

    Data retrieval

    Quality-controlled radar data can be retrieved from local archives, the OPERA Nimbus server (contact: Lukas Tuechler (Geosphere)), or the ODE (OPERA Development Environment) server (contact: Günther Haase (SMHI)).

    Data processing

    The HARMONIE script system requires that the OPERA HDF5 data files are stored in RADARDIR (defined in ecf/config_exp.h) and have file names of the format ${HDFID}_qcvol_${DATE}T${HH}00.h5, where:

    • HDFID is a 5 digit OPERA radar identifier
    • DATE is the date
    • HH is the hour
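    The expected file name can be sketched as follows, using the radar identifier from the refdata example on this page as an illustration:

```shell
# Sketch: compose the OPERA HDF5 file name ${HDFID}_qcvol_${DATE}T${HH}00.h5.
HDFID=02918     # 5-digit OPERA radar identifier
DATE=20100808   # date as YYYYMMDD
HH=03           # hour
FILE="${HDFID}_qcvol_${DATE}T${HH}00.h5"
echo "$FILE"    # 02918_qcvol_20100808T0300.h5
```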

    Common pitfalls

    • Forgetting to add -DUSE_HDF5 correctly to your config file
    • Incorrect RADARDIR
    • Incorrect file names
    • Incorrect format entered in refdata - BATOR is quite strict about how it reads the information in refdata:
    02918zh  HDF5     radarv           20100808 03 

    Further reading

    Martin Ridal's radar data assimilation presentation

    diff --git a/previews/PR1153/Observations/SYNOP/index.html b/previews/PR1153/Observations/SYNOP/index.html

      'nbg_sfcobs_ndays_apd' => '15,',
      'nbg_sfcobs_min_ps'=> '15,',
      'nbg_sfcobs_ndays_ps' => '15,',
      },

    In addition you need to make sure that the surface pressure variable is 'ps' and not 'z' for ship surface pressure subtypes, as explained above. Variational bias correction is only prepared for 'ps', not 'z'.

    diff --git a/previews/PR1153/Observations/Scatt/index.html b/previews/PR1153/Observations/Scatt/index.html

    Scatterometers

    Background

    The EUMETSAT OSI SAF produces different scatterometer wind products at KNMI and more will become available in 2019:

    • C-band ASCAT-A/B/C overpassing at 9:30/21:30 Local Solar Time (LST), since 2007/2011/2019;
    • Ku-band ScatSat overpassing at 8:45/20:45 LST, since 2017;
    • Ku-band HY2A/B overpassing at 6:00/18:00 LST, since 2013 (n.a. in NRT)/2019;
    • Ku-band CFOSAT overpassing at 7:00/19:00 LST, expected 2019;
    • Ku-band OSCAT3 overpassing at 12:00/24:00, expected 2019;
    • C/Ku-band WindRad overpassing at 6:00/18:00, expected 2020.

    Note that the products have different ambiguity and noise properties, which are handled in the generic KNMI processing. We distinguish two types of scatterometers: (1) with static beams (ASCAT) and (2) with rotating beams (the rest).

    In the ECMWF model (on ~200 km scales), the availability of three-hourly observations is motivated by the experience of assimilating ASCAT and OSCAT (2.5-hour overpass time difference), which showed double the impact of assimilating ASCAT only. They thus appear as independent data sources for the model.

    Since ASCAT overpasses only twice per day, we cannot fulfil the temporal requirement and can therefore not expect to analyse open-ocean surface winds deterministically at 25 km scales with ASCAT only. Based on this analysis we should therefore focus on scales larger than 25 km (as ECMWF does), also for Harmonie, so typically on 100 km scales. This means that scales between ~25 and 100 km in Harmonie over open sea are mostly noise, which can be removed through supermodding (ref: Mate Mile's project). Note that more scatterometers will be available next year, at more times per day (see above).

    ECMWF is testing ASCAT with different aggregation, thinning and weights in order to optimize scatterometer data assimilation; the results may be useful for the HARMONIE data assimilation strategy as well.

    ASCAT

    1. ASCAT-12.5km (or ASCAT-coastal) data are available on a 12.5 km grid.
    2. The resolution of ASCAT-12.5km is about 25 km (through the application of a Hanning window with tails extending beyond 12.5 km)
    3. As a result, the errors of neighbouring observations are correlated. For the 6.25 km product:
      • along-track wind component l : neighbor 0.60; next-neighbor 0.19; next-next neighbor 0.02; total noise variance 0.385
      • cross-track wind component t : neighbor 0.51; next-neighbor 0.11; next-next neighbor 0.00; total noise variance 0.214
      This agrees well with the footprint overlap (see point 2). We expect similar values for ASCAT-12.5km, but this could easily be assessed in a more dedicated way.
    4. Triple collocation tests show observation error standard deviation for ASCAT-12.5km (or ASCAT-coastal) of ~ 0.7 m/s for u and v.
    5. The effective model resolution of Harmonie (with 2.5 km grid) is about 20-25 km.

    Based on this one may conclude that the resolution of ASCAT-12.5km and Harmonie is about the same, so the representativeness error is negligible and the total error equals the observation error, i.e., 0.7 m/s, and one could use this value to give weight to ASCAT in Harmonie.
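    As a rough illustration of point 3 above, the quoted error correlations can be assembled into an observation-error covariance matrix. A sketch in Python, assuming the 6.25 km along-track values apply unchanged between lags (the lag cutoff and number of cells are illustrative choices):

```python
import numpy as np

# Error correlations quoted above for the 6.25 km product,
# along-track wind component (l): neighbour 0.60, next-neighbour 0.19,
# next-next neighbour 0.02; total noise variance 0.385 (m/s)^2.
corr_by_lag = {0: 1.00, 1: 0.60, 2: 0.19, 3: 0.02}
var_l = 0.385

# Observation-error covariance matrix R for five consecutive
# wind vector cells along track (zero correlation beyond lag 3):
n = 5
R = np.array([[var_l * corr_by_lag.get(abs(i - j), 0.0)
               for j in range(n)] for i in range(n)])

# Treating the observations as independent (diagonal R) would discard
# the off-diagonal covariance between direct neighbours:
print(round(R[0, 1], 3))  # 0.231
```

A diagonal approximation of R is what a thinning-based assimilation effectively assumes; the off-diagonal terms above show how much error covariance that assumption throws away.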

    However, we think this will not give the best impact. If you want to analyse model states on 25 km scales (the Harmonie effective resolution) deterministically, you need a forcing term which accounts for this resolution. Forcing can come either from orography (over land only) or from observations, so over sea we have to rely on the density of the observation network. To analyse scales down to 25 km deterministically over sea requires observations of high density both in space and time, i.e., for the latter at least every hour. This is corroborated by studies with ASCAT-A and -B, separated in time by 50 minutes, showing high correlation of ASCAT divergence and convergence with moist convective rain, but negligible correlation between the convergence or divergence of the two passes.

    Since ASCAT overpasses only twice per day we cannot fulfil the temporal requirement and can therefore not expect to analyse ocean surface winds deterministically at 25 km scales with ASCAT only. Based on this analysis we should therefore focus on scales larger than 25 km (as ECMWF does), also for Harmonie, so typically on 100 km scales. This means that scales between ~25 and 100 km in Harmonie over sea are mostly noise, which can be removed through supermodding, i.e., the project Mate Mile is working on.

    KNMI is waiting for a data feed from EUMETSAT. Level 1 ASCAT data available since 14 March 2019 here

    Other scatterometers

    1. 25km data are generally available on the satellite swath grid of WVCs
    2. The resolution of this 25 km data is around 100 km (through the application of a spatial filter that successfully suppresses both wind direction ambiguities and noise)
    3. As a result, the errors of neighboring observations are correlated over a distance of 100 km or more
    4. Triple collocation tests show observation error standard deviation ~ 0.7 m/s for u and v
    5. Biases exist at warm and cold SST of up to 0.5 m/s, which are being corrected; also winds around nadir and, to a lesser extent, in the outer swath are sometimes biased; the IFS takes account of this, but may need retuning for CFOSAT

    Further reading

    More information is available on the OSI SAF wind site in the form of training material, product manuals, scientific publications, verification reports and monitoring information. Support and services messages for all products can be obtained through scat at knmi.nl .

    The EUMETSAT NWP SAF provides the following reports:

    Model

    Enable assimilation

    • Set SCATT_OBS=1 in scr/include.ass
    • Ensure ascat${DTG} files are available in $OBDIR (defined in ecf/config_exp.h)

    Technical information

    • Referred to as NSCAT3 in arpifs (see src/arpifs/module/yomcoctp.F90)
    • From https://apps.ecmwf.int/odbgov
      • obstype=9
      • codetype=139
      • sensor=190
      • varno=125/124 for ambiguous u/v wind component
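    For scripting around ODB these identifiers can be kept in one table. A small sketch (the dictionary layout and helper name are illustrative, not taken from the HARMONIE source):

```python
# ODB descriptors for scatterometer winds, as listed above
# (from https://apps.ecmwf.int/odbgov); layout is illustrative.
SCATT_ODB = {
    "obstype": 9,     # scatterometer
    "codetype": 139,
    "sensor": 190,
    "varno": {"ambiguous_u": 125, "ambiguous_v": 124},
}

def varno_name(varno: int) -> str:
    """Map a varno back to its wind-component name (hypothetical helper)."""
    by_number = {v: k for k, v in SCATT_ODB["varno"].items()}
    return by_number.get(varno, "unknown")

print(varno_name(125))  # ambiguous_u
```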

    Issues (CY40/CY43)

    Thinning: NASCAWVC

    • Number of ASCAT wind vector cells (WVCs)
    • Defined in src/arpifs/module/yomthlim.F90
    • Default, set in src/arpifs/obs_preproc/sufglim.F90, is 42 (for 25-km product)
    • Set to 82 for 12.5-km scatterometer product in nam/harmonie_namelists.pm (possibly also in sufglim.F90. To be checked)

    Observation error

    • Set by Bator (src/odb/pandor/module/bator_init_mod.F90) u_err=1.39, v_err=1.54
    • Suggested values from KNMI: u_err=1.4, v_err=1.4
    • ZWE=2.0 set in src/arpifs/obs_preproc/nscatin.F90 but not used (I think)
    • ObsErr in the Jo-table is the RMS of all ASCAT obs_error values: SQRT(0.5*(u_err^2 + v_err^2))
    • sigma_o can be set by Bator via the NADIRS namelist:
      ECTERO(9,139,125,1) = 1.39_JPRB
      ECTERO(9,139,124,1) = 1.54_JPRB

    diff --git a/previews/PR1153/Observations/Seviri/index.html b/previews/PR1153/Observations/Seviri/index.html

    SEVIRI channel selection (namelist fragment):
     NSEVIRI(57)%NbChannels= 8,
     NSEVIRI(57)%Channels(1:8)= 1,2,3,4,5,6,7,8,
     NSEVIRI(57)%NamChannels(1:8)='IR_039','WV_062','WV_073','IR_087','IR_097','IR_108','IR_120','IR_134',
    /
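    The Jo-table ObsErr above (the RMS of the Bator u/v error defaults) can be verified with a quick check:

```python
from math import sqrt

u_err = 1.39  # Bator default for u (bator_init_mod.F90)
v_err = 1.54  # Bator default for v

# ObsErr reported in the Jo-table: RMS over the two wind components
obs_err = sqrt(0.5 * (u_err**2 + v_err**2))
print(round(obs_err, 3))  # 1.467
```

Note this RMS sits well above the ~0.7 m/s triple-collocation estimate and the KNMI-suggested 1.4 m/s quoted above.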

    Model settings (Screening and Minimisation)

    References

    Technical stuff:

    Further reading and links to reports/presentations:

    diff --git a/previews/PR1153/Overview/Binaries/index.html b/previews/PR1153/Overview/Binaries/index.html

    HARMONIE binaries

    An installation of HARMONIE produces the following binaries:

    • ACADFA1D : Tool to generate initial and boundary data for MUSC
    • ADDPERT : Create initial perturbations
    • ADDSURF : Allows you to mix different files and add different fields
    • ALTO : Also known as PINUTS. Contains several diagnostic tools.
    • BATOR : Generate ODB from observations in various formats
    • bl95.x : Blacklist compiler, help program to generate object files from the blacklist
    • BLEND : Mixes two files
    • BLENDSUR : Mixes two files
    • cluster : Cluster ensemble members
    • CONVERT_ECOCLIMAP_PARAM : Generate binary files from ECOCLIMAP ascii files
    • dcagen : ODB handling tool
    • domain_prop : Helper program to return various model domain properties
    • FESTAT : Background error covariance calculations.
    • fldextr : Extracts data for verification from model history files. Reads FA from HARMONIE and GRIB from ECMWF/HIRLAM.
    • gl : Converts/interpolates between different file formats and projections. Used for boundary interpolation.
    • IOASSIGN/ioassign : ODB IO setup
    • LSMIX : Scale dependent mixing of two model states.
    • jbconv : Interpolates/extrapolates background error statistics files. For technical experimentation
    • lfitools : FA/LFI file manipulation tool
    • MASTERODB : The main binary for the forecast model, surface assimilation, climate generation, 3DVAR, fullpos and much more.
    • MTEN : Computation of moist tendencies
    • obsextr : Extract data for verification from BUFR files.
    • obsmon : Extract data for observation monitoring
    • odb98.x : ODB manipulation program
    • OFFLINE : The SURFEX offline model. Also called SURFEX
    • oulan : Converts observations in BUFR to OBSOUL format used by BATOR
    • PERTCMA : Perturbation of observations in ODB
    • PERTSFC : Surface perturbation scheme
    • PGD : Generates physiography files for SURFEX.
    • PREGPSSOL : Processing of GNSS data
    • PREP : Generate SURFEX initial files. Interpolates/translates between two SURFEX domains.
    • SFXTOOLS : Converts SURFEX output between FA and LFI format.
    • shuffle : Manipulation of ODB. Also called ODBTOOLS
    • ShuffleBufr : Split bufr data according to observation type, used in the observation preprocessing.
    • SODA : Surfex offline data assimilation
    • SPG : Stochastic pattern generator, https://github.com/gayfulin/SPG
    • SURFEX : The SURFEX offline model. Also called OFFLINE
    • tot_energy : Calculates the total energy of a model state. Is used for boundary perturbation scaling.
    • xtool : Compares two FA/LFI/GRIB files.
    diff --git a/previews/PR1153/Overview/Content/index.html b/previews/PR1153/Overview/Content/index.html

    Harmonie Content

    Overview

    Harmonie is HIRLAM's adaptation of the LAM version of the IFS/ARPEGE project. The common code, shared with the ALADIN programme, Meteo France and ECMWF, only contains the source code. Harmonie adds the build environment, scripts, support for a scheduler, and a number of diagnostic tools for file conversion and postprocessing. In summary, a git clone of harmonie from github contains the following main directories:

    • config-sh : Configuration and job submission files for different platforms.
    • const : A selected number of constant files for bias correction, assimilation and different internal schemes. A large number of data for climate generation and the RTTOV software is kept outside of the repository. See [wiki:HarmonieSystemDocumentation#Downloaddata].
    • ecf : Directory for the main configuration file config_exp.h and the containers for the scheduler ECFLOW.
    • suites : Scripts and suite definition files for ECFLOW, the scheduler for HARMONIE.
    • nam : Namelists for different configurations.
    • scr : Scripts to run the different tasks.
    • src : The IFS/ARPEGE source code.
    • util : A number of utilities and support libraries.

    util

    The util directory contains the following main directories

    • auxlibs : Contains gribex, bufr, rgb and some dummy routines
    • binutils : https://www.gnu.org/software/binutils/
    • checknorms : Script for code norm checking
    • gl_grib_api : Boundary file generator and file converter
    • makeup : HIRLAM style compilation tool
    • musc : MUSC scripts
    • obsmon : Code to produce obsmon sqlite files
    • offline : SURFEX offline code
    • oulan : Converts conventional BUFR data to OBSOUL format read by bator.
    • RadarDAbyFA : Field alignment code
    diff --git a/previews/PR1153/Overview/FileFormats/index.html b/previews/PR1153/Overview/FileFormats/index.html

    File formats in HARMONIE

    Introduction

    The HARMONIE system reads and writes a number of different formats.

    FA files

    Default internal input/output format of HARMONIE for gridpoint, spectral and SURFEX data. GRIB is used as a way to pack the data, but the GRIB records cannot be used as such.

    • The header contains information about model domain, projection, spectral truncation, extension zone, boundary zone, vertical levels.
    • Only one date/time per file.
    • FA routines are found under ifsaux/fa
    • List or convert a file with gl
    • Other listing tool PINUTS

    Read more

    GRIB/GRIB2

    All FA files may be converted to GRIB after the forecast run. For the conversion between FA names and GRIB parameters check this table.

    • List or convert a GRIB file with gl

    NETCDF

    In climate mode all FA files may be converted to NETCDF after the forecast run. For the conversion between FA names and NETCDF parameters check util/gl/inc/nc_tab.h.

    • For the manipulation and listing of NETCDF files we refer to standard NETCDF tools.
    • NETCDF is also used as output data from some SURFEX tools.

    BUFR and ODB

    BUFR is the archiving/exchange format for observations. ODB (Observational DataBase) is used for efficient handling of observations in IFS. ODB is used for both input data and feedback information.

    Read more about observations in HARMONIE here.

    DDH (LFA files)

    Diagnostics by Horizontal Domains allows you to accumulate fluxes from different packages over different areas/points.

    • LFA files (Autodocumented File Software)
    • gmapdoc
    • under util/ddh

    Misc

    • vfld/vobs files in a simple ASCII format used by the verification.
    • Obsmon files are stored in sqlite format.
    diff --git a/previews/PR1153/Overview/Source/index.html b/previews/PR1153/Overview/Source/index.html

    Harmonie Source Code

    Introduction

    This wiki page summarises the ARPEGE/IFS source code made available in the HARMONIE system. It is based on documents made available by YESSAD K. (METEO-FRANCE/CNRM/GMAP/ALGO). The relevant document for cycle 40 is available here (or directly here).

    HARMONIE Source Library Structure

    The main source of HARMONIE system originates from IFS/ARPEGE and it consists of a number of "project" sources. These are:

    • aeolus: Aeolus source code, a package for pre-processing satellite lidar wind data. Inactive for us.
    • aladin: specific routines only relevant to LAM, (limited area models, in particular ALADIN and AROME).
    • algor: application routines, e.g. to read LFI or ARPEGE files, interface routines for the distributed-memory environment, and some linear algebra routines, such as the Lanczos algorithm and minimizers.
    • arpifs: global model routines (ARPEGE, IFS), and routines common to global and LAM models. This is the core of the ARPEGE/IFS software.
    • biper: Biperiodization routines for the LAM
    • blacklist: package for blacklisting
    • coupling: lateral coupling and spectral nudging for LAM models
    • etrans: spectral transforms for plane geometry, used for LAM
    • ifsaux: some application routines, for example reading or writing on “LFI” or ARPEGE files, interface routines for distributed memory environment
    • mpa: upper air meso-NH/AROME physics (also used in ARPEGE/ALADIN)
    • mse: surface processes in meso-NH/AROME (interface for SURFEX)
    • odb: ODB (Observational Data Base software), needed by ARPEGE/ALADIN for their analysis or their assimilation cycle
    • satrad: satellite data handling package, needed to run the model analysis/assimilation
    • surf: ECMWF surface scheme
    • surfex: surface processes in meso-NH/AROME - the externalized surface scheme SURFEX
    • trans: spectral transforms for spherical geometry, used for ARPEGE/IFS
    • utilities: utility packages, for operational FA to GRIB (PROGRID), OULAN, BATOR, or programs to operate on ODB and radiances bias correction

    Dependencies and hierarchy between each project

    Note: these project names are no longer valid – need to update

    • ARP+TFL+XRD+XLA+MPA+MSE+SURFEX: for ARPEGE forecasts with METEO-FRANCE physics.
    • ARP+ALD+TFL+TAL+XRD+XLA+BIP+MPA+MSE+SURFEX: for ALADIN or AROME forecasts.
    • ARP+TFL+XRD+XLA+SUR: for IFS forecasts with ECMWF physics.
    • ARP+TFL+XRD+XLA+MPA+MSE+SURFEX+BLA+ODB+SAT+AEO: for ARPEGE assimilations with METEO-FRANCE physics.
    • ARP+ALD+TFL+TAL+XRD+XLA+BIP+MPA+MSE+SURFEX+BLA+ODB+SAT+AEO: for ALADIN or AROME assimilations.
    • ARP+TFL+XRD+XLA+SUR+BLA+ODB+SAT+OBT+SCR+AEO: for IFS assimilations with ECMWF physics.
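    The combinations above can also be treated as data. A sketch (transcribed from the list, with the caveat already noted that these project names are outdated) that extracts the project core shared by every application:

```python
# Project combinations per application, transcribed from the list above.
BUILDS = {
    "arpege_forecast": "ARP+TFL+XRD+XLA+MPA+MSE+SURFEX",
    "aladin_forecast": "ARP+ALD+TFL+TAL+XRD+XLA+BIP+MPA+MSE+SURFEX",
    "ifs_forecast":    "ARP+TFL+XRD+XLA+SUR",
    "arpege_assim":    "ARP+TFL+XRD+XLA+MPA+MSE+SURFEX+BLA+ODB+SAT+AEO",
    "aladin_assim":    "ARP+ALD+TFL+TAL+XRD+XLA+BIP+MPA+MSE+SURFEX+BLA+ODB+SAT+AEO",
    "ifs_assim":       "ARP+TFL+XRD+XLA+SUR+BLA+ODB+SAT+OBT+SCR+AEO",
}

projects = {name: set(combo.split("+")) for name, combo in BUILDS.items()}

# The core pulled in by every build configuration:
core = set.intersection(*projects.values())
print(sorted(core))  # ['ARP', 'TFL', 'XLA', 'XRD']
```

The intersection confirms what the list states in prose: ARP (arpifs), the spectral-transform package TFL and the auxiliary XRD/XLA projects are needed by every configuration, while physics, ODB and satellite packages are pulled in per application.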

    Libraries under each project

    Note: this information may need to be updated for CY40

    ARPIFS

    • adiab
      • Adiabatic dynamics
      • Adiabatic diagnostics and intermediate quantities calculation, for example the geopotential height (routines GP... or GNH...).
      • Eulerian advections
      • Semi-Lagrangian advection and interpolators (routines LA...)
      • Semi-implicit scheme and linear terms calculation (routines SI..., SP..SI..)
      • Horizontal diffusion (routines SP..HOR..)
    • ald inc
      • function: functions used only in ALADIN
      • namelist: namelists read by ALADIN.
    • c9xx: specific configurations 901 to 999 routines (mainly configuration 923). Routines INCLI.. are used in configuration 923. Routines INTER... are interpolators used in configurations 923, 931, 932.
    • canari: routines used in the CANARI optimal interpolation. Their names generally start with CA.
    • canari common: empty directory to be deleted.
    • climate: some specific ARPEGE-CLIMAT routines.
    • common: often contains includes
    • control: control routines. Contains in particular STEPO and CNT... routines.
    • dfi: routines used in the DFI (digital filter initialisation) algorithm
    • dia: diagnostics other than FULL-POS. One finds some setup SU... routines specific to some diagnostics and some WR... routines doing file writing.
    • function: functions (in includes). The qa....h functions are used in CANARI, the fc....h functions are used in a large panel of topics.
    • interface: not automatic interfaces (currently empty).
    • kalman: Kalman filter.
    • module: all the types of module (variables declarations, type definition, active code).
    • mwave: micro-wave observations (SSM/I) treatment.
    • namelist: all namelists.
    • nmi: routines used in the NMI (normal mode initialisation) algorithm.
    • obs error: treatment of the observation errors in the assimilation.
    • obs preproc: observation pre-processing (some of them are called in the screening).
    • ocean: oceanic coupling, for climatic applications.
    • onedvar: 1D-VAR assimilation scheme used at ECMWF.
    • parallel: parallel environment, communications between processors.
    • parameter: empty directory to be deleted.
    • phys dmn: physics parameterizations used at METEO-FRANCE, plus the HIRLAM and ALARO physics.
    • phys ec: ECMWF physics. Some of these routines (FMR radiation scheme, Lopez convection scheme) are now also used in the METEO-FRANCE physics.
    • pointer: empty directory to be deleted.
    • pp obs: several applications
      • observation horizontal and vertical interpolator.
      • FULL-POS.
      • vertical interpolator common to FULL-POS and the observation interpolator; some of these routines may be used elsewhere.
    • setup: setup routines not linked with a very specific domain. More specific setup routines are spread among some other subdirectories.
    • sinvect: singular vectors calculation (configuration 601).
    • support: empty directory to be deleted.
    • transform: hat routines for spectral transforms.
    • utility: miscellaneous utilities, linear algebra routines, array deallocation routines.
    • var: routines involved in the 3DVAR and 4DVAR assimilation, some minimizers (N1CG1, CONGRAD), some specific 3DVAR and 4DVAR setup routines.
    • wave: empty directory to be deleted.

    ALADIN

    • adiab: adiabatic dynamics.
    • blending: blending scheme (currently only contains the procedure blend.ksh).
    • c9xx: specific configurations E901 to E999 routines (mainly configuration E923). Routines EINCLI.. are used in configuration E923. Routines EINTER... are interpolators used in configurations E923, E931, E932.
    • control: control routines.
    • coupling: lateral coupling by external lateral boundary conditions.
    • dia: diagnostics other than FULL-POS.
    • inidata: setup routines specific to file reading (initial conditions, LBC).
    • module: active code modules only used in ALADIN.
    • obs preproc: observation pre-processing (some of them are called in the screening).
    • parallel: parallel environment, communications between processors.
    • pp obs: several applications:
      • observation horizontal and vertical interpolator.
      • FULL-POS.
      • vertical interpolator common to FULL-POS and the observation interpolator; some of these routines may be used elsewhere.
    • programs: probably designed to contain procedures, but currently contains among others some blending routines, the place of which would be probably better in subdirectory "blending".
    • setup: setup routines not linked with a very specific domain. More specific setup routines are spread among some other subdirectories.
    • sinvect: singular vectors calculation (configuration E601).
    • transform: hat routines for spectral transforms.
    • utility: miscellaneous utilities, array deallocation routines.
    • var: routines involved in the 3DVAR and 4DVAR assimilation, some specific 3DVAR and 4DVAR setup routines.

    TFL

    • build: contains procedures.
    • external: routines which can be called from another project.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • module: all the types of module (variables declarations, type definition, active code).
      • tpm ...F90: variable declaration + type definition modules.
      • lt.... mod.F90: active code modules for Legendre transforms.
      • ft.... mod.F90: active code modules for Fourier transforms.
      • tr.... mod.F90: active code modules for transpositions.
      • su.... mod.F90: active code modules for setup.
    • programs: specific entries which can be used for TFL code validation. These routines are not called elsewhere.

    TAL

    • external: routines which can be called from another project.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • module: all the types of module (variables declarations, type definition, active code).
      • tpmald ...F90: variable declaration + type definition modules.
      • elt.... mod.F90: active code modules for N-S Fourier transforms.
      • eft.... mod.F90: active code modules for E-W Fourier transforms.
      • sue.... mod.F90: active code modules for setup.
    • programs: specific entries which can be used for TAL code validation. These routines are not called elsewhere.

    XRD

    • arpege: empty directory to be deleted.
    • bufr io: BUFR format files reading and writing.
    • cma: CMA format files reading and writing.
    • ddh: DDH diagnostics.
    • fa: ARPEGE (FA) files reading and writing.
    • grib io: ECMWF GRIB format files reading and writing.
    • grib mf: METEO-FRANCE GRIB format files reading and writing.
    • ioassign: empty directory to be deleted.
    • lanczos: linear algebra routines for Lanczos algorithm.
    • lfi: LFI format files reading and writing.
    • minim: linear algebra routines for minimizations. Contains the M1QN3 (quasi-Newton) minimizer.
    • misc: miscellaneous decks.
    • module: all the types of module (variables declarations, type definition, active code). There are a lot of mpl...F90 modules for the parallel environment (interface to the MPI parallel environment).
    • mrfstools: empty directory to be deleted.
    • newbufrio: empty directory to be deleted.
    • newcmaio: empty directory to be deleted.
    • not used: miscellaneous decks (unused decks to be deleted?).
    • pcma: empty directory to be deleted.
    • support: miscellaneous routines. Some of them do Fourier transforms, some others do linear algebra.
    • svipc: contains only svipc.c .
    • utilities: miscellaneous utilities.

    SUR

    • build: contains procedures.
    • external: routines which can be called from another project.
    • function: specific functions.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • module: all the types of module (variables declarations, type definition, active code).
      • yos ...F90: variable declaration + type definition modules.
      • su.... mod.F90 but not surf.... mod.F90: active code modules for setup.
      • surf.... mod.F90, v.... mod.F90: other active code modules.
    • offline: specific entries which can be used for SUR code validation. These routines are not called elsewhere.

    BLA

    • compiler.
    • include: not automatically generated interfaces, functions, and some other includes.
    • library: the only containing .F90 decks.
    • old2new.
    • scripts.

    SAT

    • bias.
    • emiss.
    • interface.
    • module.
    • mwave.
    • onedvar.
    • pre screen.
    • rtlimb.
    • rttov.
    • satim.
    • test. (Not described in detail; more information has to be provided by someone who knows the content of this project, but there is currently no specific documentation about this topic)

    UTI

    • add cloud fields: program to add 4 cloud variables (liquid water, ice, rainfall, snow) in ARPEGE files.
    • bator: BATOR software (reads observation data from an ASCII format file named OBSOUL and the blacklist, writes them to an ODB format file with some additional information).
    • combi: combination of perturbations in an ensemble forecast (PEARP).
    • controdb: control of the number of observations.
    • extrtovs: unbias TOVS.
    • fcq: does quality control and writes this quality control in ODB files.
    • gobptout: PROGRIB? (convert ARPEGE files contained post-processed data into GRIB files).
    • include: all .h decks (functions, COMMON blocks, parameters).
    • mandalay: software MANDALAY.
    • module: all types of modules.
    • namelist: namelists specific to the applications stored in UTI (for example OULAN, BATOR).
    • oulan: OULAN software (the step just before BATOR: observation extractions in the BDM, samples data in space and time, and writes the sampled data in an ASCII file called "OBSOUL").
    • pregpssol: Surface GPS processing.
    • prescat: Scatterometer data processing.
    • progrid: PROGRID? (convert ARPEGE files contained post-processed data into GRIB files).
    • progrid cadre: cf. progrid?
    • sst nesdis: program to read the SST on the BDAP. This project has its own entries.

    MPA

    It contains first layer of directory

    • chem: chemistry.
    • conv: convection.
    • micro: microphysics.
    • turb: turbulence.

    Each directory contains the following subdirectories

    • externals: routines which can be called from another project.
    • include: all the "include" decks (functions, COMMON blocks, parameters).
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • internals: other non-module routines; they cannot be called from another project.
    • module: all types of modules.

    SURFEX

    • ASSIM: Surface assimilation routines (please note that programs soda.F90, oi_main.F90 and varassim.F90 are located under mse/programs).
    • OFFLIN: Surface offline routines (please note that programs pgd.F90, prep.F90 and offline.F90 are located under mse/programs).
    • SURFEX: Surface routines for physiography (PGD), initialisation (PREP) and physical processes including e.g. land (ISBA), sea, town (TEB) and lakes.
    • TOPD: TOPMODEL (TOPography based MODEL) for soil hydrology.
    • TRIP: River routing model TRIP

    MSE

    • dummy: empty versions of some routines.
    • externals: routines which can be called from another project.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • internals: other non-module routines; they cannot be called from another project.
    • module: all types of modules.
    • new: file conversion routines, e.g. fa2lfi, lfi2fa
    • programs: SURFEX programs

    References and documentation

    +

    Harmonie Source Code

    Introduction

    This wiki page summarises the ARPEGE/IFS source code made available in the HARMONIE system. It is based on documents made available by YESSAD K. (METEO-FRANCE/CNRM/GMAP/ALGO). The relevant document for cycle 40 is available here (or directly here).

    HARMONIE Source Library Structure

    The main source of the HARMONIE system originates from IFS/ARPEGE and consists of a number of "project" sources. These are:

    • aeolus: Aeolus source code, a package for pre-processing satellite lidar wind data. Inactive for us.
    • aladin: specific routines only relevant to LAM (limited area models, in particular ALADIN and AROME).
    • algor: application routines, e.g. to read LFI or ARPEGE files, interface routines for the distributed memory environment, and some linear algebra routines, such as the Lanczos algorithm and minimizers.
    • arpifs: global model routines (ARPEGE, IFS), and routines common to global and LAM models. This is the core of the ARPEGE/IFS software.
    • biper: Biperiodization routines for the LAM
    • blacklist: package for blacklisting
    • coupling: lateral coupling and spectral nudging for LAM models
    • etrans: spectral transforms for plane geometry, used for LAM
    • ifsaux: some application routines, for example reading or writing "LFI" or ARPEGE files, and interface routines for the distributed memory environment
    • mpa: upper air meso-NH/AROME physics (also used in ARPEGE/ALADIN)
    • mse: surface processes in meso-NH/AROME (interface for SURFEX)
    • odb: ODB (Observational Data Base software), needed by ARPEGE/ALADIN for their analysis or their assimilation cycle
    • satrad: satellite data handling package, needed to run the model analysis/assimilation
    • surf: ECMWF surface scheme
    • surfex: surface processes in meso-NH/AROME - the externalized surface scheme SURFEX
    • trans: spectral transforms for spherical geometry, used for ARPEGE/IFS
    • utilities: utility packages, for operational FA to GRIB (PROGRID), OULAN, BATOR, or programs to operate on ODB and radiances bias correction

    Dependencies and hierarchy between each project

    Note: these project names are no longer valid – need to update

    • ARP+TFL+XRD+XLA+MPA+MSE+SURFEX: for ARPEGE forecasts with METEO-FRANCE physics.
    • ARP+ALD+TFL+TAL+XRD+XLA+BIP+MPA+MSE+SURFEX: for ALADIN or AROME forecasts.
    • ARP+TFL+XRD+XLA+SUR: for IFS forecasts with ECMWF physics.
    • ARP+TFL+XRD+XLA+MPA+MSE+SURFEX+BLA+ODB+SAT+AEO: for ARPEGE assimilations with METEO-FRANCE physics.
    • ARP+ALD+TFL+TAL+XRD+XLA+BIP+MPA+MSE+SURFEX+BLA+ODB+SAT+AEO: for ALADIN or AROME assimilations.
    • ARP+TFL+XRD+XLA+SUR+BLA+ODB+SAT+OBT+SCR+AEO: for IFS assimilations with ECMWF physics.

    Libraries under each project

    Note: this information may need to be updated for CY40

    ARPIFS

    • adiab
      • Adiabatic dynamics
      • Adiabatic diagnostics and intermediate quantities calculation, for example the geopotential height (routines GP... or GNH...).
      • Eulerian advections
      • Semi-Lagrangian advection and interpolators (routines LA...)
      • Semi-implicit scheme and linear terms calculation (routines SI..., SP..SI..)
      • Horizontal diffusion (routines SP..HOR..)
    • ald inc
      • function: functions used only in ALADIN
      • namelist: namelists read by ALADIN.
    • c9xx: specific configurations 901 to 999 routines (mainly configuration 923). Routines INCLI.. are used in configuration 923. Routines INTER... are interpolators used in configurations 923, 931, 932.
    • canari: routines used in the CANARI optimal interpolation. Their names generally start with CA.
    • canari common: empty directory to be deleted.
    • climate: some specific ARPEGE-CLIMAT routines.
    • common: often contains includes
    • control: control routines. Contains in particular STEPO and CNT... routines.
    • dfi: routines used in the DFI (digital filter initialisation) algorithm
    • dia: diagnostics other than FULL-POS. One finds some setup SU... routines specific to some diagnostics and some WR... routines doing file writing.
    • function: functions (in includes). The qa....h functions are used in CANARI, the fc....h functions are used in a large panel of topics.
    • interface: not automatically generated interfaces (currently empty).
    • kalman: Kalman filter.
    • module: all the types of module (variables declarations, type definition, active code).
    • mwave: micro-wave observations (SSM/I) treatment.
    • namelist: all namelists.
    • nmi: routines used in the NMI (normal mode initialisation) algorithm.
    • obs error: treatment of the observation errors in the assimilation.
    • obs preproc: observation pre-processing (some of them are called in the screening).
    • ocean: oceanic coupling, for climatic applications.
    • onedvar: 1D-VAR assimilation scheme used at ECMWF.
    • parallel: parallel environment, communications between processors.
    • parameter: empty directory to be deleted.
    • phys dmn: physics parameterizations used at METEO-FRANCE, plus the HIRLAM and ALARO physics.
    • phys ec: ECMWF physics. Some of these routines (FMR radiation scheme, Lopez convection scheme) are now also used in the METEO-FRANCE physics.
    • pointer: empty directory to be deleted.
    • pp obs: several applications:
      • observation horizontal and vertical interpolator.
      • FULL-POS.
      • vertical interpolator common to FULL-POS and the observation interpolator; some of these routines may be used elsewhere.
    • setup: setup routines not linked with a very specific domain. More specific setup routines are spread among some other subdirectories.
    • sinvect: singular vectors calculation (configuration 601).
    • support: empty directory to be deleted.
    • transform: hat routines for spectral transforms.
    • utility: miscellaneous utilities, linear algebra routines, array deallocation routines.
    • var: routines involved in the 3DVAR and 4DVAR assimilation, some minimizers (N1CG1, CONGRAD), some specific 3DVAR and 4DVAR setup routines.
    • wave: empty directory to be deleted.

    ALADIN

    • adiab: adiabatic dynamics.
    • blending: blending scheme (currently only contains the procedure blend.ksh).
    • c9xx: specific configurations E901 to E999 routines (mainly configuration E923). Routines EINCLI.. are used in configuration E923. Routines EINTER... are interpolators used in configurations E923, E931, E932.
    • control: control routines.
    • coupling: lateral coupling by external lateral boundary conditions.
    • dia: diagnostics other than FULL-POS.
    • inidata: setup routines specific to file reading (initial conditions, LBC).
    • module: active code modules only used in ALADIN.
    • obs preproc: observation pre-processing (some of them are called in the screening).
    • parallel: parallel environment, communications between processors.
    • pp obs: several applications:
      • observation horizontal and vertical interpolator.
      • FULL-POS.
      • vertical interpolator common to FULL-POS and the observation interpolator; some of these routines may be used elsewhere.
    • programs: probably designed to contain procedures, but currently contains, among others, some blending routines which would probably be better placed in the subdirectory "blending".
    • setup: setup routines not linked with a very specific domain. More specific setup routines are spread among some other subdirectories.
    • sinvect: singular vectors calculation (configuration E601).
    • transform: hat routines for spectral transforms.
    • utility: miscellaneous utilities, array deallocation routines.
    • var: routines involved in the 3DVAR and 4DVAR assimilation, some specific 3DVAR and 4DVAR setup routines.

    TFL

    • build: contains procedures.
    • external: routines which can be called from another project.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • module: all the types of module (variables declarations, type definition, active code).
      • tpm ...F90: variable declaration + type definition modules.
      • lt.... mod.F90: active code modules for Legendre transforms.
      • ft.... mod.F90: active code modules for Fourier transforms.
      • tr.... mod.F90: active code modules for transpositions.
      • su.... mod.F90: active code modules for setup.
    • programs: specific entries which can be used for TFL code validation. These routines are not called elsewhere.

    TAL

    • external: routines which can be called from another project.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • module: all the types of module (variables declarations, type definition, active code).
      • tpmald ...F90: variable declaration + type definition modules.
      • elt.... mod.F90: active code modules for N-S Fourier transforms.
      • eft.... mod.F90: active code modules for E-W Fourier transforms.
      • sue.... mod.F90: active code modules for setup.
    • programs: specific entries which can be used for TAL code validation. These routines are not called elsewhere.

    XRD

    • arpege: empty directory to be deleted.
    • bufr io: BUFR format files reading and writing.
    • cma: CMA format files reading and writing.
    • ddh: DDH diagnostics.
    • fa: ARPEGE (FA) files reading and writing.
    • grib io: ECMWF GRIB format files reading and writing.
    • grib mf: METEO-FRANCE GRIB format files reading and writing.
    • ioassign: empty directory to be deleted.
    • lanczos: linear algebra routines for Lanczos algorithm.
    • lfi: LFI format files reading and writing.
    • minim: linear algebra routines for minimizations. Contains the M1QN3 (quasi-Newton) minimizer.
    • misc: miscellaneous decks.
    • module: all the types of module (variables declarations, type definition, active code). There are a lot of mpl...F90 modules for the parallel environment (interface to the MPI parallel environment).
    • mrfstools: empty directory to be deleted.
    • newbufrio: empty directory to be deleted.
    • newcmaio: empty directory to be deleted.
    • not used: miscellaneous decks (unused decks to be deleted?).
    • pcma: empty directory to be deleted.
    • support: miscellaneous routines. Some of them do Fourier transforms, some others do linear algebra.
    • svipc: contains only svipc.c.
    • utilities: miscellaneous utilities.

    SUR

    • build: contains procedures.
    • external: routines which can be called from another project.
    • function: specific functions.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • module: all the types of module (variables declarations, type definition, active code).
      • yos ...F90: variable declaration + type definition modules.
      • su.... mod.F90 but not surf.... mod.F90: active code modules for setup.
      • surf.... mod.F90, v.... mod.F90: other active code modules.
    • offline: specific entries which can be used for SUR code validation. These routines are not called elsewhere.

    BLA

    • compiler.
    • include: not automatically generated interfaces, functions, and some other includes.
    • library: the only directory containing .F90 decks.
    • old2new.
    • scripts.

    SAT

    • bias.
    • emiss.
    • interface.
    • module.
    • mwave.
    • onedvar.
    • pre screen.
    • rtlimb.
    • rttov.
    • satim.
    • test. (Not described in detail; more information has to be provided by someone who knows the content of this project, but there is currently no specific documentation about this topic)

    UTI

    • add cloud fields: program to add 4 cloud variables (liquid water, ice, rainfall, snow) in ARPEGE files.
    • bator: BATOR software (reads observation data from an ASCII file named OBSOUL and the blacklist, and writes them to an ODB format file with some additional information).
    • combi: combination of perturbations in an ensemble forecast (PEARP).
    • controdb: control of the number of observations.
    • extrtovs: unbias TOVS.
    • fcq: does quality control and writes this quality control in ODB files.
    • gobptout: PROGRIB? (converts ARPEGE files containing post-processed data into GRIB files).
    • include: all .h decks (functions, COMMON blocks, parameters).
    • mandalay: software MANDALAY.
    • module: all types of modules.
    • namelist: namelists specific to the applications stored in UTI (for example OULAN, BATOR).
    • oulan: OULAN software (the step just before BATOR: extracts observations from the BDM, samples the data in space and time, and writes the sampled data to an ASCII file called "OBSOUL").
    • pregpssol: Surface GPS processing.
    • prescat: Scatterometer data processing.
    • progrid: PROGRID? (converts ARPEGE files containing post-processed data into GRIB files).
    • progrid cadre: cf. progrid?
    • sst nesdis: program to read the SST from the BDAP. This project has its own entries.

    MPA

    It contains a first layer of directories:

    • chem: chemistry.
    • conv: convection.
    • micro: microphysics.
    • turb: turbulence.

    Each directory contains the following subdirectories:

    • externals: routines which can be called from another project.
    • include: all the "include" decks (functions, COMMON blocks, parameters).
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • internals: other non-module routines; they cannot be called from another project.
    • module: all types of modules.

    SURFEX

    • ASSIM: Surface assimilation routines (please note that programs soda.F90, oi_main.F90 and varassim.F90 are located under mse/programs).
    • OFFLIN: Surface offline routines (please note that programs pgd.F90, prep.F90 and offline.F90 are located under mse/programs).
    • SURFEX: Surface routines for physiography (PGD), initialisation (PREP) and physical processes including e.g. land (ISBA), sea, town (TEB) and lakes.
    • TOPD: TOPMODEL (TOPography based MODEL) for soil hydrology.
    • TRIP: River routing model TRIP

    MSE

    • dummy: empty versions of some routines.
    • externals: routines which can be called from another project.
    • interface: not automatically generated interfaces which match with the "external" directory routines.
    • internals: other non-module routines; they cannot be called from another project.
    • module: all types of modules.
    • new: file conversion routines, e.g. fa2lfi, lfi2fa
    • programs: SURFEX programs

    References and documentation

    YEZDIAG_NL(1)%CNAME='YOURVAL',
    YEZDIAG_NL(1)%LADV=.F.,

    If you add more fields (e.g. you set NGFL_EZDIAG=4), I think you will also need to set the grib parameter for each of them, e.g. (the default is 999, which you can leave for the first one):

    YEZDIAG_NL(2)%IGRBCODE=998,
    YEZDIAG_NL(3)%IGRBCODE=997,
    YEZDIAG_NL(4)%IGRBCODE=996,

    Note that the first two places are already defined in harmonie_namelist.pm.

  • In order to have your variable converted from FA to grib, add the new variable in util/gl/inc/trans_tab.h

    fstart(16) = $fstart,
    fstart(162) = $fstart,
    fstart(163) = $fstart,
    /

    In the namelist:

    WMO GRIB editions and references

    Currently (Aug 2019) there are several editions of GRIB in use or in experimental phase.

    132c132
    < @namfpdyh_lev = (1,2,3,4,5,6,7,8,9,10,11,12,13) ;
    ---
    > @namfpdyh_lev = (1,2,3,4,5,6,7,8,9,10,11,12) ;

    Expert users

    In the FULL-POS namelist NAMFPC (variables explained in src/arp/module/yomfpc.F90), the variables are placed into different categories:

    The default FA-names for parameters in different categories can be found from src/arp/setup/suafn1.F90 L687.

    It's worth mentioning some of the variables postprocessed by FULL-POS

    Problems

    Problems may be encountered with FULL-POS when running on large domains. Here are some things to look out for:


    Interpolations with gl

    Introduction

    In the following we describe the geometrical routines in gl. gl can handle the following projections

    • lat/lon
    • Rotated lat/lon
    • Lambert
    • Polar stereographic
    • Rotated Mercator

    Interpolation

    All interpolations are handled within the module util/gl/mod/module_interpol.f90. The module contains

    • clear_interpol to clear the interpolation setup
    • setup_interpol where the position of the output gridpoints in the input grid is calculated
    • setup_weights where we calculate the interpolation weights. Interpolation can be nearest gridpoint or bilinear. The interpolation can be masked with a field that tells which gridpoints from the input field can be used.

    The setup routines are only called once.

    • interpolate runs the interpolation
    • resample works like the interpolation if the input grid is coarser than the output grid. If reversed, it takes the average of the input gridpoints belonging to each output gridpoint.

    Interpolation can be done between different projections as well as to geographical points. The most general example of the usage of the interpolation can be found in util/gl/grb/any2any.F90.
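As an illustration of what the bilinear option in setup_weights/interpolate amounts to, here is a minimal sketch (plain Python with hypothetical function names, not the actual gl Fortran): the four weights for an output point at fractional offsets (dx, dy) inside an input grid cell are the standard bilinear factors, and they always sum to one.

```python
def bilinear_weights(dx, dy):
    """Bilinear weights for a point at fractional offsets (dx, dy),
    0 <= dx, dy <= 1, measured from the lower-left input gridpoint.
    Order: lower-left, lower-right, upper-left, upper-right."""
    return [(1 - dx) * (1 - dy), dx * (1 - dy), (1 - dx) * dy, dx * dy]

def interpolate(values, dx, dy):
    """Apply the weights to the four surrounding input values."""
    return sum(w * v for w, v in zip(bilinear_weights(dx, dy), values))

# The weights sum to 1, so a constant field is reproduced exactly.
print(sum(bilinear_weights(0.25, 0.5)))              # 1.0
print(interpolate([1.0, 1.0, 1.0, 1.0], 0.25, 0.5))  # 1.0
```

A masked interpolation, as described above, simply zeroes the weights of excluded input gridpoints and renormalises the rest.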

    For practical usage see the section about postprocessing

    Rotations

    All rotations are handled within the module util/gl/mod/module_rotations.f90. The module contains

    • clear_rotation to clear the rotation setup
    • prepare_rotation prepares rotations from the input geometry to the output geometry via north-south components.
    • rotate_winds runs the actual rotation.

    Staggering

    The staggering of an input file is based on knowledge about the model and is defined in util/gl/mod/module_griblist.f90. The restaggering is done in util/gl/grb/restag.f90 as a simple average between gridpoints. The staggering of the output geometry is defined by OUTGEO@ARKAWA, where A and C are available options.
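The "simple average between gridpoints" can be sketched in one dimension (an illustration only, with a hypothetical function name; the real restag.f90 works on the full 2-D fields): a value staggered halfway between two mass points is moved back by averaging its two neighbours.

```python
def destagger(u):
    """Move a 1-D staggered field to mass points by averaging each
    pair of neighbouring gridpoints (the result is one point shorter)."""
    return [0.5 * (a + b) for a, b in zip(u, u[1:])]

print(destagger([0.0, 2.0, 4.0]))  # [1.0, 3.0]
```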

    -NLON $NLON -NLAT $NLAT \
    -LATC $LATC -LONC $LONC \
    -LAT0 $LAT0 -LON0 $LON0 \
    -GSIZE $GSIZE

    To get the geographical position of the lower left corner use

    domain_prop -f -LOW_LEFT FAFILE  

    To print out the important projection parameters in a file use:

    domain_prop -f -4JB FAFILE

    Get time information from a file

    domain_prop -f -DATE FAFILE

    fldextr and obsextr

    Read about the verification extraction programs here

    outkey%time = 0000
    outkey%endstep = 8
    outkey%startstep = 7
    /

    This is used in scr/convertFA to deaccumulate fields to NetCDF for climate simulations.
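Deaccumulation itself is just differencing successive accumulated outputs; a minimal sketch of the idea (an illustration with a hypothetical function name, not the convertFA code):

```python
def deaccumulate(acc):
    """Turn fields accumulated since forecast start into per-interval
    amounts: value at the end step minus value at the start step."""
    return [b - a for a, b in zip(acc, acc[1:])]

# Accumulated precipitation at steps 0..3 -> amount in each interval.
print(deaccumulate([0.0, 1.5, 1.5, 4.0]))  # [1.5, 0.0, 2.5]
```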

    SAL

    Structure Amplitude Location (SAL) is an object-based quality measure for the verification of QPFs (Wernli et al., 2008). SAL contains three independent components that focus on the Structure, Amplitude and Location of the precipitation field in a specified domain.
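For example, the amplitude component compares the domain-averaged precipitation of forecast and observation, normalised so that it ranges from -2 to +2 with 0 being perfect. A sketch following the definition in Wernli et al. (2008), with a hypothetical function name:

```python
def sal_amplitude(forecast, observed):
    """SAL amplitude component A: normalised difference of the
    domain-mean precipitation (Wernli et al., 2008). A = 0 is perfect;
    A = +2 (-2) means the forecast (observed) mean is zero."""
    d_mod = sum(forecast) / len(forecast)
    d_obs = sum(observed) / len(observed)
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

# A forecast with three times the observed domain mean gives A = 1.0.
print(sal_amplitude([3.0, 3.0], [1.0, 1.0]))  # 1.0
```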

    source ~/.bash_profile
    module unload ecflow
    module load ecflow/5.7.0
    $@

    The ecFlow server version may change over time.

    Add another user to your ecFlow viewer

    Sometimes it's handy to be able to follow, and control, your colleagues' experiments. To be able to do this, do the following steps:

    Changing the port

    By default, the port is set by

    export ECF_PORT=$((1500+usernumber))

    in mSMS.job (40h1.1), Start_ecFlow.sh (up to #b6d58dd), or Main (currently).

    For the VMs at ECMWF it is set to 3141 in Env_system. If you want to change this number (for example, if that port is in use already), you will also need to add a -p flag when calling ecflow_start.sh as follows:

    ecflow_start.sh -p $ECF_PORT -d $JOBOUTDIR

    Otherwise, ecflow_start.sh tries to open the default port.

    Note: if you already have an ecFlow server running at your new port number before launching an experiment, this won't be an issue.

    More info

    │ - ✘ ENV["GITHUB_REF"] matches devbranch="pre-CY46h1"
    │ - ✘ ENV["GITHUB_ACTOR"] exists
    │ - ✘ ENV["DOCUMENTER_KEY"] or ENV["GITHUB_TOKEN"] exists
    └ Deploying: ✘

    The HTML pages will be put in docs/build. Open index.html in a browser

    firefox docs/build/index.html

    A local build will not deploy the HTML pages to github.com/Hirlam/HarmonieSystemDocumentation.git.

    Also see .github/workflows/documentation.yml

    !-- The following now does NOT initialize MPL nor MPI for you
    IF (LHOOK) CALL DR_HOOK('SOME_UTILGL_TOOL',0,ZHOOK_HANDLE)
    ...
    IF (LHOOK) CALL DR_HOOK('SOME_UTILGL_TOOL',1,ZHOOK_HANDLE)

    Overheads

    DR_HOOK=1 has practically no overhead on a scalar machine. Profiling with DR_HOOK_OPT=prof causes some 5% overhead.

    On a vector machine the overheads are so large that, unfortunately, Dr.Hook should not be used there.

    User dui
    IdentityFile ~/.tsh/keys/jump.ecmwf.int/eoin.whelan@met.ie
    ProxyCommand bash -c "tsh login; ssh -W %h:%p %r@jump.ecmwf.int"
    [ewhelan@reaserve ~]$

    Open ecFlow ports

    ssh hpc-login -C -N -L 3141:ecflow-gen-dui-001:3141
    +[ewhelan@reaserve ~]$

    diff --git a/previews/PR1153/System/ECMWF/RunningHarmonieOnAtos/index.html b/previews/PR1153/System/ECMWF/RunningHarmonieOnAtos/index.html index 8c2323909..92ee15944 100644 --- a/previews/PR1153/System/ECMWF/RunningHarmonieOnAtos/index.html +++ b/previews/PR1153/System/ECMWF/RunningHarmonieOnAtos/index.html @@ -18,4 +18,4 @@ git commit --author "Name <name@host>" -m "Commit message" git push --set-upstream origin <feature/branch_name>

    Specifying --set-upstream origin <feature/branch_name> to git push is only necessary the first time you push your branch to the remote. When ready, you can go to GitHub and make a pull request to the Harmonie repository from your fork.

    Start your experiment

    Launch the experiment by giving start time, DTG, end time, DTGEND

    ./Harmonie start DTG=YYYYMMDDHH DTGEND=YYYYMMDDHH
     # e.g., ./Harmonie start DTG=2022122400 DTGEND=2022122406

    If successful, Harmonie will identify your experiment name and start building your binaries and running your forecast. If not, you need to examine the ECFLOW log file $HM_DATA/ECF.log. $HM_DATA is defined in your Env_system file. At ECMWF $HM_DATA=$SCRATCH/hm_home/$EXP where $EXP is your experiment name. Read more about where things happen further down.

    Continue your experiment

    If your experiment has completed successfully and you would like to continue for another period, run

    ./Harmonie prod DTGEND=YYYYMMDDHH

    By using prod you tell the system that you are continuing the experiment, using the first guess from the previous cycle. The start date is taken from the file progress.log created in your $HOME/hm_home/my_exp directory. If you had used start, the initial data would have been interpolated from the boundaries, a cold start in other words.
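As an illustrative sketch, prod picks up the start date from progress.log roughly like this (the DTG value and the exact file format are assumptions, not taken from this page):

```shell
# Simulate an experiment directory; in reality this is $HOME/hm_home/my_exp.
EXPDIR=$(mktemp -d)
echo "DTG=2022122406" > "$EXPDIR/progress.log"   # written by the previous cycle (format assumed)
. "$EXPDIR/progress.log"                         # a prod run reads its start date from here
echo "Continuing from DTG=$DTG"
```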

    Start/Restart of ecflow_ui

    To start the graphical window for ECFLOW

    ./Harmonie mon

    The graphical window runs independently of the experiment and can be closed and restarted again with the same command. With the graphical interface you can control and view logfiles of each task.

    Making local changes

    Very soon you will find that you need to make changes in a script or in the source code. Once you have identified which file to edit, put it into your current $HOME/hm_home/my_exp directory, with exactly the same subdirectory structure as in the reference. E.g., if you want to modify a namelist setting

    ./Harmonie co nam/harmonie_namelists.pm   # retrieve default namelist harmonie_namelists.pm
    -vi nam/harmonie_namelists.pm              # modify the namelist

    Next time you run your experiment the changed file will be used. You can also make changes in a running experiment: make the change you wish and rerun the InitRun task from the viewer. The InitRun task copies all files from your local experiment directory to your working directory $HM_DATA. Once your InitRun task is complete you can rerun the task you are interested in. If you wish to recompile something you will also have to rerun the Build tasks.

    Issues

    Harmonie experiments stopping at ECMWF (Atos) due to a $PERM mounting problem: https://github.com/Hirlam/Harmonie/issues/628

    Account

    In order to change the billing account, open Env_submit and find the definition of scalar_job. Then add a line like

    'ACCOUNT' => $submit_type.' --account=account_name' to the definition of the dictionary.

    Directory structure

    $SCRATCH

    In $SCRATCH/hm_home/$EXP you will find

    Directory: Content
    bin: Binaries
    lib: Source code synced from $HM_LIB and compiled code
    lib/src: Object files and source code (if you build with makeup, set by MAKEUP_BUILD_DIR)
    lib/util: Utilities such as makeup, gl_grib_api or oulan
    climate: Climate files
    YYYYMMDD_HH: Working directory for the current cycle. If an experiment fails it is useful to check the IFS log file, NODE.001_01, in the working directory of the current cycle. The failed job will be in a directory called something like Failed_this_job.
    archive: Archived files. A YYYY/MM/DD/HH structure for per-cycle data. ICMSHHARM+NNNN and ICMSHHARM+NNNN.sfx are the atmospheric and SURFEX forecast output files
    extract: Verification input data. This is also stored on the permanent disk $HPCPERM/HARMONIE/archive/$EXP/parchive/archive/extract
    ECF.log: Log of job submission

    ECFS

    $PERM

    Directory: Content
    HARMONIE/$EXP: ecflow log and job files
    hm_lib/$EXP/lib: Scripts, config files, ecf and suite, source code (not compiled, set by $HM_LIB). Reference with experiment's changes on top

    $HPCPERM

    In $HPCPERM/hm_home/$EXP

    Directory: Content
    parchive/archive/extract/: Verification input data

    $HOME on ecflow-gen-${user}-001

    Directory: Content
    ecflow_server/: ecFlow checkpoint and log files

    Cleanup of old experiments

    Danger

    These commands may not work properly in all versions. Do not run the removal until you are sure it is OK.

    Once you have completed your experiment you may wish to remove code, scripts and data from the disks. Harmonie provides some simple tools for this. First check the contents of the different disks by

    Harmonie CleanUp -ALL

    Once you have convinced yourself that this is OK you can proceed with the removal.

    Harmonie CleanUp -ALL -go 

    If you would like to exclude the data stored on e.g. ECFS (at ECMWF), or more generally the data stored under HM_EXP (as defined in Env_system), run

    Harmonie CleanUp -d

    to list the directories intended for cleaning. Again, convince yourself that this is OK and proceed with the cleaning by

    Harmonie CleanUp -d -go

    You can always remove the data from ECFS directly by running e.g.

    erm -R ec:/YOUR_USER/harmonie/EXPERIMENT_NAME 

    or

    erm -R ectmp:/YOUR_USER/harmonie/EXPERIMENT_NAME 

    Debugging Harmonie with ARM DDT

    Follow the instructions here, using "Run DDT client on your Personal Computer or End User Device".

    +vi nam/harmonie_namelists.pm # modify the namelist

    diff --git a/previews/PR1153/System/GitDeveloperDocumentation/index.html b/previews/PR1153/System/GitDeveloperDocumentation/index.html index fa8fbe8c4..2c4192930 100644 --- a/previews/PR1153/System/GitDeveloperDocumentation/index.html +++ b/previews/PR1153/System/GitDeveloperDocumentation/index.html @@ -33,4 +33,4 @@ remote:
  • Follow this link

  • Once the pull request has been approved by the System-core team it will be merged into the dev-CY46h1 branch

  • Further information is available here

    Moving my branches from hirlam.org

    1. Add your hirlam.org fork as a remote (HLUSER is your hirlam.org username)

      cd $HOME/git/github/USER/Harmonie
       git remote add hirlamorgfork https://git.hirlam.org/users/HLUSER/Harmonie
       git fetch hirlamorgfork
    2. For each branch BRANCHNAME you want to move to github

      git checkout -t hirlamorgfork/BRANCHNAME
      -git push origin BRANCHNAME

    Learn Git Branching is an excellent interactive tool for understanding git.

    Coding Standards

    See Coding standards for Arpège, IFS and Aladin and Arpege/IFS Fortran coding standard (requires ECMWF account)

    +git push origin BRANCHNAME

    diff --git a/previews/PR1153/System/HarmonieTestbed/index.html b/previews/PR1153/System/HarmonieTestbed/index.html index 57b007c3d..04cd333d6 100644 --- a/previews/PR1153/System/HarmonieTestbed/index.html +++ b/previews/PR1153/System/HarmonieTestbed/index.html @@ -122,4 +122,4 @@ [ Status: OK] For more details please check /scratch/hlam/hm_home/testbed_ECMWF.atos.gnu_12414/testbed_comp_12414.log_details -

    All the logs from a testbed experiment are posted to the mail address MAIL_TESTBED set in ecf_config_exp.h. If a github token GH_TOKEN is set in scr/Testbed_comp the results will also be posted on Testbed output discussions on github using the GraphQL API. See github settings to create a token, ticking at least the repo box. Save your token in $HOME/.ssh/gh_testbed.token or in $HOME/env/gh_testbed.token, chmod it to 600, and it will be used. The test returns three different status signals.
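For example, a minimal sketch of saving the token (the token value below is a placeholder, not a real token):

```shell
mkdir -p "$HOME/.ssh"
# Placeholder token; paste your own GitHub token with the repo scope ticked.
printf '%s\n' "ghp_EXAMPLETOKEN" > "$HOME/.ssh/gh_testbed.token"
chmod 600 "$HOME/.ssh/gh_testbed.token"
```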

    In addition to the summary information, detailed information about the nature of the differences can be found in the archive.

    When to use the testbed

    It is recommended to use the testbed when adding new options or making other changes to the configurations. If your new option is not activated, the result compared with the reference experiment should be the same; if not, you have to start debugging. When changing things for one configuration it is easy to break other ones. In such cases the testbed is a very good tool to make sure you haven't destroyed anything.

    +

    diff --git a/previews/PR1153/System/Local/QuickStartLocal/index.html b/previews/PR1153/System/Local/QuickStartLocal/index.html index 751bee3a8..342807f32 100644 --- a/previews/PR1153/System/Local/QuickStartLocal/index.html +++ b/previews/PR1153/System/Local/QuickStartLocal/index.html @@ -12,4 +12,4 @@ PATH_TO_HARMONIE/config-sh/Harmonie prod DTGEND=YYYYMMDDHH LL=12

    By using prod you tell the system that you are continuing the experiment, using the first guess from the previous cycle. The start date is taken from the file progress.log created in your $HOME/hm_home/my_exp directory. If you had used start, the initial data would have been interpolated from the boundaries, a cold start in other words.

    Start/Restart of ecflow_ui

    To start the graphical window for ecFlow on ECMWF type

    cd $HOME/hm_home/my_exp
     PATH_TO_HARMONIE/config-sh/Harmonie mon

    The graphical window, mXCdp, runs independently of the mSMS job and can be closed and restarted again with the same command. With the graphical interface you can control and view the logfiles of each task.

    Making local changes

    Very soon you will find that you need to make changes in a script or in the source code. Once you have identified which file to edit, put it into your current $HOME/hm_home/my_exp directory, with exactly the same subdirectory structure as in the reference. E.g., if you want to modify a namelist setting

    cd $HOME/hm_home/my_exp
     PATH_TO_HARMONIE/config-sh/Harmonie co nam/harmonie_namelists.pm         # retrieve default namelist harmonie_namelists.pm
    -vi nam/harmonie_namelists.pm                        # modify the namelist

    Next time you run your experiment the changed file will be used. You can also make changes in a running experiment: make the change you wish and rerun the InitRun task in the mXCdp window. The InitRun task copies all files from your local experiment directory to your working directory $HM_DATA. Once your InitRun task is complete you can rerun the task you are interested in. If you wish to recompile something you will also have to rerun the Build tasks. Read more about how to control and rerun tasks in mini-SMS from mXCdp.

    Directory structure

    On most platforms HARMONIE compiles and produces all its output data under $HM_DATA (defined in ~/hm_home/my_exp/Env_system)

    Description: Location
    Binaries: $BINDIR (set in ecf/config_exp.h), default is $HM_DATA/bin
    Libraries, object files & source code: $HM_DATA/lib/src if MAKEUP=yes, $HM_DATA/gmkpack_build if MAKEUP=no
    Scripts: $HM_LIB/scr
    Config files (Env_system & Env_submit): $HM_LIB, linked to files in $HM_LIB/config-sh
    ecf scripts and main config: $HM_LIB/ecf
    ecFlow suite definitions: $HM_LIB/suites
    Utilities such as gmkpack, gl & monitor: $HM_DATA/lib/util
    Climate files: $HM_DATA/climate
    Working directory for the current cycle: $HM_DATA/YYYYMMDD_HH
    Archived files: $HM_DATA/archive
    Archived cycle output: $HM_DATA/archive/YYYY/MM/DD/HH
    Archived log files: $HM_DATA/archive/log/HM_TaskFamily_YYYYMMDDHH.html where TaskFamily=MakeCycleInput,Date,Postprocessing
    Task log files: $JOBOUTDIR (set in Env_system), usually $HM_DATA/sms_logfiles
    Verification data (vfld/vobs/logmonitor): $HM_DATA/archive/extract
    Verification (monitor) results: $HM_DATA/archive/extract/WebgraF
    "Fail" directory: $HM_DATA/YYYYMMDD_HH/Failed_Family_Task (look at ifs.stat, NODE.001_01, fort.4)

    Archive contents

    $HM_DATA/archive/YYYY/MM/DD/HH is used to store "archived" output from HARMONIE cycles. The level of archiving depends on ARSTRATEGY in ecf/config_exp.h. The default setting is medium, which will keep the following cycle data:

    Cleanup of old experiments

    Once you have completed your experiment you may wish to remove code, scripts and data from the disks. Harmonie provides some simple tools for this. First check the contents of the different disks by

     Harmonie CleanUp -ALL

    Once you have convinced yourself that this is OK you can proceed with the removal.

     Harmonie CleanUp -ALL -go 

    If you would like to exclude the data stored under HM_DATA (as defined in Env_system), run

     Harmonie CleanUp -d

    to list the directories intended for cleaning. Again, convince yourself that this is OK and proceed with the cleaning by

     Harmonie CleanUp -d -go

    NOTE that these commands may not work properly in all versions. Do not run the removal until you are sure it is OK.

    +vi nam/harmonie_namelists.pm # modify the namelist

    diff --git a/previews/PR1153/System/MFaccess/index.html b/previews/PR1153/System/MFaccess/index.html index 9bba70c77..c7e5dc790 100644 --- a/previews/PR1153/System/MFaccess/index.html +++ b/previews/PR1153/System/MFaccess/index.html @@ -94,4 +94,4 @@ [whelane@merou ~]$

    Access to (read-only) MF git arpifs git repository

    MF uses ssh keys to allow access to their read-only git repository. If approved by the HIRLAM System PL, you should request access to the repository by sending a request e-mail, with your ssh public key attached, to Eric Escaliere, cc'ed to Daniel Santos and Claude Fischer.
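As a convenience, once access has been granted you can point ssh at the key you sent. This ~/.ssh/config entry is a sketch: the identity file name is made up, and reader054 is the account shown in the clone command below.

```shell
mkdir -p "$HOME/.ssh"
# Append a host entry so git picks the right user and key automatically.
cat >> "$HOME/.ssh/config" <<'EOF'
Host git.cnrm-game-meteo.fr
    User reader054
    IdentityFile ~/.ssh/id_ed25519_mf_arpifs
EOF
```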

    Once you have been given access you can create a local clone by issuing the following commands:

    cd $HOME
     mkdir arpifs_releases
     cd arpifs_releases
    -git clone ssh://reader054@git.cnrm-game-meteo.fr/git/arpifs.git

    Happy gitting!

    +git clone ssh://reader054@git.cnrm-game-meteo.fr/git/arpifs.git

    diff --git a/previews/PR1153/System/ReleaseProcess/index.html b/previews/PR1153/System/ReleaseProcess/index.html index 098d7a919..6cf06df1d 100644 --- a/previews/PR1153/System/ReleaseProcess/index.html +++ b/previews/PR1153/System/ReleaseProcess/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Harmonie release process

    This page describes the release process for tagging new Harmonie versions

    Harmonie repository organization

    In the past we used the concept of trunk (svn) or develop (git) for the development of Harmonie-Arome codes. Since CY46 we decided to use dev-CYXXhX as the development branch, to be clearer about the Harmonie version under development.

    Harmonie-AROME naming convention

    The development of a Harmonie-Arome version starts from a common T (Toulouse) cycle of the ACCORD consortium, taken from the IAL repository.

    • The naming convention uses the number of the T cycle used as base.
    • The h letter indicates that it is, or will be, a Harmonie-Arome CSC that differs from the base T code version.
    • The first number after the h refers to the version of the T cycle used as base (e.g. CY46T1 is used as base for dev-CY46h1).

    Tagging

    In Hirlam, various tags are made prior to official releases to provide user communities with a 'frozen' code set, even though the code has not necessarily been fully validated. These codes are often labelled as alpha, beta or rc.

    • Alpha release (e.g. harmonie-46h1.alpha.1): a snapshot of the dev branch which is neither technically nor meteorologically mature
    • Beta release (e.g. harmonie-46h1.beta.1): a snapshot of the dev branch which is deemed technically mature for evaluation and meteorological validation; there may, however, still be more features to add
    • Target releases (e.g. harmonie-43h2.2.target.2 and harmonie-43h2.2.target.3): pre-release tagging for final meteorological evaluation
    • Release candidate (e.g. harmonie-43h2.2.rc1): pre-release tagging for final evaluation
    • Official release (e.g. harmonie-43h2.2): mature for operational use
      • The second number refers to the number of the Harmonie-Arome release that has been technically and meteorologically quality assured
      • A third number may appear in the name for a minor update, technical release necessities or other aspects (e.g. harmonie-43h2.2.1)
      • Some bugfix branches may also be active, using bf in the naming (e.g. harmonie-43h2.2_bf)
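The naming scheme above can be illustrated with a small shell sketch (this helper is purely illustrative and not part of Harmonie):

```shell
# Classify a Harmonie tag name by the release-stage markers described above.
classify_tag() {
  case "$1" in
    *alpha*)  echo "alpha" ;;
    *beta*)   echo "beta" ;;
    *target*) echo "target" ;;
    *rc*)     echo "release candidate" ;;
    *)        echo "official release" ;;
  esac
}
classify_tag harmonie-46h1.beta.1    # prints "beta"
classify_tag harmonie-43h2.2         # prints "official release"
```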
    +

    diff --git a/previews/PR1153/System/StandaloneOdb/index.html b/previews/PR1153/System/StandaloneOdb/index.html index 77cc93145..2ec86166b 100644 --- a/previews/PR1153/System/StandaloneOdb/index.html +++ b/previews/PR1153/System/StandaloneOdb/index.html @@ -68,4 +68,4 @@ cp -r /home/ms/ie/dui/odbMacroTest . cd odbMacroTest metview4 -b odbmap.mv4 conv201312.odb "obsvalue" "andate=20131225 and antime=120000 and varno=39" legon png -xv odbmap.1.png +xv odbmap.1.png diff --git a/previews/PR1153/System/TheHarmonieScript/index.html b/previews/PR1153/System/TheHarmonieScript/index.html index 8feddbc95..ce0ceb2a6 100644 --- a/previews/PR1153/System/TheHarmonieScript/index.html +++ b/previews/PR1153/System/TheHarmonieScript/index.html @@ -17,4 +17,4 @@ # unless the / is preceded by ~ (which will be removed). # Hence, to remove e.g. all analyses from 1995, use 1995/an, # which translates to 1995[0-9][0-9]*_*/an* -# (to be precise: use: CleanUp("REMOVE:1995/an", "-go"); +# (to be precise: use: CleanUp("REMOVE:1995/an", "-go"); diff --git a/previews/PR1153/System/UpdateNamelists/index.html b/previews/PR1153/System/UpdateNamelists/index.html index 85e9442d6..205ce21e9 100644 --- a/previews/PR1153/System/UpdateNamelists/index.html +++ b/previews/PR1153/System/UpdateNamelists/index.html @@ -11,4 +11,4 @@ Create namelist hash 4dvar.pm Create updated empty namelist hash empty_4dvar.pm for 4dvar

    We have now created a perl module for the new namelists: one with empty namelist entries, 4dvar_empty.pm, and one with all namelists in the right format, 4dvar.pm. To get one of your namelists back (sorted) you can write:

    ./gen_namlist.pl -n 4dvar_empty.pm -n 4dvar.pm namscreen_dat_4d

    To get the module integrated into the system it has to be merged with the conventions in harmonie_namelists.pm, but as a start the full namelists can be used. Copy the new empty*.pm to empty.pm to get the updated list of empty namelists.

    Create the new namelist

    Add the new namelists to the script scr/Get_namelist. In this case we would add a new case for 4dvar

    4dvartraj) 
        NAMELIST_CONFIG="$DEFAULT minimization dynamics ${DYNAMICS} ${PHYSICS} ${PHYSICS}_minimization ${SURFACE} ${EXTRA_FORECAST_OPTIONS} varbc minim4d"
    -    ;;
    + ;; diff --git a/previews/PR1153/System/UsingSubmodulesinHarmonie/index.html b/previews/PR1153/System/UsingSubmodulesinHarmonie/index.html index b82d5e30c..981bd4ccb 100644 --- a/previews/PR1153/System/UsingSubmodulesinHarmonie/index.html +++ b/previews/PR1153/System/UsingSubmodulesinHarmonie/index.html @@ -148,4 +148,4 @@ Changes to be committed: (use "git restore --staged <file>..." to unstage) new file: .gitmodules - new file: <new_submodule_path>

    Then simply commit (git commit) and push (git push).


    More info

    + new file: <new_submodule_path>

    diff --git a/previews/PR1153/Verification/AllobsVerification/index.html b/previews/PR1153/Verification/AllobsVerification/index.html index 0652c8374..0c61d7210 100644 --- a/previews/PR1153/Verification/AllobsVerification/index.html +++ b/previews/PR1153/Verification/AllobsVerification/index.html @@ -5,4 +5,4 @@ gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash});

    allobs Verification

    Introduction

    It is possible to use Screening (NCONF=002) to calculate observation - forecast (O-F) statistics using forecasts of any length as the model first-guess. The ...

    Screening adjustments

    The screening of observations is switched off by adjusting NAMCT0 and setting L_SCREEN_CALL:

    &NAMCT0
       L_SCREEN_CALL=.FALSE.,
    -/

    Inputs

    The inputs for the allobs data extraction are the same as for a regular DA Screening task - observations (ECMA) and a first-guess (short-forecast files - ICMSHHARM+hhhh and ICMSHHARM+hhhh.sfx).

    Note

    The forecasts being verified will need both model state (ICMSHHARM+hhhh) and the "full" SURFEX file (ICMSHHARM+hhhh.sfx) available for the Scextr task. You may need to adjust the VERITIMES and SWRITUPTIMES settings in your ecf/config_exp.h file.

    The following settings are important:

    config_exp.h setting: Description
    SCREXTR: Use Screening (NCONF=002) to produce O-F data
    SCREXTR_TASKS: Number of parallel tasks for O-F extraction
    FGREFEXP: Experiment name for FirstGuess. If set to undef it will use own forecasts
    OBREFEXP: Experiment name for ODBs. If set to undef it will use own ODBs
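A hypothetical ecf/config_exp.h excerpt wiring these settings up (all values illustrative, not defaults taken from the source):

```shell
SCREXTR=yes         # use Screening (NCONF=002) to produce O-F data
SCREXTR_TASKS=4     # number of parallel extraction tasks (illustrative value)
FGREFEXP=undef      # undef => use this experiment's own forecasts as first guess
OBREFEXP=undef      # undef => use this experiment's own ODBs
```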

    Running

    This extraction can be executed as part of a running experiment (with SCREXTR=yes) or using a standalone suite (PLAYFILE=allobsver).

    Output

    The output from the Scrextr task is a CCMA ODB with O-F statistics. This ODB is archived in $HM_DATA/archive/extract/obsver/odb_ver_${FGDTG}_${FCLENSTR}/ where FGDTG is the forecast cycle DTG and FCLENSTR is the forecast length verified. The ODB data is then converted to ODB-2 and sqlite files for use in Harp and other downstream applications using odbcon tools.
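For concreteness, the archive path expands as follows; HM_DATA, FGDTG and FCLENSTR are set by the suite at run time, and the values here are made up:

```shell
HM_DATA=/scratch/user/hm_data/my_exp   # example value
FGDTG=2022122400                       # forecast cycle DTG
FCLENSTR=012                           # forecast length verified
ODB_DIR="$HM_DATA/archive/extract/obsver/odb_ver_${FGDTG}_${FCLENSTR}/"
echo "$ODB_DIR"
```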

    +/

    diff --git a/previews/PR1153/Verification/CommonVerification/index.html b/previews/PR1153/Verification/CommonVerification/index.html index f11b4ddbf..595668ecc 100644 --- a/previews/PR1153/Verification/CommonVerification/index.html +++ b/previews/PR1153/Verification/CommonVerification/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -
    +
    diff --git a/previews/PR1153/Verification/Extract4verification/index.html b/previews/PR1153/Verification/Extract4verification/index.html index 708ba55c1..19714aba4 100644 --- a/previews/PR1153/Verification/Extract4verification/index.html +++ b/previews/PR1153/Verification/Extract4verification/index.html @@ -30,4 +30,4 @@ ... pressure(nlev_temp) val(1:nvar_temp) stid_2 lat lon hgt -...

    The accumulation time allows us to easily include, for example, different precipitation accumulation intervals.

    diff --git a/previews/PR1153/Verification/HARP/index.html b/previews/PR1153/Verification/HARP/index.html index 6512bb112..f7c3ee4a2 100644 --- a/previews/PR1153/Verification/HARP/index.html +++ b/previews/PR1153/Verification/HARP/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    harp

    harp is a set of R packages for the manipulation, analysis, visualization and verification of data from regular grids. The most up-to-date information and tutorials can be found on the website for the 2024 training course

    diff --git a/previews/PR1153/Verification/Obsmon/index.html b/previews/PR1153/Verification/Obsmon/index.html index 20c08226d..99f58ab0b 100644 --- a/previews/PR1153/Verification/Obsmon/index.html +++ b/previews/PR1153/Verification/Obsmon/index.html @@ -6,4 +6,4 @@

    OBSMON

    In 2014 a new version of the observational monitoring system entered the trunk. The first official release containing obsmon was cy38h1.2.

    The obsmon package consists of two components. The first is Fortran-based code that is run, for all active observation types (defined in scr/include.ass), at the post-processing stage of an experiment. It generates statistics from the ODB and stores the data in three SQLite tables (ECMA/CCMA/ECMA_SFC(CANARI)). In addition, the SQLite tables are concatenated into tables in the /ts directory at the end of the run.

    The second component is written in R using the Shiny web application framework. It allows the interactive visualization of the data contained in the SQLite tables produced by the first component of the package. This can be done either offline or via a server daemon (e.g. shiny.hirlam.org).

    For disambiguation, we will hereinafter use the terms "backend" and "frontend" to refer to the first and second components of obsmon, respectively.

    How to turn on backend obsmon?

    Obsmon is enabled by default in ecf/config_exp.h via OBSMONITOR=obstat.

    Note

    If you don't have any log files from the monitored experiment, you should remove plotlog from the OBSMONITOR= string in ecf/config_exp.h
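    A minimal sketch of removing plotlog from the OBSMONITOR string; the colon-separated format shown here is an assumption, so check the actual value in your ecf/config_exp.h:

    ```shell
    # Strip "plotlog" from an OBSMONITOR value; the colon separator is assumed.
    OBSMONITOR="obstat:plotlog"
    OBSMONITOR=$(printf '%s' "$OBSMONITOR" | sed -e 's/:plotlog//' -e 's/^plotlog:*//')
    echo "$OBSMONITOR"
    ```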

    Note

    Make sure that the -DODBMONITOR pre-processor flag is active during compilation of util/monitor. This should only be an issue on untested platforms and is by default enabled on ECMWF.

    How to create statistics and SQLite tables offline/stand-alone:

    If you are running a normal Harmonie experiment with OBSMONITOR=obstat active, the following step is not relevant.

    Two new actions are implemented in the Harmonie script. Instead of start you can write obsmon, and instead of prod you can write obsmonprod. This will use the correct definition file and only do the post-processing. If your ODB files are in another experiment, you can set the variable OBSMON_EXP_ARCHIVE_ROOT to point to the archive directory of the experiment you are monitoring. This approach is used in the operational MetCoOp runs. If you set OBSMON_EXP=label, the runs will be stored in $EXTRARCH/label/. This way you can use the same experiment to monitor all other experiments. The experiments do not need to belong to you as long as you have read permission on them.

    1. as start:
     ${HM_REV}/config-sh/Harmonie obsmon DTG=YYYYMMDDHH DTGEND=YYYYMMDDHH OBSMON_EXP_ARCHIVE_ROOT=PATH-TO-ARCHIVE-DIRECTORY-TO-MONITOR OBSMON_EXP=MY-LABEL
    2. as prod:
     ${HM_REV}/config-sh/Harmonie obsmonprod DTGEND=YYYYMMDDHH OBSMON_EXP_ARCHIVE_ROOT=PATH-TO-ARCHIVE-DIRECTORY-TO-MONITOR OBSMON_EXP=MY-LABEL

    If you want to monitor an experiment stored on ECFS, you should specify OBSMON_EXP_ARCHIVE_ROOT with the full address (ectmp:/$USER/..... or ec:/$USER/...) e.g.

    OBSMON_EXP_ARCHIVE_ROOT=ectmp:/$USER/harmonie/MY-EXP OBSMON_EXP=MY-LABEL

    You can also monitor other users' experiments as long as you have read access to the data.

    How to visualize the SQLite tables using frontend obsmon:

    Download the code from its git repository on GitHub:

    git clone git@github.com:Hirlam/obsmon.git 

    Instructions on how to install, configure and run the code can be found in the file docs/obsmon_documentation.pdf that is shipped with the code.

    How to extend backend obsmon with new observation types

    Step 1: Extract statistics from ODB

    In the scripts you must enable monitoring of your observation type. Each observation type is monitored if active in:

    msms/harmonie.tdf

    The script which calls the obsmon binary is:

    scr/obsmon_stat

    This script sets the correct namelist based on how you define your observation below.

    After the information is extracted, the different SQLite databases are gathered into one big SQLite file in the script:

    scr/obsmon_link_stat

    The observation types gathered by the above script are defined in obtypes in this script:

    util/monitor/scr/monitor.inc
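    Purely as an illustration of this gathering step, a hypothetical obtypes list and its extension with a new type; the real variable format and type names in util/monitor/scr/monitor.inc may differ:

    ```shell
    # Hypothetical obtypes list as a space-separated shell variable; the type
    # names and the variable layout are illustrative, not taken from monitor.inc.
    obtypes="synop ship temp aircraft radar"
    NEWTYPE=mynewobs                 # placeholder name for the new observation type
    obtypes="$obtypes $NEWTYPE"      # the gathering step would then pick it up
    echo "$obtypes"
    ```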

    Then let us introduce the new observation in the obsmon binary. The source code is in

    harmonie/util/monitor

    There are two modules controlling the extraction from ODB:

    mod/module_obstypes.f90
    mod/module_obsmon.F90

    The first module defines and initializes the observation type you want to monitor. The second calls the initialization defined in the first file. The important steps are to introduce the namelist variables and a meaningful definition in the initialization of the observation type.

    The real extraction from ODB is done in

    cmastat/odb_extract.f90

    At the moment two different SQL files are used, one for conventional observations and one for satellites; radar, for example, is handled like TEMP/AIRCRAFT.

    Step 2: Visualize the new observation in shiny (frontend obsmon)

    The logic of which observation types to display is defined in:

    src/observation_definitions.R

    If a new plot is added, the plotting is defined in the files under:

    src/plots
    diff --git a/previews/PR1153/Verification/Verification/index.html b/previews/PR1153/Verification/Verification/index.html index 4f4ffaf8c..5cf530381 100644 --- a/previews/PR1153/Verification/Verification/index.html +++ b/previews/PR1153/Verification/Verification/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -
    +
    diff --git a/previews/PR1153/Visualization/EPyGrAM/index.html b/previews/PR1153/Visualization/EPyGrAM/index.html index cc8f86919..fc4501a0d 100644 --- a/previews/PR1153/Visualization/EPyGrAM/index.html +++ b/previews/PR1153/Visualization/EPyGrAM/index.html @@ -6,4 +6,4 @@
    +domain_maker.py

    Enjoy!

    diff --git a/previews/PR1153/assets/README/index.html b/previews/PR1153/assets/README/index.html index 4b8e848b1..79630624f 100644 --- a/previews/PR1153/assets/README/index.html +++ b/previews/PR1153/assets/README/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    da_graph.svg is created in .github/workflows/documentation.yml. It can be recreated locally by using dot which is part of graphviz

    sudo apt install graphviz
    dot -Tsvg da_graph.dot -o da_graph.svg
    diff --git a/previews/PR1153/index.html b/previews/PR1153/index.html index 7804955d6..4e2b73c34 100644 --- a/previews/PR1153/index.html +++ b/previews/PR1153/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    Harmonie System Documentation

    Welcome to the Harmonie system documentation

    Github

    Contributing

    To update a page:

    • Click the "Edit on Github" button at the top right of the page
    • Edit the markdown file on github.com
    • commit (this creates a new branch in your fork) and start a pull request

    When adding new pages also add them to docs/pages.jl so they appear in the navigation bar.

    To add a reference:

    • Update docs/references.bib using `<Lastname><Year>` as the citation key.
    • Cite paper in markdown using [<Lastname><Year>](@cite)

    Instructions on how to build the system documentation locally are here.

    diff --git a/previews/PR1153/references/index.html b/previews/PR1153/references/index.html index 3a553a877..b91865f01 100644 --- a/previews/PR1153/references/index.html +++ b/previews/PR1153/references/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HQ1BCP3LPJ', {'page_path': location.pathname + location.search + location.hash}); -

    References
