Visualizing and processing output

Enabling output

To save cross sections or 3D fields, add the following settings to the DALES namelist file. Output is saved in netCDF format by default.

&NAMCROSSSECTION
lcross      = .true.
crossheight = 20,40,80
dtav        = 60
/

crossheight is a list of z-levels for which to save horizontal cross sections; dtav is the output interval in seconds.

&NAMFIELDDUMP
lfielddump  = .true.
dtav        = 60
/

Space savings and enabling compression

The upcoming DALES 4.2 enables compressed netCDF files by switching to the netCDF4 format. The space saving is large, especially for the ql and qr fields, which contain many zeros. Compressed netCDF files have the following quirks:

  • if DALES is interrupted, the files are not readable.
  • the files cannot be read while DALES is running.

To work around these:

  • add periodic calls to NF90_SYNC(ncid) for the open netCDF files
  • export HDF5_USE_FILE_LOCKING=FALSE before trying to view incomplete files (see the example below)
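
For example, a minimal shell sketch for peeking at a compressed file that DALES still has open (the file name is illustrative):

# allow reading a netCDF4/HDF5 file that is still open for writing
export HDF5_USE_FILE_LOCKING=FALSE
# view a cross-section file
ncview crossxy.0001.001.nc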

To save space, consider turning off the output of all fields that are not needed. Fielddump saves many fields by default; for cloud visualization, only ql and possibly qr are needed.
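
To check which fields a fielddump file actually contains, and to extract a single field, a sketch assuming the NCO tools are available (file names are illustrative):

# list the variables in one fielddump tile
ncdump -h fielddump.000.000.001.nc
# copy only ql into a smaller file with ncks from NCO
ncks -v ql fielddump.000.000.001.nc fielddump-ql.nc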

DALES feature wishlist

  • add synchronization calls (added in 4.3, enabled with lsync = .true.; see the sketch after this list)
  • make it possible to select which fields are saved
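
A minimal namelist sketch for enabling the synchronization calls in DALES 4.3 or later, assuming lsync is read from the &NAMNETCDFSTATS group (verify the group name against your version's namelist options):

&NAMNETCDFSTATS
lsync = .true.
/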

Viewing the output

  • ncview - old-fashioned but quick
    • OSX: brew install ncview; also install XQuartz
  • Panoply
  • ParaView - 3D views possible

Merging output

When DALES is run in parallel with MPI, each worker writes its own set of netCDF files (tiles). The following methods can be used to merge the tiles into single large files.

dalesview/merge_grids.py

Convenient, but possibly slow for large, compressed 3D data. Works with Python 2 or 3. For troubleshooting, run without parallelizing (-j 1), because the threading hides error messages.
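
An illustrative invocation, assuming the script takes the experiment directory as an argument (check python merge_grids.py --help for the actual interface):

# run single-threaded so error messages are not hidden
python merge_grids.py -j 1 .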

Dalesview on GitHub

Dalesview on Cartesius

Module set, tested May 2020.

module load pre2019
module load nco/intel/4.6.0
module load netcdf4-python/1.2.9-intel-2016b-Python-2.7.12

CDO

Fast.

cdo -f nc4 -z zip_6 -r -O collgrid crossxy.0001.*.nc crossxy.0001.nc

CDO versions older than 2.0.4 cannot handle variables on different grids at once, e.g. velocity variables together with temperature. A workaround is to merge a single variable at a time. Also, the tiles must be listed in the correct order, otherwise the result is scrambled.

CDO requires the time dimension to have units of the form seconds since 2020-01-02T00:00:00. By default DALES only gives the unit s, and CDO then produces nonsense time values. A simple fix is to add xyear = 2020 to the &DOMAIN namelist - if both xyear and xday are present, a proper time unit is written to the netCDF, as shown below. (There seems to be an off-by-one bug in the date output: day-of-year starts at 1 for the first day of the year.)
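
A minimal &DOMAIN sketch for getting a proper time unit (the values are illustrative; use the actual start date of the run):

&DOMAIN
xyear = 2020
xday  = 1
/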

CDO version >= 1.8 is needed to avoid a problem in collgrid (cdo collgrid (Abort): Variable name 4 not found!). Version >= 2.0.4 is needed to work well with the DALES grids, and version >= 2.0.6 additionally does not lose the Source attribute.

Installing CDO on ECMWF-Atos

Install from source to get a sufficiently new version.

cd $PERM
mkdir src
cd src
wget https://code.mpimet.mpg.de/attachments/download/26823/cdo-2.0.5.tar.gz

module load prgenv/gnu
module load gcc/11.2.0
module load netcdf4/4.7.4

tar -xzf cdo-2.0.5.tar.gz
cd cdo-2.0.5/
./configure --with-netcdf=`nc-config --prefix` --prefix=$PERM/local
make -j 8
make install

Run as $PERM/local/bin/cdo
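
To confirm the new build is the one being picked up, print its version (relevant given the version requirements above):

$PERM/local/bin/cdo -V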

CDO on Cartesius

module load 2019
module load CDO/1.9.5-intel-2018b
NX=`ls cape.x*y000.001.nc | wc -l` 
cdo -f nc4 -z zip_6 -r -O collgrid,$NX `ls fielddump.*.001.nc | sort -t . -k 3` 3d.nc

# can specify a single variable to merge:
cdo -f nc4 -z zip_6 -r -O collgrid,$NX,thlxy `ls crossxy.0001.*.nc | sort -t y -k 3` merged-crossxy-thl.nc
cdo -f nc4 -z zip_6 -r -O collgrid,$NX,twp `ls cape.*.nc | sort -t y -k 2` merged-cape-twp.nc

The first line determines the number of tiles in the X direction by counting files with a fixed y index. The sort commands put the input files in the right order, so that consecutive tiles are adjacent in X.

The flag -P n makes CDO use n threads, which may be faster but sometimes leads to crashes in the netCDF or HDF5 libraries; see the example below.
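
For example, the 3D merge above with 4 threads (drop -P if you see such crashes):

cdo -P 4 -f nc4 -z zip_6 -r -O collgrid,$NX `ls fielddump.*.001.nc | sort -t . -k 3` 3d.nc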

CDO on OSX

Install CDO with Homebrew: brew install cdo. Alessandro reports that the $NX construction above does not work on OSX (possibly shell-dependent: the $ substitution inserts an extra space). Instead, specify the number of tiles in the X direction manually, e.g. collgrid,4,thlxy, written without spaces, as in the example below.
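
An illustrative full command with the tile count written out, assuming 4 tiles in the X direction (adjust the count and file names to your run):

cdo -f nc4 -z zip_6 -r -O collgrid,4,thlxy `ls crossxy.0001.*.nc | sort -t y -k 3` merged-crossxy-thl.nc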

Merging scripts bundled with DALES

dales/utils/

These scripts change the unlimited dimension of the netCDF files, concatenate them, and change the dimension back. They merge only in one direction, since they predate the 2D MPI parallelization, but they can probably be adapted.

ParaView

Don't merge; load all tiles in ParaView. Note that ParaView may show a gap between the tiles; the gap goes away if you switch to point rendering.