Prerequisites

Chombo and GRTresna require the following libraries/packages/software:

  • GNU Make (make)
    • Note that this should pretty much always be installed on any Unix-like system (e.g. Linux/macOS), so you shouldn't need to worry about this one.
  • C shell (csh)
    • This is usually installed on clusters but may need to be installed on home systems
  • Perl (perl)
    • Again, this should already be installed on most Unix-like systems.
  • C++ compiler
    • We support GCC, the Intel C++ Classic Compiler and compilers based on the LLVM infrastructure (including Clang and the Intel oneAPI DPC++/C++ Compiler).
    • GRTresna uses some C++14 features, so you need a sufficiently recent version (a quick way to check which version your environment provides is sketched below):
      • Intel Classic v17 and later, and GCC v5 and later, have full C++14 support.
      • Slightly older versions with partial C++14 support may work too, but we cannot guarantee they will continue to do so.
    • Note that the Intel compiler uses the standard library headers from GCC by default, so you will also need to ensure you have a sufficiently recent version of GCC if you wish to compile with the Intel compiler.
  • Fortran compiler
    • GRTresna itself does not use Fortran.
    • Chombo, however, uses a heavily macroed dialect of Fortran 77 (called Chombo Fortran), which is preprocessed by the C preprocessor into compliant Fortran code.
    • It is usually best to choose the Fortran compiler corresponding to your C++ compiler (e.g. gfortran with GCC).
  • MPI
    • This isn't compulsory if you are just installing on your own computer to run small tests, but it is necessary for any larger simulations, since MPI is the main way in which GRTresna/Chombo is parallelised.
    • Make sure it is compatible with your C++ compiler: you will need to supply the MPI compiler wrapper (e.g. mpicxx) in place of the plain C++ compiler when building.
  • HDF5
    • Again, this isn't strictly compulsory, but it is the file format in which GRTresna writes its 3D data, so you will almost certainly want it.
    • If building without MPI, you only need a serial version of HDF5.
    • If building with MPI, then you will need a serial version and a parallel version compatible with your MPI library.
  • BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra Package) or Intel MKL (Math Kernel Library)
    • Intel's MKL library provides both the BLAS and LAPACK routines and usually comes with the Intel compiler. If you wish to use Intel MKL with another compiler (e.g. GCC), then you can use the Intel MKL Link Line Advisor to determine the necessary flags to add to the Makefile variables described on the Compiling Chombo page (a sketch is given after this list).
    • It is important to link with the serial (i.e. non-threaded) versions of these libraries, as you may run into problems otherwise.
      • For Intel MKL with the Intel compiler, this is done with the -mkl=sequential flag (or -qmkl=sequential with newer versions).
      • On newer Cray systems with LibSci (which provides BLAS/LAPACK), compiling with OpenMP (and therefore the -fopenmp flag) links the threaded version of LibSci by default. To circumvent this, set the environment variable

        export PE_LIBSCI_OMP_REQUIRES_openmp=""

        after loading the relevant cray-libsci module.
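
As a concrete illustration of the BLAS/LAPACK linking discussed above, here is a minimal sketch of what one might add to Chombo's build configuration. It assumes the syslibflags variable from Chombo's Make.defs.local.template (check the Compiling Chombo page for the variables actually used in your setup), and the exact flags depend on your system; for MKL with GCC, use the Link Line Advisor instead of the lines below:

    # In Chombo/lib/mk/Make.defs.local (variable name as in Make.defs.local.template)
    # Reference BLAS/LAPACK, e.g. from your package manager or cluster modules:
    syslibflags = -lblas -llapack
    # Or, for Intel MKL with a recent Intel compiler (sequential, as recommended above):
    # syslibflags = -qmkl=sequential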

If you are trying to compile on your own computer, then we strongly recommend that you use a package manager to install the prerequisites (e.g. apt on Ubuntu/Debian or Homebrew on macOS). For example, to install the LAPACK library on Ubuntu, you would run the command sudo apt install liblapack-dev.
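
For reference, on Ubuntu a reasonably complete set of packages might look something like the following (the package names are illustrative and can differ between releases; libhdf5-openmpi-dev provides a parallel HDF5 built against Open MPI):

    sudo apt install csh perl g++ gfortran make \
        libopenmpi-dev openmpi-bin \
        libhdf5-dev libhdf5-openmpi-dev \
        libblas-dev liblapack-dev

Once everything is installed (or, on a cluster, once the relevant modules are loaded), you can quickly check what your environment provides. For example, assuming GCC, an MPICH-style wrapper (Open MPI uses --showme rather than -show) and the standard HDF5 wrapper scripts:

    g++ --version        # check the GCC version is recent enough for C++14
    mpicxx -show         # show the underlying compiler the MPI wrapper calls
    h5pcc -showconfig    # print a summary of how (parallel) HDF5 was configured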

Many computer clusters use some form of module command to control the environment. They will probably have all of the prerequisites above installed, but you will need to load them manually (and unload any preloaded conflicting ones). Here are some useful module commands (a typical loading sequence is sketched after the list):

  • module avail [<name>]
    • What modules are available/available with name <name>?
  • module [un]load <module name>
    • Load/unload a module.
  • module swap <old module> <new module>
    • Swap <old module> for <new module>.
  • module list
    • What modules are loaded currently?
  • module display <module name>
    • Tell me more about <module name>: how does it change the environment?
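
As a rough illustration, setting up the environment on a cluster might look something like the following (the module names here are purely illustrative and will differ between systems; use module avail to find the real ones):

    module list                              # see what is loaded already
    module avail hdf5                        # look for an HDF5 module
    module swap PrgEnv-cray PrgEnv-gnu       # e.g. switch programming environment on Cray systems
    module load gcc openmpi hdf5-parallel    # illustrative names only
    module display hdf5-parallel             # check how it changes the environment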

Some newer clusters use spack to manage the installation of modules. On these systems, modules installed with spack will typically be suffixed with a 7-character hash (e.g. hdf5-1.10.4-gcc-5.4.0-7zl2gou on the CSD3 cluster). If there are multiple modules with similar names that differ only by their hash, you can query their dependencies with the command spack find -lvd <name>.
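
For example, to decide which of several similarly named HDF5 modules to load, one might do something like the following (the module name below is just the illustrative CSD3 one from above):

    spack find -lvd hdf5                          # list hdf5 installs with their hashes, variants and dependencies
    module load hdf5-1.10.4-gcc-5.4.0-7zl2gou     # then load the one built with your compiler/MPI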