This document summarizes everything you need to know to get started with using or extending the DAPHNE system.
Please ensure that your development system meets the following requirements before trying to build the system.
(*) Treat the version numbers as an orientation rather than a strict requirement. Newer versions should generally work; older versions might work as well.
OS | distribution/version known to work (*) | Comment |
---|---|---|
GNU/Linux | Manjaro | Last checked in January 2023 |
GNU/Linux | Ubuntu 20.04 - 22.10 | All versions in that range work. 20.04 needs CMake installed from Snap. |
GNU/Linux | Ubuntu 18.04 | Used with Intel PAC D5005 FPGA, custom toolchain needed |
MS Windows | 10 Build 19041, 11 | Should work in Ubuntu WSL; using the provided Docker images is recommended |
Installing WSL and Docker should be straightforward using the documentation provided by Microsoft. In an installed WSL container, launching DAPHNE via Docker (see below) should work the same way as in a native installation.
tool/lib | version known to work (*) | comment |
---|---|---|
GCC/G++ | 9.3.0 | Last checked version: 12.2 |
clang | 10.0.0 | |
cmake | 3.20 | On Ubuntu 20.04, install via `sudo snap install cmake --classic` to fulfill the version requirement; apt provides only version 3.16.3. |
git | 2.25.1 | |
libssl-dev | 1.1.1 | Dependency introduced while optimizing grpc build (which used to build ssl unnecessarily) |
libpfm4-dev | 4.10 | This dependency is needed for profiling support [DAPHNE-#479] |
lld | 10.0.0 | |
ninja | 1.10.0 | |
pkg-config | 0.29.1 | |
python3 | 3.8.5 | |
numpy | 1.19.5 | |
pandas | 0.25.3 | |
java (e.g. openjdk) | 11 (1.7 should be fine) | |
gfortran | 9.3.0 | |
uuid-dev | | |
llvm-10-tools | 10, 15 | On Ubuntu 22.04 you may need to install a newer llvm-*-tools version, such as `llvm-15-tools`. |
wget | | Used to fetch additional dependencies and other artefacts |
jq | | JSON command-line processor used in the Docker image generation scripts |
*** | *** | *** |
CUDA SDK | 11.7.1 | Optional for CUDA ops |
OneAPI SDK | 2022.x | Optional for OneAPI ops |
Intel FPGA SDK or OneAPI FPGA Add-On | 2022.x | Optional for FPGAOPENCL ops |
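On a Debian/Ubuntu-based system, most of the non-optional tools and libraries listed above can be installed in one go. The following is only a sketch: the package names are assumptions derived from the table and may differ between releases, and on Ubuntu 20.04 CMake still needs to come from Snap as noted above.

```bash
# hypothetical apt invocation; package names follow the table above
# (e.g., use llvm-15-tools instead of llvm-10-tools on Ubuntu 22.04)
sudo apt install build-essential clang cmake git libssl-dev libpfm4-dev lld \
    ninja-build pkg-config python3 python3-numpy python3-pandas \
    openjdk-11-jdk gfortran uuid-dev llvm-10-tools wget jq
```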
- about 7.5 GB of free disk space to build from source (mostly due to dependencies)
- Optional:
    - NVIDIA GPU for CUDA ops (tested on Pascal and newer architectures); 8 GB for the CUDA SDK
    - Intel GPU for OneAPI ops (tested on Coffee Lake graphics); 23 GB for OneAPI
    - Intel FPGA for FPGAOPENCL ops (tested on the PAC D5005 accelerator); 23 GB for OneAPI
The DAPHNE system is based on MLIR, which is a part of the LLVM monorepo. The LLVM monorepo is included in this repository as a submodule. Thus, clone this repository as follows to also clone the submodule:
```bash
git clone --recursive https://github.com/daphne-eu/daphne.git
```
Upstream changes to this repository might contain changes to the submodule (we might have upgraded to a newer version of MLIR/LLVM). Thus, please pull as follows:
```bash
# in git >= 2.14
git pull --recurse-submodules

# in git < 2.14
git pull && git submodule update --init --recursive

# or use this little convenience script
./pull.sh
```
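The repository's actual `pull.sh` may do more, but conceptually it just wraps the commands above; a minimal sketch could look like this:

```bash
#!/usr/bin/env bash
# minimal sketch of such a convenience script (the real pull.sh may differ):
# pull the main repository, then bring all submodules up to date
git pull
git submodule update --init --recursive
```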
Simply build the system using the build-script without any arguments:
```bash
./build.sh
```
When you do this the first time, or when there were updates to the LLVM submodule, this will also download and build the third-party material, which might increase the build time significantly. Subsequent builds, e.g., when you changed something in this repository, will be much faster.
If the build fails midway (e.g., due to missing packages), several build directories (e.g., daphne, antlr, llvm) may require cleanup. To remove only the build output, use the following two commands:
```bash
./build.sh --clean
./build.sh --cleanDeps
```
If you want to remove downloaded and extracted artifacts, use this:
```bash
./build.sh --cleanCache
```
For convenience, you can call the following to remove all of the above at once:
```bash
./build.sh --cleanAll
```
See this page for more information.
As DAPHNE uses shared libraries, these need to be found by the operating system's loader to link them at runtime. Since most DAPHNE setups will not end up in one of the standard directories (e.g., `/usr/local/lib`), environment variables are a convenient way to set everything up without interfering with system installations (where you might not even have the administrative privileges to do so).
```bash
# from your cloned DAPHNE repo or your otherwise extracted sources/binaries:
export DAPHNE_ROOT=$PWD
export LD_LIBRARY_PATH=$DAPHNE_ROOT/lib:$DAPHNE_ROOT/thirdparty/installed/lib:$LD_LIBRARY_PATH

# optionally, you can add the location of the DAPHNE executable to your PATH:
export PATH=$DAPHNE_ROOT/bin:$PATH
```
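To verify that the loader can now resolve DAPHNE's shared libraries, you can optionally inspect the binary with `ldd`; this is just a quick sanity check, not a required step:

```bash
# optional sanity check: no library should be reported as "not found"
ldd $DAPHNE_ROOT/bin/daphne | grep "not found" || echo "all shared libraries resolved"
```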
If you're running/compiling DAPHNE from a container, you'll most probably *not* need to set these environment variables (unless you have reason to customize your setup; in that case, it is assumed that you know what you are doing).
```bash
./test.sh
```
We use catch2 as the unit test framework. You can use all command line arguments of catch2 with `test.sh`.
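For example, since the arguments are forwarded to catch2, you can print test durations or filter test cases; the tag below is only a placeholder, not an actual tag from the test suite:

```bash
# report how long each test case takes
./test.sh -d yes

# run only the test cases carrying a given catch2 tag (placeholder tag)
./test.sh "[sometag]"
```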
Write a little DaphneDSL script or use `scripts/examples/hello-world.daph` ...

```
x = 1;
y = 2;
print(x + y);

// create a 2x3 matrix of random values between 100.0 and 200.0
// (sparsity 1.0, seed -1)
m = rand(2, 3, 100.0, 200.0, 1.0, -1);
print(m);
print(m + m);
// print the transpose of m
print(t(m));
```
... and execute it as follows:

```bash
bin/daphne scripts/examples/hello-world.daph
```

(Run this from the repository root after building from source; if you use the binary distribution, adjust the path to the `daphne` binary accordingly.)
Optionally, flags like `--cuda` can be added after the `daphne` command and before the script file to activate support for accelerated ops (see the software requirements above and the build instructions). For further flags that can be set at runtime to activate additional functionality, run `daphne --help`.
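For instance, assuming DAPHNE was built with CUDA support enabled (see the build instructions), a run with accelerated ops could look like this:

```bash
# requires a CUDA-enabled build and a suitable NVIDIA GPU
bin/daphne --cuda scripts/examples/hello-world.daph
```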
Building and Running with Containers [Alternative path for building and running the system and the tests]
If you want to avoid installing dependencies and potential conflicts with your existing installed libraries, you may use containers.

- You need to install Docker or Singularity: Docker 20.10.2 or higher, or Singularity 3.7.0-1.el7 or higher, is sufficient.
- You can use the provided Dockerfiles and scripts to create and run DAPHNE.

A full description of containers is available in the containers subdirectory.
The following recreates all images provided by daphneeu:

```bash
cd containers
./build-containers.sh
```
Running in an interactive container can be done with this run script, which takes care of mounting your current directory and handling permissions:
```bash
# please customize this script first
./containers/run-docker-example.sh
```
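If you prefer to invoke Docker directly, a run could look roughly like the sketch below; the image name `daphneeu/daphne` and the invocation are assumptions here, so check the containers/ README for the actual image names and usage:

```bash
# hypothetical direct invocation: mount the current directory and run a script
docker run --rm -it -v "$PWD":/daphne -w /daphne daphneeu/daphne:latest \
    daphne scripts/examples/hello-world.daph
```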
For more about building and running with containers, refer (once again) to the directory `containers/` and its README.md.
For documentation about using containers in conjunction with our cluster deployment scripts, refer to Deploy.md.
As an entry point for exploring the source code, you might want to have a look at the code behind the `daphne` executable, which can be found in `src/api/cli/daphne.cpp`.
On the top level, there are the following directories:

- `bin`: after compilation, generated binaries are placed here (e.g., `daphne`)
- `build`: temporary build output
- `containers`: scripts and configuration files to get/build/run with Docker or Singularity containers
- `deploy`: shell scripts to ease deployment in SLURM clusters
- `doc`: documentation written in markdown (e.g., what you are reading at the moment)
- `lib`: after compilation, generated library files are placed here (e.g., `libAllKernels.so`, `libCUDAKernels.so`, ...)
- `scripts`: a collection of algorithms and examples written in DAPHNE's own domain-specific language (DaphneDSL)
- `src`: the actual source code, subdivided into the individual components of the system
- `test`: test cases
- `thirdparty`: required external software
You might want to have a look at
- the documentation
- the contribution guidelines
- the open issues