NOTE: This is work-in-progress and represents a prototype, not a full solution!
IFSBench is a prototype tool that aims to provide Python-based testing and performance benchmarking capabilities for IFS development workflows. It is built from Python wrapper classes and tools that create a set of lightweight benchmark scripts, providing additional features and a more "pythonic" flavour of tooling. The primary planned features are:
- Configurable per-test benchmark scripts with an improved CLI (command-line interface).
- Reference benchmark results are processed as pandas.DataFrame objects and can be stored (and thus version-controlled) in a variety of lightweight formats (e.g., .csv) without the need for complete log files.
- Large benchmark setups (e.g., tl159-fc or tco399-fc) that can symlink, copy or download the necessary input data from pre-defined locations, and thus do not need git(-lfs) or CMake-based symlinking at configure time.
- Ability to parse DrHook profiles (thanks to Iain Miller!) into commonly accessible formats (again based on pandas.DataFrame objects), as well as the traditional text-based output format.
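To illustrate the pandas-based result handling described above, the sketch below shows how a reference benchmark result might be stored in a lightweight, version-controllable .csv file and reloaded later. The column names, values and file name are purely illustrative assumptions, not the actual ifsbench output schema or API:

```python
import pandas as pd

# Hypothetical reference benchmark result: a norm per time step.
# The columns and values here are illustrative only.
reference = pd.DataFrame({
    'step': [1, 2, 3],
    'spectral_norm': [0.251, 0.249, 0.252],
})

# Store the reference in a lightweight format that can be
# version-controlled without keeping complete log files ...
reference.to_csv('reference.csv', index=False)

# ... and reload it later to compare against a new benchmark run.
loaded = pd.read_csv('reference.csv')
assert loaded.equals(reference)
```

Because the data round-trips through a plain-text format, diffs between reference updates stay human-readable in version control.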
Michael Lange ([email protected]), Balthasar Reuter ([email protected]), Johannes Bulin ([email protected])
License: Apache License 2.0. In applying this licence, ECMWF does not waive the privileges and immunities granted to it by virtue of its status as an intergovernmental organisation, nor does it submit to any jurisdiction.
Contributions to ifsbench are welcome. To contribute, please open an issue where a feature request or bug can be discussed. Then create a pull request with your contribution and sign the contributor license agreement (CLA).
For installation instructions, see INSTALL.md.
The code should be checked with pylint in order to conform with the standards specified in .pylintrc:

<build_dir>/ifsbench_env/bin/pylint --rcfile=.pylintrc ifsbench/ scripts/ tests/