Current behavior

The regression tests implemented in aeolis/tests/regression_tests/ check whether running the simulation for the following cases produces the same netCDF file consistently across other code changes in the repository:
1D/case1_small_waves
1D/case2_larger_waves
1D/case3_erosion_avalanching
2D/Barchan_dune
The test cases include:
check whether a netCDF file is created as part of the simulation
check whether an aeolis.log file is created as part of the simulation
check whether the array shape, dimensions, and values in the produced netCDF file are the same as those stored in a reference output for the same model configuration file.
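The shape/dimension/value comparison described above can be sketched as a small helper. This is an illustrative sketch only; `arrays_match` is a hypothetical name and not the repository's actual code, which reads the variables from the produced and reference netCDF files.

```python
import numpy as np

def arrays_match(produced, reference, rtol=1e-10):
    """Illustrative check that a produced variable matches its reference.

    Returns True only when shape, number of dimensions, and values agree
    within the given relative tolerance.
    """
    produced = np.asarray(produced)
    reference = np.asarray(reference)
    if produced.shape != reference.shape:
        return False
    if produced.ndim != reference.ndim:
        return False
    return bool(np.allclose(produced, reference, rtol=rtol))
```

In the real suite, each variable in the produced netCDF file would be passed through a check like this against the stored reference output.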
Currently, the pytest output doesn't show the pass/fail status of the test cases for each individual case, making it difficult to interpret the test output and to debug failures.
Desired behavior
Display the pass/fail status of each test case per scenario in the pytest output.
Fix
The desired behavior can be achieved by breaking the large test into individual test cases and using parametrization.
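A parametrized version could look like the sketch below. The case list comes from the issue; `run_case` is a hypothetical placeholder for running the model and comparing its output against the stored reference, not the repository's actual helper.

```python
import pytest

# The four regression scenarios listed in the issue.
CASES = [
    "1D/case1_small_waves",
    "1D/case2_larger_waves",
    "1D/case3_erosion_avalanching",
    "2D/Barchan_dune",
]

def run_case(case):
    # Hypothetical placeholder: the real test would run the AeoLiS
    # simulation for this case and compare the produced netCDF file
    # against the reference output.
    return True

@pytest.mark.parametrize("case", CASES)
def test_case_produces_reference_output(case):
    # Parametrization turns each case into its own test item, so pytest
    # reports a separate pass/fail line per scenario, e.g.
    # test_case_produces_reference_output[1D/case1_small_waves].
    assert run_case(case)
```

With this structure, `pytest -v` lists one result per scenario instead of a single pass/fail for the whole suite.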
@niketagrawal, this works well. I committed some changes to the branch to make the tests pass (mainly updating the expected outputs). I don't think we should test for case 3; the erosion module seems a bit buggy still. @ncohn maybe you can comment: is the erosion module robust enough for running generic tests?
The pass/fail status of each parametrized test case can then be displayed by running pytest -v on the command line.