
Debug intel after #8883

Closed
3 changes: 2 additions & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -11,7 +11,8 @@ _Short description of the approach_
- [ ] PR title captures the intent of the changes, and is fitting for release notes.
- [ ] Added appropriate release note label
- [ ] Commit history is consistent and clean, in line with the [contribution guidelines](https://github.com/equinor/ert/blob/main/CONTRIBUTING.md).
-- [ ] Make sure tests pass locally (after every commit!)
+- [ ] Make sure unit tests pass locally after every commit (`git rebase -i HEAD~10 --exec 'pytest tests/unit_tests -n logical -m "not integration_test"'`)

## When applicable
- [ ] **When there are user facing changes**: Updated documentation
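For reviewers unfamiliar with the idiom in the new checklist item: `git rebase -i <base> --exec <cmd>` replays each commit and runs the command after applying it, stopping at the first commit whose command fails. A minimal sketch, assuming you want to verify the last three commits (the `HEAD~3` range is an arbitrary example):

```sh
# Replay the last three commits, running the fast unit-test suite after each one;
# the rebase stops on the first commit whose tests fail.
git rebase -i HEAD~3 --exec 'pytest tests/unit_tests -n logical -m "not integration_test"'
```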
2 changes: 1 addition & 1 deletion .github/workflows/benchmark.yml
@@ -37,7 +37,7 @@ jobs:

- name: Run benchmark
run: |
-pytest tests/unit_tests/analysis/test_es_update.py::test_and_benchmark_adaptive_localization_with_fields --benchmark-json output.json
+pytest tests/performance_tests/test_analysis.py::test_and_benchmark_adaptive_localization_with_fields --benchmark-json output.json

- name: Store benchmark result
uses: benchmark-action/github-action-benchmark@v1
16 changes: 8 additions & 8 deletions .github/workflows/build_and_test.yml
@@ -23,7 +23,7 @@ jobs:
strategy:
fail-fast: false
matrix:
-python-version: [ '3.8', '3.9', '3.10', '3.11', '3.12' ]
+python-version: [ '3.12' ]

uses: ./.github/workflows/build-wheels.yml
with:
@@ -33,8 +33,8 @@
strategy:
fail-fast: false
matrix:
-test-type: [ 'integration-tests', 'unit-tests', 'gui-test' ]
-python-version: [ '3.8', '3.11', '3.12' ]
+test-type: [ 'performance-tests', 'unit-tests', 'gui-tests', 'cli-tests' ]
+python-version: [ '3.12' ]
os: [ ubuntu-latest ]
uses: ./.github/workflows/test_ert.yml
with:
@@ -47,7 +47,7 @@
fail-fast: false
matrix:
os: [ ubuntu-latest ]
-python-version: [ '3.8', '3.11', '3.12' ]
+python-version: [ '3.12' ]
uses: ./.github/workflows/test_ert_with_slurm.yml
with:
os: ${{ matrix.os }}
@@ -58,7 +58,7 @@
strategy:
fail-fast: false
matrix:
-test-type: [ 'integration-tests', 'unit-tests', 'gui-test' ]
+test-type: [ 'performance-tests', 'unit-tests', 'gui-tests', 'cli-tests' ]
python-version: [ '3.8', '3.12' ]
os: [ 'macos-13', 'macos-14', 'macos-14-large']
exclude:
@@ -80,9 +80,9 @@
strategy:
fail-fast: false
matrix:
-test-type: [ 'integration-tests', 'unit-tests', 'gui-test' ]
-python-version: [ '3.12' ]
-os: [ 'macos-latest' ]
+test-type: [ 'performance-tests', 'unit-tests', 'gui-tests', 'cli-tests' ]
+python-version: [ '3.8', '3.11', '3.12' ]
+os: [ 'macos-14-large' ]
uses: ./.github/workflows/test_ert.yml
with:
os: ${{ matrix.os }}
80 changes: 0 additions & 80 deletions .github/workflows/coverage.yml

This file was deleted.

65 changes: 0 additions & 65 deletions .github/workflows/doctest.yml

This file was deleted.
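The doctest run itself is not lost: an equivalent invocation is folded into the unit-test step of `test_ert.yml` below, and the deleted `coverage.yml` is likewise superseded by the Codecov upload steps added there. To run the doctests locally, the command taken from that step is:

```sh
# Run doctests embedded in the source modules, collecting coverage,
# and skip the dark_storage package.
pytest --doctest-modules --cov=ert --cov-report=xml:cov2.xml src/ --ignore src/ert/dark_storage
```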

42 changes: 35 additions & 7 deletions .github/workflows/test_ert.yml
@@ -42,27 +42,55 @@ jobs:
run: |
uv pip install ".[dev]"

-- name: Test GUI
-if: inputs.test-type == 'gui-test'
+- name: GUI Test
+if: inputs.test-type == 'gui-tests'
run: |
-pytest tests --junit-xml=junit.xml -v --mpl -m "requires_window_manager" --benchmark-disable
+pytest --cov=ert --cov-report=xml:cov1.xml --junit-xml=junit.xml -v --mpl --benchmark-disable tests/ui_tests/gui

+- name: CLI Test
+if: inputs.test-type == 'cli-tests'
+run: |
+pytest --cov=ert --cov-report=xml:cov1.xml --junit-xml=junit.xml -n logical -v --benchmark-disable --dist loadgroup tests/ui_tests/cli

- name: Unit Test
if: inputs.test-type == 'unit-tests'
run: |
-pytest tests --junit-xml=junit.xml -n logical --show-capture=stderr -v -m "not integration_test and not requires_window_manager" --benchmark-disable --dist loadgroup
+pytest --cov=ert --cov-report=xml:cov1.xml --junit-xml=junit.xml -n logical --show-capture=stderr -v --benchmark-disable --dist loadgroup tests/unit_tests
+pytest --doctest-modules --cov=ert --cov-report=xml:cov2.xml src/ --ignore src/ert/dark_storage

-- name: Integration Test
-if: inputs.test-type == 'integration-tests'
+- name: Performance Test
+if: inputs.test-type == 'performance-tests'
run: |
-pytest tests --junit-xml=junit.xml -n logical --show-capture=stderr -v -m "integration_test and not requires_window_manager" --benchmark-disable
+pytest --cov=ert --cov-report=xml:cov1.xml --junit-xml=junit.xml -n logical --show-capture=stderr -v --benchmark-disable --dist loadgroup tests/performance_tests

- name: Test for a clean repository
run: |
# Run this before the 'Test CLI' entry below, which produces a few files that are accepted for now. Exclude the wheel.
git status --porcelain | sed '/ert.*.whl$\|\/block_storage$/d'
+test -z "$(git status --porcelain | sed '/ert.*.whl$\\|\\/block_storage$/d')"

+- name: Upload coverage to Codecov
+id: codecov1
+uses: codecov/codecov-action@v4
+continue-on-error: true
+with:
+token: ${{ secrets.CODECOV_TOKEN }}
+fail_ci_if_error: true
+files: cov1.xml,cov2.xml
+flags: ${{ inputs.test-type }}
+- name: codecov retry sleep
+if: steps.codecov1.outcome == 'failure'
+run: |
+sleep 30
+- name: Codecov retry
+uses: codecov/codecov-action@v4
+if: steps.codecov1.outcome == 'failure'
+with:
+token: ${{ secrets.CODECOV_TOKEN }}
+files: cov1.xml,cov2.xml
+flags: ${{ inputs.test-type }}
+fail_ci_if_error: ${{ github.ref == 'refs/heads/main' }}

- uses: test-summary/action@v2
continue-on-error: true
with:
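A note on the clean-repository step: the pre-existing `git status --porcelain | sed ...` line only prints the offending files (a pipeline's exit status is that of its last command, and `sed` exits 0 either way), so on its own it can never fail the job. The added `test -z` line is what turns leftovers into a failure. A standalone sketch of the same logic:

```sh
# List modified/untracked files, filtering out the wheel and block_storage
# artifacts that are accepted for now.
leftovers=$(git status --porcelain | sed '/ert.*.whl$\|\/block_storage$/d')
echo "$leftovers"

# test -z succeeds only for an empty string, so any unexpected leftover
# file makes this command (and hence the CI step) exit non-zero.
test -z "$leftovers"
```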
2 changes: 1 addition & 1 deletion .github/workflows/test_ert_with_slurm.yml
@@ -69,7 +69,7 @@ jobs:
run: |
set -e
export _ERT_TESTS_ALTERNATIVE_QUEUE=AlternativeQ
-pytest tests/integration_tests/scheduler --slurm
+pytest tests/unit_tests/scheduler --slurm

- name: Test poly-example on slurm
run: |
42 changes: 26 additions & 16 deletions CONTRIBUTING.md
@@ -6,7 +6,32 @@ The following is a set of guidelines for contributing to ERT.

1. Automatic code formatting is applied via pre-commit hooks. You
can see how to set that up [here](https://pre-commit.com/).
-1. All code must be testable and unit tested.
+2. All code must be testable and unit tested.
+
+## Test categories
+
+Tests that are in the `tests/unit_tests` directory and are
+not marked with `integration_test` are meant to be exceptionally
+fast and reliable, so that one can run them while iterating
+on the code. This means special care has to be taken when
+placing tests here.
+
+### Integration tests
+
+By "integration test" we simply mean unit tests that did not quite
+cut it, either because they are too slow, too unreliable, or have
+error messages that are too hard to understand.
+
+### UI tests
+
+These tests are meant to test behavior from a user interaction view, to
+ensure that the application behaves the way the user expects independently
+of code changes. We have two user interfaces, the CLI and the GUI, so each
+has its own subdirectory.
+
+### Performance tests
+
+Tests that verify that runtime and memory performance do not degrade.

## Commits

@@ -63,18 +88,3 @@ noise in the review process.
* rebase onto base branch if necessary,
* squash whatever still needs squashing, and
* [fast-forward](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-linear-history) merge.

-### Build documentation
-
-You can build the documentation after installation by running
-```bash
-pip install ".[dev]"
-sphinx-build -n -v -E -W ./docs ./tmp/ert_docs
-```
-and then open the generated `./tmp/ert_docs/index.html` in a browser.
-
-To automatically reload on changes you may use
-
-```bash
-sphinx-autobuild docs docs/_build/html
-```
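To make the test categories introduced above concrete, they map onto pytest invocations as follows; these commands are assembled from the paths and the `integration_test` marker used elsewhere in this PR:

```sh
# Fast, reliable tests, suitable for every local iteration
pytest -n logical tests/unit_tests -m "not integration_test"

# The slower or flakier tests kept under tests/unit_tests
pytest tests/unit_tests -m "integration_test"

# UI tests, one directory per user interface
pytest tests/ui_tests/cli
pytest tests/ui_tests/gui

# Performance regression tests
pytest tests/performance_tests
```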
21 changes: 21 additions & 0 deletions README.md
@@ -61,6 +61,13 @@ pip install -e ".[dev]"
pytest tests/
```

+There are many kinds of tests in the `tests` directory. While iterating on your
+code, you can run a fast subset of the tests with
+
+```sh
+pytest -n logical tests/unit_tests -m "not integration_test"
+```

[Git LFS](https://git-lfs.com/) must be installed to get all the files. This is packaged as `git-lfs` on Ubuntu, Fedora or macOS Homebrew. For Equinor RGS node users, it is possible to use `git` from Red Hat Software Collections:
```sh
source /opt/rh/rh-git227/enable
@@ -75,6 +82,20 @@ If you checked out submodules without having git lfs installed, you can force gi
git submodule foreach "git lfs pull"
```

+### Build documentation
+
+You can build the documentation after installation by running
+```bash
+pip install ".[dev]"
+sphinx-build -n -v -E -W ./docs ./tmp/ert_docs
+```
+and then open the generated `./tmp/ert_docs/index.html` in a browser.
+
+To automatically reload on changes you may use
+
+```bash
+sphinx-autobuild docs docs/_build/html
+```

### Style requirements

2 changes: 1 addition & 1 deletion ci/testkomodo.sh
@@ -53,7 +53,7 @@ start_tests () {
unset OMP_NUM_THREADS

basetemp=$(mktemp -d -p $_ERT_TESTS_SHARED_TMP)
-pytest --timeout=3600 -v --$_ERT_TESTS_QUEUE_SYSTEM --basetemp="$basetemp" integration_tests/scheduler
+pytest --timeout=3600 -v --$_ERT_TESTS_QUEUE_SYSTEM --basetemp="$basetemp" unit_tests/scheduler
rm -rf "$basetemp" || true

popd
2 changes: 1 addition & 1 deletion codecov.yml
@@ -4,4 +4,4 @@ fixes:
comment:
# The code coverage is made up of 4 test runs so only after all coverage
# reports have been uploaded will the comparison be sane
-after_n_builds: 4
+after_n_builds: 16