
Raise warning when summary key and no simulator job present in ert config #8666

Merged: 3 commits merged on Oct 2, 2024

Conversation

@DanSava (Contributor) commented Sep 9, 2024

Issue
Resolves #3004

Approach
Raise a warning when a summary key is configured but no simulator job is present in the ert config.

  • PR title captures the intent of the changes, and is fitting for release notes.
  • Added appropriate release note label
  • Commit history is consistent and clean, in line with the contribution guidelines.
  • Make sure tests pass locally (after every commit!)

When applicable

  • When there are user facing changes: Updated documentation
  • New behavior or changes to existing untested code: Ensured that unit tests are added (See Ground Rules).
  • Large PR: Prepare changes in small commits for more convenient review
  • Bug fix: Add regression test for the bug
  • Bug fix: Create Backport PR to latest release

@DanSava DanSava self-assigned this Sep 9, 2024
@DanSava DanSava force-pushed the improve-startup-suggestions branch 2 times, most recently from 7b5264c to dda7735 Compare September 9, 2024 11:06
@codecov-commenter commented Sep 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.38%. Comparing base (e92f558) to head (028269b).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8666   +/-   ##
=======================================
  Coverage   91.38%   91.38%           
=======================================
  Files         342      342           
  Lines       21028    21034    +6     
=======================================
+ Hits        19216    19222    +6     
  Misses       1812     1812           
Flag Coverage Δ
cli-tests 39.67% <100.00%> (+0.06%) ⬆️
gui-tests 73.56% <100.00%> (+<0.01%) ⬆️
performance-tests 50.18% <100.00%> (+0.12%) ⬆️
unit-tests 79.83% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@DanSava DanSava force-pushed the improve-startup-suggestions branch 2 times, most recently from a0f8da8 to d3f000a Compare September 20, 2024 05:54
@yngve-sk (Contributor) left a comment
The logic itself looks good, but I think this should ideally be done in summary_config.py, as we want to avoid config-specific logic in ErtConfig. I think you can just traverse the config_dict in SummaryConfig.from_config_dict to find out whether there is an eclipse or flow forward model step, and expect the tests you introduced to work the same.
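The suggestion above can be sketched in plain Python. This is a hypothetical illustration, not ert's actual code: the real implementation uses ert's ConfigKeys constants and SummaryConfig class, while here a bare dict and a standalone helper (`has_simulator_step`, an assumed name) stand in for them.

```python
# Hypothetical sketch of the reviewer's suggestion: traverse the config
# dict's FORWARD_MODEL entries and check whether any step looks like an
# eclipse/flow simulator run. Names and structure are assumptions; ert's
# real code lives in SummaryConfig.from_config_dict and uses ConfigKeys.

def has_simulator_step(config_dict: dict) -> bool:
    """Return True if any forward model step name contains 'eclipse' or 'flow'."""
    forward_model = config_dict.get("FORWARD_MODEL", [])
    # By convention in the quoted snippet, step[0] is the step's name.
    fm_step_names = [step[0] for step in forward_model]
    return any(
        sim_name in name.lower()
        for name in fm_step_names
        for sim_name in ("eclipse", "flow")
    )

config = {"FORWARD_MODEL": [("ECLIPSE100", []), ("MAKE_SYMLINK", [])]}
print(has_simulator_step(config))  # True: "eclipse" is in "eclipse100"
```

A warning would then be raised when a summary key is configured but this check returns False.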

@DanSava DanSava force-pushed the improve-startup-suggestions branch 3 times, most recently from 62098c7 to b857a58 Compare September 30, 2024 08:42
@DanSava DanSava requested a review from yngve-sk September 30, 2024 08:42
@yngve-sk (Contributor) left a comment

LGTM. Consider the one comment w.r.t. the cosmetic refactor, and also jobs vs. fm steps: I think we mostly want to call everything forward model steps and move away from the "jobs" notation (even though ert config keywords are still directly called INSTALL_JOB etc.).

forward_model = config_dict.get(ConfigKeys.FORWARD_MODEL, [])
names = [step[0] for step in forward_model]
simulation_step_exists = False
for job_name in names:
A reviewer (Contributor) commented on this code:
Would refactor this a bit to something like this:

simulation_step_exists = any(_name for _name in names if _name.lower() in ["eclipse","flow"])

and also not use job_name and stick with fm_step_name or step_name, starting w/ calling the names fm_step_names

@DanSava (Contributor, Author) replied:

simulation_step_exists = any(_name for _name in names if _name.lower() in ["eclipse","flow"])

This looks nice, and it is also equivalent to simulation_step_exists = any(_name.lower() in ["eclipse","flow"] for _name in names), but it does not capture the same logic.

To do everything in one line it should be something like:

any(any(sim_name in _name.lower() for sim_name in ["eclipse","flow"]) for _name in names)
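The distinction the author is making can be shown with a small standalone example (illustrative only; the `names` list here is made up, not from the PR). Exact membership in ["eclipse", "flow"] misses real step names such as "ECLIPSE100", whereas the nested any() performs a substring match:

```python
# Sketch of the two variants discussed above. Step names like "ECLIPSE100"
# should count as simulator steps, so plain list membership is not enough.

names = ["MAKE_DIRECTORY", "ECLIPSE100", "some_postprocessing"]

# Reviewer's variant: exact match. Only true if a step is literally
# named "eclipse" or "flow" (case-insensitively).
exact_match = any(_name.lower() in ["eclipse", "flow"] for _name in names)

# Author's variant: substring match. True if any step name *contains*
# a simulator name.
substring_match = any(
    any(sim_name in _name.lower() for sim_name in ["eclipse", "flow"])
    for _name in names
)

print(exact_match)      # False: "eclipse100" is not equal to "eclipse"
print(substring_match)  # True: "eclipse" is a substring of "eclipse100"
```

This is why the one-liner needs the nested any() rather than the simpler membership test.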

@DanSava DanSava force-pushed the improve-startup-suggestions branch from b857a58 to 028269b Compare October 1, 2024 11:18
@DanSava DanSava merged commit 0774438 into equinor:main Oct 2, 2024
50 checks passed
@DanSava DanSava deleted the improve-startup-suggestions branch October 2, 2024 05:04
Development

Successfully merging this pull request may close these issues.

User feedback when all jobs completed successfully but the realization(s) failing
3 participants