
Use the ert plugin system for installing everest forward models #9148

Draft
wants to merge 2 commits into base: main

Conversation


@verveerpj commented Nov 4, 2024

Issue
Closes #9147

Approach
Requires a PR in everest-models: equinor/everest-models#75
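
As a rough, non-authoritative sketch of the direction described here (not code from this PR), a forward model step contributed by a package such as everest-models through ert's plugin system could look like the following. It assumes ert exposes ForwardModelStepPlugin and an installable_forward_model_steps hook; the class name, step name, and executable are illustrative only.

# Hedged sketch: assumes ert's ForwardModelStepPlugin API and the
# installable_forward_model_steps plugin hook; names are made up.
from ert import ForwardModelStepPlugin, plugin


class WellConstraints(ForwardModelStepPlugin):
    def __init__(self):
        # The command is the executable ert runs for this forward model step.
        super().__init__(name="well_constraints", command=["fm_well_constraints"])


@plugin(name="everest_models")
def installable_forward_model_steps():
    # ert's plugin manager collects these classes and installs them as
    # forward model steps.
    return [WellConstraints]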


  • PR title captures the intent of the changes, and is fitting for release notes.
  • Added appropriate release note label
  • Commit history is consistent and clean, in line with the contribution guidelines.
  • Make sure unit tests pass locally after every commit (git rebase -i main --exec 'pytest tests/ert/unit_tests -n logical -m "not integration_test"')

When applicable

  • When there are user facing changes: Updated documentation
  • New behavior or changes to existing untested code: Ensured that unit tests are added (See Ground Rules).
  • Large PR: Prepare changes in small commits for more convenient review
  • Bug fix: Add regression test for the bug
  • Bug fix: Create Backport PR to latest release

@verveerpj added the release-notes:skip and everest labels Nov 4, 2024
@verveerpj self-assigned this Nov 4, 2024
@verveerpj marked this pull request as draft November 4, 2024 14:53
@codecov-commenter commented Nov 4, 2024

❌ 1 Tests Failed:

Tests completed: 2323 | Failed: 1 | Passed: 2322 | Skipped: 55
Top failed test, by shortest run time:
tests/ert/unit_tests/forward_model_runner/test_forward_model_step.py::test_memory_profile_in_running_events
Stack Traces | 8.3s run time
@pytest.mark.integration_test
    @pytest.mark.flaky(reruns=5)
    @pytest.mark.usefixtures("use_tmpdir")
    def test_memory_profile_in_running_events():
        scriptname = "increasing_memory.py"
        with open(scriptname, "w", encoding="utf-8") as script:
            script.write(
                textwrap.dedent(
                    """\
                #!/usr/bin/env python
                import time
                somelist = []
    
                for _ in range(10):
                    # 1 Mb allocated pr iteration
                    somelist.append(b' ' * 1024 * 1024)
                    time.sleep(0.1)"""
                )
            )
        executable = os.path.realpath(scriptname)
        os.chmod(scriptname, stat.S_IRWXU | stat.S_IRWXO | stat.S_IRWXG)
    
        fm_step = ForwardModelStep(
            {
                "executable": executable,
                "argList": [""],
            },
            0,
        )
        fm_step.MEMORY_POLL_PERIOD = 0.01
        emitted_timestamps: List[datetime] = []
        emitted_rss_values: List[Optional[int]] = []
        emitted_oom_score_values: List[Optional[int]] = []
        for status in fm_step.run():
            if isinstance(status, Running):
                emitted_timestamps.append(
                    datetime.fromisoformat(status.memory_status.timestamp)
                )
                emitted_rss_values.append(status.memory_status.rss)
                emitted_oom_score_values.append(status.memory_status.oom_score)
    
        # Any asserts on the emitted_rss_values easily becomes flaky, so be mild:
        assert (
            np.diff(np.array(emitted_rss_values[:-3])) >= 0
            # Avoid the tail of the array, then the process is tearing down
        ).all(), f"Emitted memory usage not increasing, got {emitted_rss_values[:-3]=}"
    
        memory_deltas = np.diff(np.array(emitted_rss_values[7:]))
        if not len(memory_deltas):
            # This can happen if memory profiling is lagging behind the process
            # we are trying to track.
            memory_deltas = np.diff(np.array(emitted_rss_values[2:]))
    
        lenience_factor = 4
        # Ideally this is 1 which corresponds to being able to track every memory
        # allocation perfectly. But on loaded hardware, some of the allocations can be
        # missed due to process scheduling. Bump as needed.
    
>       assert (
            max(memory_deltas) < lenience_factor * 1024 * 1024
            # Avoid the first steps, which includes the Python interpreters memory usage
        ), (
            "Memory increased too sharply, missing a measurement? "
            f"Got {emitted_rss_values=} with selected diffs {memory_deltas}. "
            "If the maximal number is at the beginning, it is probably the Python process "
            "startup that is tracked."
        )
E       AssertionError: Memory increased too sharply, missing a measurement? Got emitted_rss_values=[0, 10223616, 13500416, 16252928, 19529728, 20971520, 22806528, 25034752, 30539776, 32243712, 32636928, 33947648, 33947648, 33947648, 33947648, 34996224, 34996224, 34996224, 34996224, 36044800, 36044800, 36044800, 36044800, 36044800, 37093376, 37093376, 37093376, 38141952, 38141952, 38141952, 38141952, 39190528, 39190528, 39190528, 39190528, 40239104, 40239104, 40239104, 40239104, 41287680, 41287680, 41287680, 41287680, 42336256, 42336256, 42336256, 43384832, 43384832, 43384832, 43384832, 43778048, 0] with selected diffs [  5505024   1703936    393216   1310720         0         0         0
E            1048576         0         0         0   1048576         0         0
E                  0         0   1048576         0         0   1048576         0
E                  0         0   1048576         0         0         0   1048576
E                  0         0         0   1048576         0         0         0
E            1048576         0         0   1048576         0         0         0
E             393216 -43778048]. If the maximal number is at the beginning, it is probably the Python process startup that is tracked.
E       assert 5505024 < ((4 * 1024) * 1024)
E        +  where 5505024 = max(array([  5505024,   1703936,    393216,   1310720,         0,         0,\n               0,   1048576,         0,         0,         0,   1048576,\n               0,         0,         0,         0,   1048576,         0,\n               0,   1048576,         0,         0,         0,   1048576,\n               0,         0,         0,   1048576,         0,         0,\n               0,   1048576,         0,         0,         0,   1048576,\n               0,         0,   1048576,         0,         0,         0,\n          393216, -43778048]))

.../unit_tests/forward_model_runner/test_forward_model_step.py:207: AssertionError


@@ -68,7 +67,7 @@ def get_forward_models(self):

pm = plugin_manager(Plugin1(), Plugin2())

- jobs = list(chain.from_iterable(pm.hook.get_forward_models()))
+ jobs = set(chain.from_iterable(pm.hook.get_forward_models()))

Why set instead of list here? Did you get duplicate forward models showing up?
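
For reference, a small self-contained illustration of the list-vs-set difference, assuming each plugin's get_forward_models() hook returns a list of hashable job names (an assumption, not verified against this PR):

from itertools import chain

# Two hypothetical plugins that both advertise "drill_planner".
per_plugin_jobs = [["well_constraints", "drill_planner"], ["drill_planner"]]

as_list = list(chain.from_iterable(per_plugin_jobs))
as_set = set(chain.from_iterable(per_plugin_jobs))

print(as_list)  # ['well_constraints', 'drill_planner', 'drill_planner'] - duplicates kept
print(as_set)   # {'well_constraints', 'drill_planner'} - duplicates collapsed, order not preserved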

@@ -64,8 +64,6 @@ def flow_config_path():
def get_forward_models():
"""
Return a list of dicts detailing the names and paths to forward models.

Is this hook spec deprecated now? I see only the example doc was removed.
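
For context, a minimal sketch of what an implementation of the older hook spec quoted above might look like, going only by its docstring; the pluggy marker name "everest" and the dict keys are assumptions for illustration:

import pluggy

# Marker name is an assumption; it must match the hook spec's project name.
hookimpl = pluggy.HookimplMarker("everest")


class LegacyForwardModels:
    @hookimpl
    def get_forward_models(self):
        # One dict per forward model: its name and the path to install it from
        # (keys assumed from the docstring above).
        return [{"name": "well_constraints", "path": "/path/to/well_constraints"}]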

@yngve-sk left a comment


Mostly LGTM, only a few small questions. If this runs fine on the GHA workflow against your Everest models PR I'd say go for it

Labels: everest, release-notes:skip (If there should be no mention of this in release notes)
Development

Successfully merging this pull request may close these issues.

Use the ForwardModelStepPlugin functionality to install Everest forward models from everest-models
3 participants