
totalsegmentator changes global torch settings without restoring original state #384

Open
Kenneth-Schroeder opened this issue Nov 7, 2024 · 2 comments


@Kenneth-Schroeder

Hey all,
I noticed that global torch settings are changed when totalsegmentator.python_api.totalsegmentator is called.
They are not restored once the function finishes, which can lead to significant performance degradation in follow-up torch operations.

I observed the following settings being changed:

```
"torch_settings.num_threads": { # torch.set_num_threads(...)
  "old": 8,
  "new": 1
},
"cudnn_settings.benchmark": { # torch.backends.cudnn.benchmark = ...
  "old": false,
  "new": true
}
```

Please make sure that totalsegmentator has no such side effects and either restores the global state or runs isolated in its own process.
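Until this is fixed on the library side, a caller can defensively snapshot and restore these settings itself. A minimal sketch, where the `preserve_attrs` helper is hypothetical (not part of TotalSegmentator or torch) and the commented torch usage assumes the two settings listed above:

```python
import contextlib

@contextlib.contextmanager
def preserve_attrs(obj, names):
    # Snapshot the named attributes of obj, run the body, then restore
    # them even if the body raises.
    saved = {name: getattr(obj, name) for name in names}
    try:
        yield
    finally:
        for name, value in saved.items():
            setattr(obj, name, value)

# Hypothetical usage around the TotalSegmentator call:
#
# import torch
# from totalsegmentator.python_api import totalsegmentator
#
# old_threads = torch.get_num_threads()  # num_threads is set via a
# try:                                   # function, so handle it explicitly
#     with preserve_attrs(torch.backends.cudnn, ["benchmark"]):
#         totalsegmentator(input_path, output_path)
# finally:
#     torch.set_num_threads(old_threads)
```

This only guards the settings you know about; side effects on other globals would still leak through.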

@wasserth
Owner

This is happening somewhere in the nnunet package. I will investigate, but it might take some time. A quick workaround is to call TotalSegmentator from within Python as a shell command via subprocess.call. Not very elegant, but it works.
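For illustration, that subprocess workaround could look like the sketch below. The `run_totalsegmentator` wrapper and its `dry_run` parameter are assumptions for this example, not maintainer-provided code; the `-i`/`-o` flags match TotalSegmentator's command-line interface:

```python
import subprocess

def run_totalsegmentator(input_path, output_dir, extra_args=(), dry_run=False):
    # Build the CLI invocation. Running it in a child process means any
    # changes to torch's global settings die with that process instead of
    # leaking into the caller.
    cmd = ["TotalSegmentator", "-i", str(input_path), "-o", str(output_dir),
           *extra_args]
    if not dry_run:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return cmd
```

The trade-off is the per-call process startup cost and losing direct access to the Python API's return values.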

@wasserth
Owner

I committed a fix here:
1de4511

This is not in master yet. I will merge the branch when a few other features are ready.
