GPU memory requirement difference depending on which environment I am on #1029
-
Hello,
However, I still get an OOM message, and now it tells me it needs to allocate 20.04 GiB! I guess I could solve this by adding an additional GPU to my instance, but I just want to understand this situation.
Replies: 3 comments 4 replies
-
Which operation are you trying? Inference or training? If your image is too big, try something smaller. You can also download a sample dataset and try some of those (e.g. Spleen). I can run inference for the segmentation_spleen model on my Windows machine (6 GB GPU).
-
In addition, as another option to try: if you are using SlidingWindowInference, you could set
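To illustrate the idea behind sliding-window inference, here is a minimal, generic numpy sketch (not the MONAI API; the function and parameter names are hypothetical). The point is that only one ROI-sized patch is in flight at a time, and the stitched output lives in host memory, so peak accelerator memory is bounded by the window size rather than the full volume:

```python
import numpy as np

def sliding_window_infer(volume, roi, predictor):
    """Run `predictor` on fixed-size windows and stitch results on the host.

    Hypothetical sketch of the sliding-window idea -- not the MONAI API.
    Only one ROI-sized patch is materialized per call to `predictor`.
    """
    out = np.zeros(volume.shape, dtype=np.float32)     # stitched output (host memory)
    counts = np.zeros(volume.shape, dtype=np.float32)  # overlap counter for averaging
    steps = [max(1, r // 2) for r in roi]              # 50% overlap between windows
    dz, dy, dx = volume.shape
    for z in range(0, dz, steps[0]):
        for y in range(0, dy, steps[1]):
            for x in range(0, dx, steps[2]):
                sl = (slice(z, min(z + roi[0], dz)),
                      slice(y, min(y + roi[1], dy)),
                      slice(x, min(x + roi[2], dx)))
                out[sl] += predictor(volume[sl])
                counts[sl] += 1.0
    return out / counts  # average over overlapping windows

# Usage: an "identity" predictor recovers the input exactly.
vol = np.random.rand(8, 8, 8).astype(np.float32)
res = sliding_window_infer(vol, roi=(4, 4, 4), predictor=lambda patch: patch)
```

In the real library, the analogous knobs are the ROI size, the window batch size, and which device the stitched output is kept on.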
You are using a very large image:
image: (553, 449, 677)
The POST transform (Activationsd) fails because it tries to run on the GPU (for faster performance/user experience).
You can disable running it on the GPU by commenting out
EnsureTyped
in your post transforms. This will be slow, but you will get the result. Or you can try a smaller image first and see that everything works end-to-end with the workflow (which it should).
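A rough back-of-envelope calculation shows why a volume of this shape triggers a ~20 GiB allocation. Assuming float32 tensors (and noting that real usage adds activations, workspace, and fragmentation on top, so this is a lower bound):

```python
# Back-of-envelope GPU memory estimate for the volume from this thread.
# Assumption: float32 tensors; actual usage is higher, so treat as a lower bound.
shape = (553, 449, 677)
voxels = shape[0] * shape[1] * shape[2]
bytes_per_voxel = 4  # float32
one_channel = voxels * bytes_per_voxel  # one full-size, single-channel tensor
gib = one_channel / 2**30
print(f"{voxels:,} voxels -> {gib:.2f} GiB per float32 channel")
# The reported 20.04 GiB allocation corresponds to roughly this many
# channel-sized tensors held at once (multi-class logits, softmax output,
# intermediate copies, ...):
print(f"~{20.04 / gib:.0f} full-size float32 tensors")
```

So even a single full-size float32 tensor is ~0.63 GiB, and post transforms like Activationsd produce several such tensors at once, which is why moving them off the GPU (or shrinking the image) resolves the OOM.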