Device objects are not cleared on wgpu-native #405

Closed · 2 of 3 tasks

almarklein opened this issue Nov 4, 2023 · 8 comments
Comments

almarklein (Member) commented Nov 4, 2023

This is a potential memory leak. It's not a severe one, because people should probably not create many devices. See the memtests we added in #403. It looks like this is the only object that leaks.

I've looked into this, but have not been able to find a way to fix this in wgpu-py.

Plan / ideas:

  • Wait until we update to wgpu-native 0.18.0.1; with luck it's fixed in a newer version of wgpu-core.
  • Check if this issue is already known. -> There are a few reports of memory leaks in wgpu-core, but not this one in particular.
  • Create a minimal example to reproduce this using wgpu-native directly, and create an issue for it (see the sketch below).
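
As an editorial aside, a minimal reproduction along the lines of the last item could look roughly like the sketch below. This is not the memtest from #403; the `request_adapter_sync`/`request_device_sync` names follow recent wgpu-py releases and may differ in older versions.

```python
# Sketch: create and drop a device in a loop and watch process / GPU memory.
import gc

import wgpu


def create_and_drop_device():
    adapter = wgpu.gpu.request_adapter_sync(power_preference="high-performance")
    device = adapter.request_device_sync()
    # Dropping the Python wrappers should also release the native objects ...
    del device
    del adapter


for _ in range(100):
    create_and_drop_device()
    gc.collect()
    # ... but the native device appears to stay alive, so memory use keeps
    # growing with each iteration.
```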
Vipitis (Contributor) commented Nov 12, 2023

I think I am running into a memory leak as well. After running somewhere between 50 and 100 different shadertoys through the utility (just creating the class, not even rendering a single frame), I can see my VRAM usage grow to about 4 GB, and then the Python process exits. It doesn't throw any exceptions, nor are there any Rust panics; it seems like it just finishes (it's a kernel crash when doing this inside a Jupyter notebook).

It could be due to my Intel A750 GPU and its clearly buggy compute runtime, so I will double-check the behavior on my laptop with an NVIDIA GPU tomorrow.

Is there some method to clear up VRAM apart from killing the Python process?

Korijn (Collaborator) commented Nov 13, 2023

Is the CPU RAM also growing? Or just the VRAM?

almarklein (Member Author) commented:

You could try to modify shadertoy.py in _prepare_render(): where it requests an adapter and then a device, it could use get_default_device() (from wgpu.utils.device). I think that would make sense anyway, and it might just solve your leak?
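
Roughly, that change could look like the sketch below (simplified; the real _prepare_render() does more than this, and the self._device attribute name is an assumption):

```python
from wgpu.utils.device import get_default_device


class Shadertoy:  # simplified stand-in for the class in shadertoy.py
    def _prepare_render(self):
        # Instead of requesting a fresh adapter + device for every instance
        # (each of which would leak a native device), reuse the shared
        # default device provided by wgpu.utils.
        self._device = get_default_device()
```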

Vipitis (Contributor) commented Nov 13, 2023

> Is the CPU RAM also growing? Or just the VRAM?

There was no increase in CPU RAM.

> You could try to modify shadertoy.py in _prepare_render(): where it requests an adapter and then a device, it could use get_default_device() (from wgpu.utils.device). I think that would make sense anyway, and it might just solve your leak?

I changed that specific line as you recommended, and it seems to be an improvement. There is no longer any uncontrolled VRAM rise; it looks steady for the most part. I still encountered a crash after 450 shaders, but I will investigate a bit more.

I can make a PR for that (if there aren't any more changes needed given the other PRs right now).

almarklein (Member Author) commented:

That's good to hear! Perhaps the crash after 450 shaders is not related to the leak. Still odd though. A PR for this part of the fix would be great!

almarklein (Member Author) commented:

Still leaking in wgpu-native 0.18.1.1

almarklein (Member Author) commented:

This will be fixed by #458

Korijn (Collaborator) commented Feb 7, 2024

> This will be fixed by #458

Really!! Fantastic news 🎉
