
appearance takes nearly 11GB of GPU RAM on Tesla T4 #9

Open

wgetdddeb opened this issue Jul 3, 2024 · 2 comments

Comments

@wgetdddeb

I am running the repo on my local GPU (a Tesla T4). The other options generally take around 2 GB of GPU RAM, but the appearance option alone takes nearly 11 GB, even though it uses the same loaded model. Can someone clarify this, or am I missing something? I used the same code as the HuggingFace Space app.
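One plausible explanation (sketched below, not confirmed from the repo's code) is that memory for a fully convolutional model scales with the input's pixel count: a task that downsamples to a fixed small resolution stays cheap, while one that processes the image near full size does not. The channel and layer counts here are illustrative guesses, not DocRes's real architecture; only the H*W scaling matters.

```python
def activation_mb(width, height, channels=64, layers=16, bytes_per=4):
    """Rough activation-memory estimate (MB) for a fully convolutional
    model at a given input size. channels/layers are illustrative
    placeholders; the point is that memory grows with width * height."""
    return width * height * channels * layers * bytes_per / 2**20

# Dewarping runs on a fixed 256x256 input; a full-resolution task sees
# the whole image (e.g. 1600x1600 after the demo's max_size cap).
dewarp = activation_mb(256, 256)
appear = activation_mb(1600, 1600)
ratio = appear / dewarp  # (1600/256)**2, i.e. roughly 39x more memory
```

Under this assumption, a ~39x gap between the 256x256 task and a 1600x1600 one would be expected, which is consistent with 2 GB versus 11 GB being different ballparks.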


@ZZZHANG-jx
Owner

If it's the same image, the memory usage should be comparable for tasks other than the dewarping task (which uses 256x256-resolution input). Could you please try running the inference.py script to see if it has the same issue?

@bleachyin


I ran into the same problem. I saw that on HuggingFace the appearance task uses max_size 1600. After deploying to runpod, if I keep that 1600*1600 limit, the model runs out of GPU memory on larger images. Is this because the intermediate computations in the network are too large?
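If the OOM is driven by input resolution, a common workaround is to shrink the image so its longest side does not exceed a cap before inference. The helper below is a hypothetical sketch (not from the repo); max_size=1600 mirrors the value reported for the HuggingFace appearance demo, and a smaller cap trades some quality for memory.

```python
def capped_size(width, height, max_size=1600):
    """Return (new_width, new_height) with the longest side capped at
    max_size, preserving aspect ratio. Hypothetical helper: resize the
    image to these dimensions before running the model to bound memory."""
    longest = max(width, height)
    if longest <= max_size:
        return width, height  # already small enough, leave untouched
    scale = max_size / longest
    return round(width * scale), round(height * scale)
```

For example, a 3200x2400 scan would be resized to 1600x1200 before being fed to the appearance model; lowering max_size (e.g. to 1024) would reduce memory roughly quadratically.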
