Problem seeking assistance #40
xiangyuhangkaixin started this conversation in General
Hello, I have a question about the relationship between processing duration and memory utilization. When processing a one-hour WAV file, my server's Python 3 script peaks at 22.3GB of memory. To address this, I've added flags such as `-e PYTORCH_NO_CUDA_MEMORY_CACHING=1` and `--segment 2`, yet the memory footprint stays the same whether I set segment to 1, 2, or 3. The exact command I'm running is:
```bash
docker run --rm -i --name=demucs1 --gpus all \
  -e PYTORCH_NO_CUDA_MEMORY_CACHING=1 \
  -v /data/demucs/input:/data/input \
  -v /data/demucs/output:/data/output \
  -v /data/demucs/models:/data/models \
  xserrat/facebook-demucs:latest \
  "python3 -m demucs -n htdemucs_ft --out /data/output --mp3 --two-stems vocals --segment 2 /data/input/test_60m1.wav"
```
Do you have any suggestions or strategies to alleviate this substantial memory demand during processing?
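For context, the fallback I'm considering (purely my own sketch, not something demucs provides; the 10-minute chunk length and file paths are arbitrary illustrations) is to pre-split the hour-long WAV on the host with Python's standard-library `wave` module and run the container on each shorter piece, assuming peak memory scales with input length:

```python
import os
import wave

def split_wav(path, out_dir, chunk_seconds=600):
    """Split a WAV file into fixed-length chunks; the last chunk may be shorter.

    Returns the list of chunk file paths, in order.
    """
    os.makedirs(out_dir, exist_ok=True)
    chunks = []
    with wave.open(path, "rb") as src:
        params = src.getparams()  # channels, sample width, frame rate, etc.
        frames_per_chunk = params.framerate * chunk_seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = os.path.join(out_dir, f"chunk_{index:03d}.wav")
            with wave.open(out_path, "wb") as dst:
                # nframes in the header is corrected automatically on close.
                dst.setparams(params)
                dst.writeframes(frames)
            chunks.append(out_path)
            index += 1
    return chunks
```

Each resulting chunk could then be mounted into `/data/input` and processed separately, with the stems concatenated afterwards, but that adds bookkeeping I'd rather avoid if the memory usage can be reduced directly.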