-
Hi, the proper way to use the PyTorch model, at least, is to:
PyTorch models behave weirdly, especially under load, when you have several threads working with the same model.
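One common way around shared-model trouble (a sketch of the general idea, not something confirmed in this thread; `load_fn` and `get_model` are my own placeholder names) is to give each thread its own private instance via `threading.local`, so no two threads ever touch the same model object:

```python
import threading

_tls = threading.local()  # one slot per thread

def get_model(load_fn):
    """Return this thread's private model, creating it lazily on first use.

    load_fn is a placeholder for whatever builds the model -- in this
    thread's case that would be the torch.hub.load(...) call.
    """
    if not hasattr(_tls, "model"):
        _tls.model = load_fn()
    return _tls.model
```

Each thread pays the load cost once, but inference never crosses thread boundaries, which sidesteps the shared-state problem at the price of extra memory per thread.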
-
Hi All,
The model works well when the following code is put in the global scope and used from the main thread:
```python
import torch

# USE_ONNX is defined elsewhere in my script (False for the PyTorch model)
model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                              model='silero_vad',
                              force_reload=False,
                              trust_repo=True,
                              onnx=USE_ONNX)
(get_speech_timestamps,
 save_audio,
 read_audio,
 VADIterator,
 collect_chunks) = utils
```
It starts to fail when multiple threads use the model.
If I move the code above into a function that each thread calls, the model runs successfully, except that it emits the following warning:
```
C:\Users\JYe\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1501: UserWarning: operator () profile_node %669 : int[] = prim::profile_ivalue(%667)
does not have profile information (Triggered internally at ..\third_party\nvfuser\csrc\graph_fuser.cpp:108.)
return forward_call(*args, **kwargs)
```
Does anyone know how to get rid of this warning properly, rather than silencing all warnings with warnings.filterwarnings("ignore")? Note that I run the model on a Windows 10 machine.
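If the warning itself can't be eliminated, one narrower option than a global ignore (a sketch, assuming the warning text stays stable across runs; `quiet_call` is my own helper name) is to suppress only that specific `UserWarning`, scoped to the inference call, using `warnings.catch_warnings`:

```python
import warnings

def quiet_call(fn, *args, **kwargs):
    """Call fn with only the nvfuser profile_node UserWarning suppressed.

    catch_warnings restores the previous filter state on exit, so all
    other warnings stay visible outside (and inside) this call.
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=r"operator \(\) profile_node.*",
            category=UserWarning,
        )
        return fn(*args, **kwargs)
```

This keeps the rest of the warning machinery intact, unlike a module-level `filterwarnings("ignore")`, which hides everything.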