RuntimeError on main basic example #1
@kjMaru Please provide minimal reproducible code. The following code (like all examples from the current documentation) works for me; perhaps the problem is your PyTorch version:

```python
from stark.interfaces.silero import SileroSpeechSynthesizer

synthesizer = SileroSpeechSynthesizer(model_url='https://models.silero.ai/models/tts/ru/v4_ru.pt')
```

```python
import torch
print(torch.__version__)  # I have 2.0.1
```
I upgraded torch to 2.0.1... but now I get a different error.
This one looks like a different issue. Try using a small Vosk model. Also, please provide the full traceback and minimal reproducible code formatted with Markdown code fences (```) for every new issue/comment. You can also enable syntax highlighting by specifying the language.
You will get this error if you run the basic sample from the documentation.
Traceback:
```
Traceback (most recent call last):
  File "/home/seeker/tmp/./sovetnik.py", line 11, in <module>
    synthesizer = SileroSpeechSynthesizer(model_url='https://models.silero.ai/models/tts/ru/v4_ru.pt')
  File "/home/seeker/.local/lib/python3.10/site-packages/stark/interfaces/silero.py", line 37, in __init__
    self.model = torch.package.PackageImporter(local_file).load_pickle('tts_models', 'model')
  File "/home/seeker/.local/lib/python3.10/site-packages/torch/package/package_importer.py", line 271, in load_pickle
    result = unpickler.load()
  File "/usr/lib/python3.10/pickle.py", line 1213, in load
    dispatch[key[0]](self)
  File "/usr/lib/python3.10/pickle.py", line 1254, in load_binpersid
    self.append(self.persistent_load(pid))
  File "/home/seeker/.local/lib/python3.10/site-packages/torch/package/package_importer.py", line 249, in persistent_load
    loaded_reduces[reduce_id] = func(self, *args)
  File "/home/seeker/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 372, in unpackage_script_module
    cpp_module = torch._C._import_ir_module_from_package(
RuntimeError:
Unknown builtin op: aten::scaled_dot_product_attention.
Here are some suggestions:
    aten::_scaled_dot_product_attention

The original call is:
  File ".data/ts_code/code/torch/torch/nn/functional.py", line 489
    _114 = [bsz, num_heads, src_len0, head_dim]
    v8 = torch.view(v6, _114)
    attn_output5 = torch.scaled_dot_product_attention(q3, k8, v8, attn_mask16, dropout_p0, is_causal)
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _115 = torch.permute(attn_output5, [2, 0, 1, 3])
    _116 = torch.contiguous(_115)
'multi_head_attention_forward' is being compiled since it was called from 'MultiheadAttention.forward'
Serialized File ".data/ts_code/code/torch/torch/nn/modules/activation.py", line 44
    _6 = "The fast path was not hit because {}"
    _7 = "MultiheadAttention does not support NestedTensor outside of its fast path. "
    _8 = torch.torch.nn.functional.multi_head_attention_forward
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _9 = uninitialized(Tuple[Tensor, Tensor])
    _10 = uninitialized(Optional[Tensor])
```
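For anyone else landing here: `aten::scaled_dot_product_attention` was added in PyTorch 2.0, so the v4 Silero package cannot be deserialized by older torch builds. A minimal, stdlib-only sketch of a pre-flight check you could run before loading the model (the helper name is mine, not part of the stark API):

```python
def meets_min_torch(version: str, minimum: tuple[int, int] = (2, 0)) -> bool:
    """Return True if a torch version string like '2.0.1+cu118' is >= minimum.

    Strips the local build suffix (e.g. '+cu118') and compares only
    the (major, minor) components.
    """
    release = version.split("+")[0]
    major_minor = tuple(int(part) for part in release.split(".")[:2])
    return major_minor >= minimum


if __name__ == "__main__":
    # Example: guard model loading on the installed torch version.
    # Assumes torch is installed; the error message suggests the fix
    # that resolved this issue (upgrading to torch >= 2.0).
    import torch  # noqa: E402

    if not meets_min_torch(torch.__version__):
        raise RuntimeError(
            f"torch {torch.__version__} is too old for the v4 Silero model; "
            "upgrade with: pip install --upgrade 'torch>=2.0'"
        )
```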