For some reason the final transcript is incomplete: it is cut off in the middle of the speech.
I've tried changing the max_tokens and max_new_tokens parameters, but nothing changed.
I also couldn't figure out how to pass the compute type and batch size as parameters.
PretrainedConfig and GenerationConfig don't have such parameters. Could anyone help me?
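For reference, batch size and compute type are typically passed to the pipeline constructor rather than to PretrainedConfig or GenerationConfig, and with Whisper-style models a transcript cut off mid-speech is usually the fixed 30-second audio window rather than a token budget. A minimal sketch, assuming a Whisper checkpoint served through the transformers ASR pipeline (the checkpoint name, audio path, and all numeric values are placeholders):

```python
# Sketch, not a verified fix: checkpoint name and audio path are placeholders,
# and chunk_length_s / batch_size values are illustrative.
pipeline_kwargs = {
    "chunk_length_s": 30,  # enable chunked long-form decoding (30 s windows)
    "batch_size": 8,       # number of audio chunks decoded per forward pass
}
generate_kwargs = {"max_new_tokens": 256}  # token budget per chunk, not per file

if __name__ == "__main__":
    import torch
    from transformers import pipeline

    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-small",  # placeholder checkpoint
        torch_dtype=torch.float16,     # "compute type" is passed here
        **pipeline_kwargs,
    )
    print(asr("audio.wav", generate_kwargs=generate_kwargs)["text"])
```

Without chunk_length_s the pipeline feeds the model only a single 30-second window, which would produce exactly this symptom regardless of max_new_tokens.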
Maybe I'm doing something wrong, but nothing changes. Varying max_new_tokens in both processor.__call__ and model.generate has no effect on the model's behavior.
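If this is a Whisper-style checkpoint, that would be expected: the feature extractor pads or truncates every input to a fixed 30-second window, so no value of max_new_tokens can recover speech beyond it. A sketch of the arithmetic and of a bare generate call, under that assumption (checkpoint name and the synthetic audio are placeholders):

```python
import numpy as np

# Whisper's feature extractor produces a fixed-length input: 30 s at 16 kHz.
SAMPLE_RATE = 16_000
WINDOW_SECONDS = 30
SAMPLES_PER_WINDOW = SAMPLE_RATE * WINDOW_SECONDS  # samples kept per window
# Anything past SAMPLES_PER_WINDOW is dropped before the model ever sees it,
# which is why max_new_tokens in processor.__call__ / model.generate is moot.

if __name__ == "__main__":
    from transformers import WhisperForConditionalGeneration, WhisperProcessor

    name = "openai/whisper-small"  # placeholder checkpoint
    processor = WhisperProcessor.from_pretrained(name)
    model = WhisperForConditionalGeneration.from_pretrained(name)

    # Stand-in for real audio: 60 s of silence; only the first 30 s survive.
    audio = np.zeros(SAMPLES_PER_WINDOW * 2, dtype=np.float32)
    inputs = processor(audio, sampling_rate=SAMPLE_RATE, return_tensors="pt")
    ids = model.generate(inputs.input_features, max_new_tokens=256)
    print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```

For audio longer than one window, chunked decoding (e.g. the ASR pipeline's chunk_length_s) is needed rather than a larger token budget.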
System Info
Who can help?
No response
Information
Tasks
examples folder (such as GLUE/SQuAD, ...)
Reproduction (minimal, reproducible, runnable)
Expected behavior