The list in the README is non-exhaustive (updating it every time upstream adds support for new models is a slog), but if you provide an invalid model, the container will tell you and list all the currently valid ones.
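A minimal sketch of that validation pattern. The names (`_MODELS`, `validate_model`) are hypothetical stand-ins; the real check lives inside faster-whisper's model-loading code, but the shape of the error is the same: an unknown model name raises a `ValueError` that enumerates every valid one.

```python
# Hypothetical re-creation of the model-name check: an unknown
# model raises a ValueError that lists all currently valid names.
_MODELS = (
    "tiny", "tiny.en", "base", "base.en",
    "small", "small.en", "distil-small.en",
    "medium", "medium.en", "distil-medium.en",
    "large", "large-v1", "large-v2",
)

def validate_model(name: str) -> str:
    """Return the model name if valid, otherwise raise with the full list."""
    if name not in _MODELS:
        raise ValueError(
            f"Invalid model size '{name}', expected one of: "
            + ", ".join(_MODELS)
        )
    return name

try:
    validate_model("large-v3-turbo")
except ValueError as err:
    print(err)  # the message enumerates every model this sketch accepts
```

With the list above frozen at the image's pinned version, `large-v3-turbo` fails exactly as described in this issue.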
Is this a new feature request?
Wanted change
faster-whisper now supports new models that the GPU tag of this image does not.
Currently supported by this image:
tiny, tiny.en, base, base.en, small, small.en, distil-small.en, medium, medium.en, distil-medium.en, large, large-v1, large-v2
Not supported by this image:
large-v3, distil-large-v2, distil-large-v3, large-v3-turbo, or turbo
Please update to a version of faster-whisper that supports these models.
Reason for change
I would like to use the larger models, especially large-v3-turbo, with my NVIDIA GPUs; it should be both more accurate and faster.
Proposed code change
Update the pinned faster-whisper version to support the latest models; the current list of valid model names is here:
https://github.com/SYSTRAN/faster-whisper/blob/a6f8fbae0060cccb8bbe71422d3546d8206ebfe1/faster_whisper/transcribe.py#L533
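For context, a sketch of how the updated image might be run. This is a config fragment under stated assumptions: the `WHISPER_MODEL` environment variable, the `:gpu` tag, and port `10300` are taken from memory of the linuxserver image's conventions and may differ from the actual README.

```shell
# Hypothetical invocation once the image ships a newer faster-whisper.
# WHISPER_MODEL is assumed to be the image's model-selection variable.
docker run -d \
  --name=faster-whisper \
  --gpus=all \
  -e WHISPER_MODEL=large-v3-turbo \
  -p 10300:10300 \
  lscr.io/linuxserver/faster-whisper:gpu
```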