
linuxserver/docker-faster-whisper


The LinuxServer.io team brings you another container release featuring:

  • regular and timely application updates
  • easy user mappings (PGID, PUID)
  • custom base image with s6 overlay
  • weekly base OS updates with common layers across the entire LinuxServer.io ecosystem to minimise space usage, downtime and bandwidth
  • regular security updates

Find us at:

  • Blog - all the things you can do with our containers including How-To guides, opinions and much more!
  • Discord - realtime support / chat with the community and the team.
  • Discourse - post on our community forum.
  • Fleet - an online web interface which displays all of our maintained images.
  • GitHub - view the source for all of our repositories.
  • Open Collective - please consider helping us by either donating or contributing to our budget.


Faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models. This container provides a Wyoming protocol server for faster-whisper.


Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/faster-whisper:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

Architecture | Available | Tag
x86-64       | ✅        | amd64-<version tag>
arm64        | ❌        |
armhf        | ❌        |
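
If you need to pin an architecture explicitly, use the tag form shown in the table. This is a sketch of the syntax only; replace <version tag> with an actual release from the published tags rather than treating it as a literal tag:

docker pull lscr.io/linuxserver/faster-whisper:amd64-<version tag>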

Version Tags

This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.

Tag    | Available | Description
latest | ✅        | Stable releases
gpu    | ✅        | Releases with Nvidia GPU support
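
For example, the GPU-enabled variant from the table above can be pulled explicitly:

docker pull lscr.io/linuxserver/faster-whisper:gpu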

Application Setup

For use with Home Assistant Assist, add the Wyoming integration and supply the hostname/IP and port on which Whisper is running.
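
Before adding the integration, you can sanity-check that the Wyoming port is reachable from the Home Assistant host using netcat; 192.168.1.10 below is a placeholder for wherever this container actually runs:

nc -zv 192.168.1.10 10300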

When using the gpu tag with Nvidia GPUs, make sure the container is set to use the nvidia runtime, that the Nvidia Container Toolkit is installed on the host, and that the container is run with the correct GPU(s) exposed. See the Nvidia Container Toolkit docs for more details.
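
As a minimal sketch of a GPU-enabled run (assuming the toolkit is already installed; whether you need --runtime=nvidia, --gpus all, or both depends on your Docker setup):

docker run -d \
  --name=faster-whisper \
  --runtime=nvidia \
  --gpus all \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10300:10300 \
  -v /path/to/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:gpu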

For more information see the faster-whisper docs.

Read-Only Operation

This image can be run with a read-only container filesystem. For details please read the docs.
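
As a sketch, the flag is simply added to the usual run command shown in the Usage section below; depending on the service, additional writable tmpfs mounts may be required, as covered in the linked docs:

docker run -d \
  --name=faster-whisper \
  --read-only \
  -e PUID=1000 \
  -e PGID=1000 \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10300:10300 \
  -v /path/to/data:/config \
  lscr.io/linuxserver/faster-whisper:latest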

Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.

docker-compose (recommended, click here for more info)

---
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:latest
    container_name: faster-whisper
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1 #optional
      - WHISPER_LANG=en #optional
    volumes:
      - /path/to/data:/config
    ports:
      - 10300:10300
    restart: unless-stopped
docker cli (click here for more info)

docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest

Parameters

Containers are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

Parameter Function
-p 10300 Wyoming connection port.
-e PUID=1000 for UserID - see below for explanation
-e PGID=1000 for GroupID - see below for explanation
-e TZ=Etc/UTC specify a timezone to use, see this list.
-e WHISPER_MODEL=tiny-int8 Whisper model that will be used for transcription. Choose from tiny, base, small and medium, each also available as an -int8 compressed variant.
-e WHISPER_BEAM=1 Number of candidates to consider simultaneously during transcription.
-e WHISPER_LANG=en Language that you will speak to the add-on.
-v /config Local path for Whisper config files.
--read-only=true Run container with a read-only filesystem. Please read the docs.
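
As an illustration of the <external>:<internal> convention described above, keeping Whisper on 10300 inside the container while exposing it on host port 10301 only changes the external half of the port flag (a hypothetical variation on the Usage example):

docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10301:10300 \
  -v /path/to/data:/config \
  lscr.io/linuxserver/faster-whisper:latest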

Environment variables from files (Docker secrets)

You can set any environment variable from a file by using the special FILE__ prefix.

As an example:

-e FILE__MYVAR=/run/secrets/mysecretvariable

Will set the environment variable MYVAR based on the contents of the /run/secrets/mysecretvariable file.
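
A hedged end-to-end sketch, using a hypothetical secret file mounted into the container to supply WHISPER_LANG instead of passing it in plain text (paths are illustrative):

# hypothetical secret file on the host
echo "en" > /path/to/secrets/whisper_lang

docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e WHISPER_MODEL=tiny-int8 \
  -e FILE__WHISPER_LANG=/run/secrets/whisper_lang \
  -v /path/to/secrets/whisper_lang:/run/secrets/whisper_lang:ro \
  -v /path/to/data:/config \
  -p 10300:10300 \
  lscr.io/linuxserver/faster-whisper:latest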

Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind that umask is not chmod: it subtracts from permissions based on its value rather than adding. Please read up here before asking for support.
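
If the subtraction model is unclear, a quick demonstration on any Linux host (outside the container, purely for illustration) shows the effect of a 022 umask:

# with a 022 umask, new files default to 644 and new directories to 755
umask 022
touch /tmp/umask-demo-file && mkdir -p /tmp/umask-demo-dir
ls -ld /tmp/umask-demo-file /tmp/umask-demo-dir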

User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID.

Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000. To find yours, use id your_user as below:

id your_user

Example output:

uid=1000(your_user) gid=1000(your_user) groups=1000(your_user)
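
If a host directory you intend to mount is owned by a different user, a simple fix (assuming the /path/to/data volume and the 1000:1000 IDs used in the examples above) is to change its ownership before starting the container:

sudo chown -R 1000:1000 /path/to/data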

Docker Mods

Docker Mods Docker Universal Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.
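
Mods are applied via the DOCKER_MODS environment variable; as a sketch, enabling one of the universal mods (the universal package install mod is used here purely as an example) might look like:

docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e DOCKER_MODS=linuxserver/mods:universal-package-install \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10300:10300 \
  -v /path/to/data:/config \
  lscr.io/linuxserver/faster-whisper:latest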

Support Info

  • Shell access whilst the container is running:

    docker exec -it faster-whisper /bin/bash
  • To monitor the logs of the container in realtime:

    docker logs -f faster-whisper
  • Container version number:

    docker inspect -f '{{ index .Config.Labels "build_version" }}' faster-whisper
  • Image version number:

    docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/faster-whisper:latest

Updating Info

Most of our images are static, versioned, and require an image update and container recreation to update the app inside. With some exceptions (noted in the relevant readme.md), we do not recommend or support updating apps inside the container. Please consult the Application Setup section above to see if it is recommended for the image.

Below are the instructions for updating containers:

Via Docker Compose

  • Update images:

    • All images:

      docker-compose pull
    • Single image:

      docker-compose pull faster-whisper
  • Update containers:

    • All containers:

      docker-compose up -d
    • Single container:

      docker-compose up -d faster-whisper
  • You can also remove the old dangling images:

    docker image prune

Via Docker Run

  • Update the image:

    docker pull lscr.io/linuxserver/faster-whisper:latest
  • Stop the running container:

    docker stop faster-whisper
  • Delete the container:

    docker rm faster-whisper
  • Recreate a new container with the same docker run parameters as instructed above (if mapped correctly to a host folder, your /config folder and settings will be preserved)

  • You can also remove the old dangling images:

    docker image prune

Image Update Notifications - Diun (Docker Image Update Notifier)

Tip

We recommend Diun for update notifications. Other tools that automatically update containers unattended are not recommended or supported.
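
A minimal sketch of running Diun alongside your containers using its Docker provider; treat this as illustrative and consult the Diun docs for current options (by default only containers labelled diun.enable=true are watched, and you still need to configure a notifier):

docker run -d \
  --name=diun \
  -e TZ=Etc/UTC \
  -e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
  -e DIUN_PROVIDERS_DOCKER=true \
  -v /var/run/docker.sock:/var/run/docker.sock \
  crazymax/diun:latest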

Building locally

If you want to make local modifications to these images for development purposes or just to customize the logic:

git clone https://github.com/linuxserver/docker-faster-whisper.git
cd docker-faster-whisper
docker build \
  --no-cache \
  --pull \
  -t lscr.io/linuxserver/faster-whisper:latest .

The ARM variants can be built on x86_64 hardware and vice versa using lscr.io/linuxserver/qemu-static.

docker run --rm --privileged lscr.io/linuxserver/qemu-static --reset

Once registered, you can define the Dockerfile to use with -f Dockerfile.aarch64.
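
Putting those two steps together, a hedged sketch of a local aarch64 build (the output tag is arbitrary and purely illustrative):

docker run --rm --privileged lscr.io/linuxserver/qemu-static --reset
docker build \
  --no-cache \
  --pull \
  -f Dockerfile.aarch64 \
  -t lscr.io/linuxserver/faster-whisper:aarch64-local .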

Versions

  • 18.07.24: - Rebase to Ubuntu Noble.
  • 19.05.24: - Bump CUDA to 12 on GPU branch.
  • 08.01.24: - Add GPU branch.
  • 25.11.23: - Initial Release.