Docker
hub.docker.com › r › linuxserver › faster-whisper
linuxserver/faster-whisper - Docker Image
LinuxServer.io
docs.linuxserver.io › images › docker-faster-whisper
faster-whisper - LinuxServer.io
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e LOCAL_ONLY= `#optional` \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/faster-whisper/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest
Videos
GitHub
github.com › linuxserver › docker-faster-whisper
GitHub - linuxserver/docker-faster-whisper
(The repository README shows the same docker run command as the LinuxServer.io docs above.)
Starred by 203 users
Forked by 15 users
Languages Dockerfile
GitHub
github.com › etalab-ia › faster-whisper-server
GitHub - etalab-ia/faster-whisper-server
faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features: GPU and CPU support. Easily deployable using Docker.
Starred by 72 users
Forked by 17 users
Languages Python 97.9% | Nix 2.1%
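As a minimal deployment sketch of the "easily deployable using Docker" claim above; the image name and tag are borrowed from the upstream fedirz project listed further down this page, and the port is an assumption, not taken from this fork's README:
# Sketch only: image tag and port are assumptions, not verified against the README.
docker run -d \
  --name=faster-whisper-server \
  -p 8000:8000 \
  fedirz/faster-whisper-server:latest-cpu
A GPU deployment would swap in a CUDA-enabled tag and pass the GPU through to the container.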
Docker Hub
hub.docker.com › r › lewangdev › faster-whisper
lewangdev/faster-whisper - Docker Image
GitHub
github.com › SYSTRAN › faster-whisper
GitHub - SYSTRAN/faster-whisper: Faster Whisper transcription with CTranslate2
speaches is an OpenAI compatible server using faster-whisper. It's easily deployable with Docker, works with OpenAI SDKs/CLI, supports streaming, and live transcription.
Starred by 19.5K users
Forked by 1.6K users
Languages Python 99.9% | Dockerfile 0.1%
Docker Hub
hub.docker.com › r › fedirz › faster-whisper-server
fedirz/faster-whisper-server - Docker Image
Docker
hub.docker.com › layers › linuxserver › faster-whisper › gpu-2.0.0-ls36 › images › sha256-b6b11a6285c129b02c1ddca87ae16ae2ab7f69b4a05a71a16c4f268313002097
Image Layer Details - linuxserver/faster-whisper:gpu-2.0.0-ls36
Reddit
reddit.com › r/localllama › faster whisper server - an openai compatible server with support for streaming and live transcription
r/LocalLLaMA on Reddit: Faster Whisper Server - an OpenAI compatible server with support for streaming and live transcription
May 27, 2024 -
Hey, I've just finished building the initial version of faster-whisper-server and thought I'd share it here since I've seen quite a few discussions around TTS. Snippet from README.md
faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features:
GPU and CPU support.
Easily deployable using Docker.
Configurable through environment variables (see config.py).
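A sketch of the environment-variable configuration the README mentions; the variable name below is a hypothetical stand-in for whatever config.py actually defines, and the image tag and port are assumptions:
# Hypothetical: WHISPER_MODEL_NAME is a placeholder, not a verified config.py setting.
docker run -d \
  -p 8000:8000 \
  -e WHISPER_MODEL_NAME=Systran/faster-distil-whisper-small.en \
  fedirz/faster-whisper-server:latest-cpu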
Top answer 1 of 5
7
Great, I love seeing stuff like this packaged with a nice API. How big is the delay for "real time" STT? And something I've been looking into a bit but couldn't get to work: how about feeding it audio from a web browser's microphone API? Since you're using websockets, I hope that's an end goal?
2 of 5
6
Did you include the diarization repo?
Hugging Face
huggingface.co › spaces › aadnk › faster-whisper-webui › blob › main › README.md
README.md · aadnk/faster-whisper-webui at main
Note that the models themselves are currently not included in the Docker images and will be downloaded on demand. To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally) prepopulate the directory with the different Whisper models.
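A sketch of the bind mount the note describes, reusing the example host path from the README; the image name and port are placeholders for however the Space's Docker image is actually published:
# Bind the host cache dir so Whisper models persist across container runs.
# Image name and port are placeholders; only the -v mapping comes from the README note.
docker run -d \
  -p 7860:7860 \
  -v /home/administrator/.cache/whisper:/root/.cache/whisper \
  aadnk/faster-whisper-webui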
Docker
hub.docker.com › layers › linuxserver › faster-whisper › 2.0.0-ls42 › images › sha256-e533a52e3ba6ddf9f69b62b2104238f113f5c7137b509f986966b7a55282667f
Image Layer Details - linuxserver/faster-whisper:2.0.0-ls42
GitHub
github.com › SYSTRAN › faster-whisper › blob › master › docker › Dockerfile
faster-whisper/docker/Dockerfile at master · SYSTRAN/faster-whisper
Faster Whisper transcription with CTranslate2. Contribute to SYSTRAN/faster-whisper development by creating an account on GitHub.
Author SYSTRAN
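A minimal build sketch against the Dockerfile above, assuming it builds straight from a repo checkout; the tag name is arbitrary and the build context is an assumption:
# Clone the repo and build the docker/ Dockerfile shown above.
git clone https://github.com/SYSTRAN/faster-whisper.git
cd faster-whisper/docker
docker build -t faster-whisper-local .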
GitHub
github.com › rhasspy › wyoming-faster-whisper
GitHub - rhasspy/wyoming-faster-whisper: Wyoming protocol server for faster whisper speech to text system
The --model can also be a HuggingFace model like Systran/faster-distil-whisper-small.en. NOTE: Models are downloaded to the first --data-dir directory.
docker run -it -p 10300:10300 -v /path/to/local/data:/data rhasspy/wyoming-whisper \
  --model tiny-int8 --language en
Starred by 265 users
Forked by 77 users
Languages Python 97.7% | Dockerfile 2.0% | Shell 0.3%
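Building on the README excerpt above, the same run with the HuggingFace model it names substituted for the tiny-int8 alias (everything else unchanged):
docker run -it -p 10300:10300 -v /path/to/local/data:/data rhasspy/wyoming-whisper \
  --model Systran/faster-distil-whisper-small.en --language en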
GitHub
github.com › SYSTRAN › faster-whisper › issues › 543
Build Faster Whisper in a docker container issues · Issue #543 · SYSTRAN/faster-whisper
November 4, 2023 - I've been able to run regular Whisper just fine from a docker build, but haven't been able to get it to work for Faster Whisper. Currently using pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime as the base image and getting the issue "libcud...
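The truncated error above points at a missing CUDA/cuDNN library in the PyTorch base image; one hedged workaround is to build on an NVIDIA CUDA image that bundles cuDNN. The tag below is an assumption and would need to match the cuDNN version your faster-whisper/CTranslate2 build expects:
# Sketch only: base image tag is an assumption; match it to your CTranslate2 cuDNN requirement.
cat > Dockerfile <<'EOF'
FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip \
    && pip3 install faster-whisper
EOF
docker build -t faster-whisper-cudnn .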
Docker
hub.docker.com › layers › lsiodev › faster-whisper › 2.3.0-gpu-github › images › sha256-c53f94be4d4870c395eb0ec913395c79c8332bbf6054240da4bfca140ddb9b79
Image Layer Details - lsiodev/faster-whisper:2.3.0-gpu-github
GitHub
github.com › fedirz › faster-whisper-server
GitHub - speaches-ai/speaches
speaches is an OpenAI API-compatible server supporting streaming transcription, translation, and speech generation. Speech-to-Text is powered by faster-whisper, and Text-to-Speech by piper and Kokoro.
Starred by 2.7K users
Forked by 342 users
Languages Python
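Since speaches exposes an OpenAI-compatible API, a speech-generation request can be sketched against the standard /v1/audio/speech route; the host, port, model, and voice values below are placeholders, not taken from the speaches docs:
# Placeholders throughout: port, model, and voice are illustrative only.
curl http://localhost:8000/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "kokoro", "input": "Hello from speaches", "voice": "af"}' \
  -o speech.mp3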
Reddit
reddit.com › r/unraid › any guides for setting up faster-whisper to use a nvidia gpu? i can’t seem to make it work.
r/unRAID on Reddit: Any guides for setting up faster-whisper to use a NVIDIA GPU? I can’t seem to make it work.
May 5, 2025 -
I installed the container but I can’t seem to get it to load correctly. Pretty sure I need to copy cuDNN files somewhere. I tried copying them into the appdata folder for faster-whisper but that didn’t work.
Top answer 1 of 2
1
I'm using subgen, which uses the whisper model, and I just followed the unRAID guide on their GitHub page. After that, I added the arguments and variables needed for the NVIDIA GPU, the same ones you would add for any Docker container that should use it.
2 of 2
1
Linuxserver has a faster-whisper container: https://docs.linuxserver.io/images/docker-faster-whisper/ Install the NVIDIA driver plugin, then run this container with the GPU tag and the --runtime=nvidia parameter, and it is good to go.
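A sketch of the GPU run the answer describes, combining the CPU command from the top of this page with the gpu tag visible in the image listings above; assumes the NVIDIA driver plugin / Container Toolkit is installed, and the rolling :gpu tag is an assumption based on the gpu-* tags shown:
docker run -d \
  --name=faster-whisper \
  --runtime=nvidia \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10300:10300 \
  -v /path/to/faster-whisper/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:gpu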
GitHub
github.com › Sunwood-ai-labs › faster-whisper-docker
GitHub - Sunwood-ai-labs/faster-whisper-docker: Faster Whisper transcription with CTranslate2
Faster Whisper is a reimplementation of OpenAI's Whisper model that uses CTranslate2 for fast speech recognition. This guide shows how to easily set up and run Faster Whisper using Docker.
Author Sunwood-ai-labs