Docker
hub.docker.com › r › linuxserver › faster-whisper
linuxserver/faster-whisper - Docker Image
© 2025 Docker, Inc. All rights reserved. | Terms of Service | Subscription Service Agreement | Privacy | Legal
LinuxServer.io
docs.linuxserver.io › images › docker-faster-whisper
faster-whisper - LinuxServer.io
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e LOCAL_ONLY= `#optional` \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/faster-whisper/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest
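Results further down this page list GPU image tags for the same image (e.g. gpu-2.0.0-ls36), so a CUDA-enabled variant of the command above can be sketched as follows. The `:gpu` tag is inferred from the tag names shown in these results, and `--gpus all` assumes the NVIDIA Container Toolkit is installed on the host:

```shell
# Hypothetical GPU variant of the LinuxServer command (assumptions:
# the ':gpu' tag exists and the NVIDIA Container Toolkit is installed).
docker run -d \
  --name=faster-whisper \
  --gpus all \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10300:10300 \
  -v /path/to/faster-whisper/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:gpu
```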
GitHub
github.com › etalab-ia › faster-whisper-server
GitHub - etalab-ia/faster-whisper-server
faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features: GPU and CPU support. Easily deployable using Docker.
Starred by 72 users
Forked by 17 users
Languages Python 97.9% | Nix 2.1%
GitHub
github.com › linuxserver › docker-faster-whisper
GitHub - linuxserver/docker-faster-whisper
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e LOCAL_ONLY= `#optional` \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/faster-whisper/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest
Starred by 203 users
Forked by 15 users
Languages Dockerfile
Docker Hub
hub.docker.com › r › lewangdev › faster-whisper
lewangdev/faster-whisper - Docker Image
Docker Hub
hub.docker.com › r › fedirz › faster-whisper-server
fedirz/faster-whisper-server - Docker Image
faster-whisper-server
GitHub
github.com › rhasspy › wyoming-faster-whisper
GitHub - rhasspy/wyoming-faster-whisper: Wyoming protocol server for faster whisper speech to text system
The --model can also be a HuggingFace model like Systran/faster-distil-whisper-small.en. NOTE: Models are downloaded to the first --data-dir directory.
docker run -it -p 10300:10300 -v /path/to/local/data:/data \
  rhasspy/wyoming-whisper \
  --model tiny-int8 --language en
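The note above says --model also accepts a HuggingFace model ID, so substituting the model mentioned in the snippet into the same docker run gives a sketch like:

```shell
# Sketch: same wyoming-whisper container, but using the HuggingFace
# model ID named in the snippet instead of the bundled tiny-int8 model.
# The model is downloaded into the mounted /data directory on first run.
docker run -it -p 10300:10300 -v /path/to/local/data:/data \
  rhasspy/wyoming-whisper \
  --model Systran/faster-distil-whisper-small.en --language en
```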
Starred by 265 users
Forked by 77 users
Languages Python 97.7% | Dockerfile 2.0% | Shell 0.3%
GitHub
github.com › SYSTRAN › faster-whisper
GitHub - SYSTRAN/faster-whisper: Faster Whisper transcription with CTranslate2
speaches is an OpenAI-compatible server using faster-whisper. It is easily deployable with Docker, works with the OpenAI SDKs/CLI, and supports streaming and live transcription.
Starred by 19.5K users
Forked by 1.6K users
Languages Python 99.9% | Dockerfile 0.1%
Docker
hub.docker.com › layers › linuxserver › faster-whisper › gpu-2.0.0-ls36 › images › sha256-b6b11a6285c129b02c1ddca87ae16ae2ab7f69b4a05a71a16c4f268313002097
Image Layer Details - linuxserver/faster-whisper:gpu-2.0.0-ls36
Welcome to the world's largest container registry built for developers and open source contributors to find, use, and share their container images. Build, push and pull.
GitHub
github.com › SYSTRAN › faster-whisper › blob › master › docker › Dockerfile
faster-whisper/docker/Dockerfile at master · SYSTRAN/faster-whisper
Faster Whisper transcription with CTranslate2. Contribute to SYSTRAN/faster-whisper development by creating an account on GitHub.
Author SYSTRAN
Reddit
reddit.com › r/localllama › faster whisper server - an openai compatible server with support for streaming and live transcription
r/LocalLLaMA on Reddit: Faster Whisper Server - an OpenAI compatible server with support for streaming and live transcription
May 27, 2024 -
Hey, I've just finished building the initial version of faster-whisper-server and thought I'd share it here since I've seen quite a few discussions around TTS. Snippet from README.md
faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features:
GPU and CPU support.
Easily deployable using Docker.
Configurable through environment variables (see config.py).
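Since the server exposes an OpenAI-compatible API, a transcription request can be sketched with curl. The port (8000), the endpoint path, and the model name below are assumptions based on the standard OpenAI audio API shape, not confirmed by these results:

```shell
# Hypothetical request against a locally running faster-whisper-server.
# Assumptions: server on localhost:8000, OpenAI-style
# /v1/audio/transcriptions endpoint, and a local file sample.wav.
curl http://localhost:8000/v1/audio/transcriptions \
  -F "file=@sample.wav" \
  -F "model=Systran/faster-whisper-small"
```

Because the endpoint shape matches OpenAI's, the official OpenAI SDKs should also work by overriding the client's base URL to point at the local server.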
Top answer 1 of 5
7
Great, I love seeing stuff like this packaged with a nice API. How big is the delay for "real time" STT? And something I've been looking into a bit but couldn't get to work: how about feeding it audio from a web browser's microphone API? Since you're using websockets, I hope that's an end goal?
2 of 5
6
Did you include the diarization repo?
Hugging Face
huggingface.co › spaces › aadnk › faster-whisper-webui › blob › main › README.md
README.md · aadnk/faster-whisper-webui at main
Note that the models themselves are currently not included in the Docker images, and will be downloaded on demand. To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally) prepopulate the directory with the different Whisper models.
Docker
hub.docker.com › layers › linuxserver › faster-whisper › 2.0.0-ls27 › images › sha256-47cf851a3bc080a79758fd63ddbd51595b8bdf880f40b31895df161971568ed2
Image Layer Details - linuxserver/faster-whisper:2.0.0-ls27
GitHub
github.com › SYSTRAN › faster-whisper › issues › 543
Build Faster Whisper in a docker container issues · Issue #543 · SYSTRAN/faster-whisper
November 4, 2023 - I've been able to run regular Whisper just fine from a docker build, but haven't been able to get it to work for Faster Whisper. Currently using pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime as the base image and getting the issue "libcud...
Published Nov 04, 2023
Docker
hub.docker.com › layers › lsiodev › faster-whisper › 2.3.0-gpu-github › images › sha256-c53f94be4d4870c395eb0ec913395c79c8332bbf6054240da4bfca140ddb9b79
Image Layer Details - lsiodev/faster-whisper:2.3.0-gpu-github
GitHub
github.com › johniak › faster-whisper-docker-truenas
GitHub - johniak/faster-whisper-docker-truenas: Docker Image faster whisper for truenas
docker run -it -p 10300:10300 \
  -v /path/to/download/models:/models -v /path/to/train:/train \
  rhasspy/wyoming-speech-to-phrase \
  --hass-websocket-uri 'ws://homeassistant.local:8123/api/websocket' \
  --hass-token '<LONG_LIVED_ACCESS_TOKEN>' \
  --retrain-on-start
Author johniak
Docker
hub.docker.com › layers › lsiodev › faster-whisper › 1.0.1-gpu-readme › images › sha256-6527cd769386ba01be0d87e37955136cd8305ea275211c7541a1188f4c23edc1
lsiodev/faster-whisper:1.0.1-gpu-readme
Docker
hub.docker.com › layers › linuxserver › faster-whisper › 2.0.0-ls48 › images › sha256-e40bee3a23c778f0a12c7d8feb0f2d5ab67bc25de10778ddabdcbc75680cd6de
Image Layer Details - linuxserver/faster-whisper:2.0.0-ls48