🌐
Docker
hub.docker.com › r › linuxserver › faster-whisper
linuxserver/faster-whisper - Docker Image
© 2025 Docker, Inc. All rights reserved. | Terms of Service | Subscription Service Agreement | Privacy | Legal
🌐
LinuxServer.io
docs.linuxserver.io › images › docker-faster-whisper
faster-whisper - LinuxServer.io
```shell
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e LOCAL_ONLY= `#optional` \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/faster-whisper/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest
```
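For declarative deployments, the same container can be written as a Docker Compose service. This is a sketch translated directly from the flags in the run command above; nothing beyond that command is assumed:

```yaml
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:latest
    container_name: faster-whisper
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1 # optional
      - WHISPER_LANG=en # optional
    ports:
      - 10300:10300
    volumes:
      - /path/to/faster-whisper/data:/config
    restart: unless-stopped
```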
🌐
GitHub
github.com › linuxserver › docker-faster-whisper
GitHub - linuxserver/docker-faster-whisper
Starred by 203 users
Forked by 15 users
Languages   Dockerfile
🌐
GitHub
github.com › etalab-ia › faster-whisper-server
GitHub - etalab-ia/faster-whisper-server
faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features: GPU and CPU support. Easily deployable using Docker.
Starred by 72 users
Forked by 17 users
Languages   Python 97.9% | Nix 2.1%
🌐
Docker Hub
hub.docker.com › r › lewangdev › faster-whisper
lewangdev/faster-whisper - Docker Image
🌐
GitHub
github.com › SYSTRAN › faster-whisper
GitHub - SYSTRAN/faster-whisper: Faster Whisper transcription with CTranslate2
speaches is an OpenAI compatible server using faster-whisper. It's easily deployable with Docker, works with OpenAI SDKs/CLI, supports streaming, and live transcription.
Starred by 19.5K users
Forked by 1.6K users
Languages   Python 99.9% | Dockerfile 0.1%
🌐
Docker Hub
hub.docker.com › r › fedirz › faster-whisper-server
fedirz/faster-whisper-server - Docker Image
faster-whisper-server
🌐
Medium
egemengulpinar.medium.com › running-whisper-large-v3-on-docker-with-gpu-support-e8a5b5daabf9
Running OpenAI Whisper Model on Docker with GPU Support | by Egemen Gulpinar | Medium
August 28, 2024 - # Use Python 3.12 as the base image ... RUN chmod +x docker-entrypoint.sh # Set the entry point ENTRYPOINT ["./docker-entrypoint.sh"] ... torch torchaudio torchvision pybind11 python-dotenv faster-whisper nvidia-cudnn-cu11 nvidia-cublas-cu11 numpy...
🌐
Docker
hub.docker.com › layers › linuxserver › faster-whisper › gpu-2.0.0-ls36 › images › sha256-b6b11a6285c129b02c1ddca87ae16ae2ab7f69b4a05a71a16c4f268313002097
Image Layer Details - linuxserver/faster-whisper:gpu-2.0.0-ls36
Welcome to the world's largest container registry built for developers and open source contributors to find, use, and share their container images. Build, push and pull.
🌐
Reddit
reddit.com › r/localllama › faster whisper server - an openai compatible server with support for streaming and live transcription
r/LocalLLaMA on Reddit: Faster Whisper Server - an OpenAI compatible server with support for streaming and live transcription
May 27, 2024 - Hey, I've just finished building the initial version of faster-whisper-server and thought I'd share it here since I've seen quite a few discussions around TTS. Snippet from README.md:

faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features:

  • GPU and CPU support.

  • Easily deployable using Docker.

  • Configurable through environment variables (see config.py).
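Because the server exposes an OpenAI-style HTTP API, a client needs nothing beyond the Python standard library. A minimal sketch, assuming a local deployment on `localhost:8000` and the OpenAI Audio API endpoint path (the model name is a placeholder):

```python
# Client sketch for an OpenAI-compatible transcription server such as
# faster-whisper-server. The /v1/audio/transcriptions path follows the
# OpenAI Audio API; host, port, and model name are assumptions.
import json
import mimetypes
import urllib.request
import uuid


def transcription_url(base_url: str) -> str:
    """Build the OpenAI-style transcription endpoint URL."""
    return base_url.rstrip("/") + "/v1/audio/transcriptions"


def build_multipart(filename: str, payload: bytes, model: str):
    """Encode an audio file and model name as multipart/form-data."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="model"\r\n\r\n{model}\r\n'
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return body, headers


if __name__ == "__main__":
    with open("audio.wav", "rb") as f:
        body, headers = build_multipart("audio.wav", f.read(), "tiny")
    req = urllib.request.Request(transcription_url("http://localhost:8000"),
                                 data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["text"])
```

The same endpoint can also be reached through the official OpenAI SDKs by pointing their `base_url` at the local server, which is what "works with OpenAI SDKs/CLI" refers to.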

🌐
Hugging Face
huggingface.co › spaces › aadnk › faster-whisper-webui › blob › main › README.md
README.md · aadnk/faster-whisper-webui at main
Note that the models themselves are currently not included in the Docker images and will be downloaded on demand. To avoid this, bind the directory /root/.cache/whisper to a directory on the host (for instance /home/administrator/.cache/whisper), which you can optionally prepopulate with the different Whisper models.
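The cache bind described there can be sketched as a Compose volume mapping. The service and image names below are placeholders; only the /root/.cache/whisper container path and the example host path come from the README:

```yaml
services:
  whisper-webui:                    # placeholder service name
    image: your-webui-image:latest  # image name not specified in the snippet
    volumes:
      # prepopulate the host directory with Whisper models to skip downloads
      - /home/administrator/.cache/whisper:/root/.cache/whisper
```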
🌐
Docker
hub.docker.com › layers › linuxserver › faster-whisper › 2.0.0-ls27 › images › sha256-47cf851a3bc080a79758fd63ddbd51595b8bdf880f40b31895df161971568ed2
Image Layer Details - linuxserver/faster-whisper:2.0.0-ls27
🌐
Docker
hub.docker.com › layers › linuxserver › faster-whisper › 2.0.0-ls42 › images › sha256-e533a52e3ba6ddf9f69b62b2104238f113f5c7137b509f986966b7a55282667f
Image Layer Details - linuxserver/faster-whisper:2.0.0-ls42
🌐
GitHub
github.com › SYSTRAN › faster-whisper › blob › master › docker › Dockerfile
faster-whisper/docker/Dockerfile at master · SYSTRAN/faster-whisper
Faster Whisper transcription with CTranslate2. Contribute to SYSTRAN/faster-whisper development by creating an account on GitHub.
Author   SYSTRAN
🌐
GitHub
github.com › rhasspy › wyoming-faster-whisper
GitHub - rhasspy/wyoming-faster-whisper: Wyoming protocol server for faster whisper speech to text system
The --model can also be a HuggingFace model like Systran/faster-distil-whisper-small.en. NOTE: Models are downloaded to the first --data-dir directory.

```shell
docker run -it -p 10300:10300 -v /path/to/local/data:/data rhasspy/wyoming-whisper \
  --model tiny-int8 --language en
```
Starred by 265 users
Forked by 77 users
Languages   Python 97.7% | Dockerfile 2.0% | Shell 0.3%
🌐
GitHub
github.com › SYSTRAN › faster-whisper › issues › 543
Build Faster Whisper in a docker container issues · Issue #543 · SYSTRAN/faster-whisper
November 4, 2023 - I've been able to run regular Whisper just fine from a docker build, but haven't been able to get it to work for Faster Whisper Currently using pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime as the base image and getting the issue "libcud...
Published   Nov 04, 2023
🌐
Docker
hub.docker.com › layers › lsiodev › faster-whisper › 2.3.0-gpu-github › images › sha256-c53f94be4d4870c395eb0ec913395c79c8332bbf6054240da4bfca140ddb9b79
Image Layer Details - lsiodev/faster-whisper:2.3.0-gpu-github
🌐
GitHub
github.com › fedirz › faster-whisper-server
GitHub - speaches-ai/speaches
speaches is an OpenAI API-compatible server supporting streaming transcription, translation, and speech generation. Speech-to-Text is powered by faster-whisper; Text-to-Speech uses piper and Kokoro.
Starred by 2.7K users
Forked by 342 users
Languages   Python
🌐
GitHub
github.com › Sunwood-ai-labs › faster-whisper-docker
GitHub - Sunwood-ai-labs/faster-whisper-docker: Faster Whisper transcription with CTranslate2
Faster Whisper is a reimplementation of OpenAI's Whisper model that uses CTranslate2 for fast speech recognition. This guide shows how to set up and run Faster Whisper easily with Docker.
Author   Sunwood-ai-labs