Docker
hub.docker.com › r › linuxserver › faster-whisper
linuxserver/faster-whisper - Docker Image
GitHub
github.com › manzolo › openai-whisper-docker
GitHub - manzolo/openai-whisper-docker: This Docker image provides a convenient environment for running OpenAI Whisper, a powerful automatic speech recognition (ASR) system.
This Docker image provides a convenient environment for running OpenAI Whisper, a powerful automatic speech recognition (ASR) system. - manzolo/openai-whisper-docker
Starred by 148 users
Forked by 19 users
Languages Dockerfile
Whisper docker container
I'm curious why you didn't use https://github.com/openai/whisper More on reddit.com
Connecting to Whisper in Docker
Looking for some help. HA running fine on RPI4/SSD. Have Voice / Assist running too (recent) - with local whisper/piper. Have Voice PE hardware running too. Now trying to also get piper / whisper running on local container (on NAS) - to (a) compare performance and (b) learn how to ahead of ... More on community.home-assistant.io
GPU support in wyoming-whisper docker
The wyoming-whisper docker currently does everything on CPU even if a GPU is present. It would be great if GPU support could be added. The underlying software (faster-whisper) supports that. I guess it doesn’t really help on a HA system normally because those usually run light hardware without ... More on community.home-assistant.io
using whisper in docker
Whisper and Piper do indeed use different ports, which you can specify in your docker compose and then use when setting up the integration. Here's my docker compose file; I then use the specified ports when installing two Wyoming integrations (one for each):
piper:
  container_name: piper
  image: rhasspy/wyoming-piper
  ports:
    - '10200:10200'
  volumes:
    - '/media/storage/piper/data:/data'
  command: --voice en-gb-southern_english_female-low
whisper:
  container_name: whisper
  image: rhasspy/wyoming-whisper
  ports:
    - '10300:10300'
  volumes:
    - '/media/storage/whisper/data:/data'
  command: --model tiny-int8 --language en
More on reddit.com
Videos
06:58
INSANELY EASY! Deploy Whisper AI Locally with Docker - Speech to ...
15:05
n8n Tutorial: Automatic Transcripts with Whisper in Docker - YouTube
26:43
Build a Containerized Transcription API using Whisper Model and ...
Docker Hub
hub.docker.com › r › onerahmet › openai-whisper-asr-webservice
onerahmet/openai-whisper-asr-webservice - Docker Image
Reddit
reddit.com › r/openai › whisper docker container
r/OpenAI on Reddit: Whisper docker container
April 25, 2023 - So there wasn't a solution out there that would allow you to submit your audio/video files to a dockerized flask app and have your subtitle files etc. generated as if you were using whisper via the command line... so I made one. Go check it out here: jlonge4/whisperAI-flask-docker: I built this project because there was no user-friendly way to upload a file to a dockerized flask web form and have whisper do its thing via CLI in the background. Now there is. Enjoy! (github.com)
Top answer (1 of 4): I'm curious why you didn't use https://github.com/openai/whisper
Answer 2 of 4: Running on MacOS, had to remove two instances of static/files from app.py, otherwise was getting ‘file not found’ errors on submit. Form seemed to work after that, processed an mp3 to 100%. Then: /usr/local/lib/python3.8/dist-packages/torch/_utils.py:147: UserWarning: Failed to initialize NumPy: module compiled against API version 0xf but this version of numpy is 0xd (Triggered internally at /root/pytorch/torch/csrc/utils/tensor_numpy.cpp:84.) Not sure what to do about this.
LinuxServer.io
docs.linuxserver.io › images › docker-faster-whisper
faster-whisper - LinuxServer.io
Faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models. This container provides a Wyoming protocol server for faster-whisper. We utilise the docker manifest for multi-platform awareness.
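As a rough idea of how this container is typically deployed, here is a minimal compose sketch; the 10300 Wyoming port and the WHISPER_MODEL variable follow the image's documentation, while the PUID/PGID values, timezone, and host path are placeholder assumptions:

services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:latest
    container_name: faster-whisper
    environment:
      - PUID=1000                 # host user/group IDs (assumed, adjust to your system)
      - PGID=1000
      - TZ=Etc/UTC
      - WHISPER_MODEL=tiny-int8   # model selection, per the image documentation
    volumes:
      - ./faster-whisper/config:/config   # placeholder host path for config/model cache
    ports:
      - "10300:10300"             # Wyoming protocol port
    restart: unless-stopped

A Wyoming (whisper) integration in Home Assistant would then point at the container host on port 10300.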
Docker Hub
hub.docker.com › r › rhasspy › wyoming-whisper
rhasspy/wyoming-whisper - Docker Image
Bazarr Wiki
wiki.bazarr.media › Additional-Configuration › Whisper-Provider
Whisper Provider Setup - Bazarr Wiki
November 13, 2025 - whisper-asr-webservice supports multiple backends. Currently, there are two available options: ... The complete Docker Compose file can be found here.
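For Bazarr's Whisper provider, the webservice usually runs as its own container. A minimal sketch, assuming the onerahmet/openai-whisper-asr-webservice image listed above and its ASR_MODEL/ASR_ENGINE variables; the port mapping and model choice here are assumptions:

services:
  whisper-asr:
    image: onerahmet/openai-whisper-asr-webservice:latest
    environment:
      - ASR_MODEL=base              # assumed model choice
      - ASR_ENGINE=faster_whisper   # one of the two available backend options
    ports:
      - "9000:9000"                 # assumed service port
    restart: unless-stopped

Bazarr's provider settings would then point at the container host and that port.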
GitHub
github.com › jhj0517 › Whisper-WebUI
GitHub - jhj0517/Whisper-WebUI: A Web UI for easy subtitle using whisper model.
Install and launch Docker Desktop, then start Whisper-WebUI and connect to http://localhost:7860.
Starred by 2.6K users
Forked by 373 users
Languages Python 95.4% | Jupyter Notebook 2.6%
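A rough compose sketch for running the WebUI on that port, assuming the repository's own Dockerfile as the build context; the volume paths are placeholders, not the project's documented layout:

services:
  whisper-webui:
    build: .                      # assumed: build from the repo's Dockerfile
    ports:
      - "7860:7860"               # web UI port from the README
    volumes:
      - ./models:/app/models      # placeholder paths for downloaded models
      - ./outputs:/app/outputs    # placeholder path for generated subtitles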
Docker Hub
hub.docker.com › r › openeuler › whisper
openeuler/whisper - Docker Image
The Awesome Garage
theawesomegarage.com › blog › home-assistant-with-wyoming-whisper-on-proxmox-with-nvidia-pci-passthrough-and-cuda-support-in-docker-on-lxc
Wyoming whisper and piper on Proxmox with NVIDIA GPU PCI passthrough and CUDA support in Docker | The awesome garage
February 8, 2024 - We need docker to run Whisper and Piper. Before we install docker, we need to update the system and for the sake of cleanliness, triple check and remove older docker components.
Docker Hub
hub.docker.com › r › lewangdev › faster-whisper
lewangdev/faster-whisper - Docker Image
Medium
learningdevops.medium.com › step-by-step-guide-to-building-a-dockerfile-for-deploying-openai-whisper-models-with-dynamic-model-7bfa8ba95cd7
Step-by-Step Guide to Building a Dockerfile for Deploying OpenAI Whisper Models with Dynamic Model Selection | by Ravi Pandit | Medium
November 13, 2024 - This command will process sample.wav with the small Whisper model, and the transcription output will be generated as defined in the container's configuration (either printed to the console or saved to a file). Using a Dockerfile to deploy Whisper with support for various models brings significant advantages in flexibility, consistency, and scalability.
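As a sketch of the pattern the article describes (choosing the model at run time), the container command can carry the input file and a model flag; the build context, mount point, and argument form here are assumptions rather than the article's exact Dockerfile:

services:
  whisper:
    build: .                  # assumed: Dockerfile from the article as build context
    volumes:
      - ./audio:/app/audio    # assumed mount point for input files
    command: ["audio/sample.wav", "--model", "small"]   # file and model selected per run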
Home Assistant
community.home-assistant.io › feature requests
GPU support in wyoming-whisper docker - Feature Requests - Home Assistant Community
March 12, 2025 - The wyoming-whisper docker currently does everything on CPU even if a GPU is present. It would be great if GPU support could be added. The underlying software (faster-whisper) supports that. I guess it doesn’t really help on a HA system normally because those usually run light hardware without ...
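Outside the Home Assistant add-on system, Compose can already hand an NVIDIA GPU to a faster-whisper based container using the standard device-reservation syntax; a sketch, where the gpu image tag is an assumption modelled on the LinuxServer image above:

services:
  faster-whisper-gpu:
    image: lscr.io/linuxserver/faster-whisper:gpu   # assumed GPU-enabled tag
    ports:
      - "10300:10300"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

This still requires the NVIDIA container toolkit on the Docker host.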
GitHub
github.com › etalab-ia › faster-whisper-server
GitHub - etalab-ia/faster-whisper-server
faster-whisper-server is an OpenAI API-compatible transcription server which uses faster-whisper as its backend. Features: GPU and CPU support. Easily deployable using Docker.
Starred by 72 users
Forked by 17 users
Languages Python 97.9% | Nix 2.1%
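Since the project above advertises Docker deployment and an OpenAI-compatible API, a minimal compose sketch might look like the following; the port and the decision to build from the repository are assumptions, not the repo's documented instructions:

services:
  faster-whisper-server:
    build: .            # assumed: build from the repository's Dockerfile
    ports:
      - "8000:8000"     # assumed API port
    restart: unless-stopped

Clients would then use their usual OpenAI transcription calls with the base URL pointed at this container.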