I used to compile ffmpeg on a Linux machine with MinGW, but now I'm able to compile on a Windows machine, in my case Windows 10.
NOTE: For me it only worked for ffmpeg versions >= 3.0, and I tested using VS 2013 and 2015.
There are only a few steps, but they are very important.
Download and install the following (YASM only needs to be downloaded, not installed):
- Visual Studio 2013 or 2015
- YASM
- MSYS2
Steps:
- Install MSYS2 to a fixed folder (eg.: C:\Dev\msys64)
- Run msys2.exe
- Execute command "pacman -S make gcc diffutils" and press "Y" to install
- Close msys2
- Rename C:\Dev\msys64\usr\bin\link.exe to some other name (eg.: msys2_link.exe)
- Copy and rename the downloaded "yasm-<version>-win64.exe" to "C:\Dev\yasm.exe"
- Add "C:\Dev" to the PATH environment variable
- Run the VS2013/2015 x86 (for x86 builds) or x64 (for x64 builds) Command Prompt
- Execute "C:\Dev\msys64\msys2_shell.cmd -msys -use-full-path"
- On the msys2 window execute "which cl" and you should see the path of your VS
- Execute "which link" and you should also see the path of your VS
- Go to the ffmpeg source path (eg.: "cd /c/ffmpeg3.3")
- Run ./configure and make
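The /c/ffmpeg3.3 form above is MSYS's mapping of C:\ffmpeg3.3. A rough sketch of the rule (requires GNU sed; the win2msys helper name is made up for illustration, and inside MSYS2 you can simply use the bundled cygpath -u instead):

```shell
# Sketch only: convert a Windows path to the /c/... form the MSYS shell uses.
# Requires GNU sed for \L (lowercase); cygpath -u does the same job properly.
win2msys() {
  printf '%s\n' "$1" | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/\L\1|'
}
win2msys 'C:\ffmpeg3.3'   # prints /c/ffmpeg3.3
```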
I use this configuration:
./configure \
--toolchain=msvc \
--arch=x86_64 \
--enable-yasm \
--enable-asm \
--enable-shared \
--enable-w32threads \
--disable-programs \
--disable-ffserver \
--disable-doc \
--disable-static \
--prefix=/c/ffmpeg3.3/DLLS
NOTE 2: If you used the last line, --prefix=/c/ffmpeg3.3/DLLS, then as a final step run make install and the binaries will be copied to that path.
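After make install, a quick way to confirm the build landed where expected is to check the prefix for the core DLLs. The sketch below runs against a throwaway mock directory so it is safe to paste anywhere; for a real build, point PREFIX at /c/ffmpeg3.3/DLLS and drop the mock lines:

```shell
# Mock a prefix so the check can run standalone; replace with the real one.
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/bin"
touch "$PREFIX/bin/avcodec-57.dll" "$PREFIX/bin/avformat-57.dll" \
      "$PREFIX/bin/avutil-55.dll"                    # stand-ins for real output
ok=1
for lib in avcodec avformat avutil; do               # core libraries to verify
  ls "$PREFIX/bin/$lib"-*.dll >/dev/null 2>&1 || { echo "missing: $lib"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "core DLLs present"
```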
Hope it helped.
Best of luck
Answer from tweellt on Stack Overflow

Can anyone tell me how you compile ffmpeg from a GitHub version, preferably into a static build?
Assuming with x64 you mean the standard 64-bit version, yes it is possible. See the FATE page for all tested builds of FFmpeg; there are various 32- and 64-bit versions of Visual Studio in that list, including VS2013 and VS2015 64-bit. Search for "Microsoft (R) C/C++ Optimizing Compiler Version 18.00.40629 for x64" (or "19.00.24215.1") or "VS2013"/"VS2015", all the way at the bottom. For exact build options, see here for 2013 or here for 2015. The important part is to open a Windows shell with the 64-bit command-line build tools in your %PATH%, open an msys shell from there, and then run configure using the --arch=x86_64 --target-os=win64 --toolchain=msvc options. For more detail, see the MSVC compilation wiki page.
This is easier said than done, and has taken me over a month to figure out how to do without any issues, but I've spent enough time on it that I decided I'd document the process well enough to be completed virtually seamlessly by anyone following me.
Unfortunately, Cygwin's default toolchain (i.e. the gcc-core package included with the Cygwin installer) is inherently broken for cross-compiling purposes, and there doesn't seem to be any intent from the Cygwin maintainers to fix this, so currently, the only way to compile software for Windows with Cygwin is to set up a MinGW-w64 toolchain under it. Thankfully, this is as easy as installing a few packages. After this, we'll be compiling the remaining packages, before using a combination of both to compile FFmpeg itself.
Using this Guide
Following this guide in its entirety will build a static FFmpeg installation with external libraries such as fdk-aac, libopus, x265 and the SOX resampler. I may consider adding instructions for compiling specific external libraries to the guide if I get enough requests to do so for a particular library.
The dependencies used by this guide - made up of the MinGW-w64 cross-compile toolchain itself, all packages installed by apt-cyg and all packages compiled from source - will consume up to 2.8GB of disk space, although the guide also includes commands to clean up everything but the FFmpeg installation once done. The installation itself, made up of the binaries and documentation, occupies just over 200MB of disk space.
This guide will create a folder in your Home directory called ffmpeg_sources, where it will download and compile all of the packages being built from source. FFmpeg will be installed to /usr/local, where the FHS standard recommends that software compiled by the user is installed to. This location also has the secondary advantage of being on the system PATH by default in Cygwin, and so doesn't require the $PATH environment variable to be updated.
Install package manager dependencies
To begin with, download the latest version of the Cygwin installer to install the wget, tar, gawk and git packages. The good news is that these packages are dependencies for a tool that can prevent you from ever needing to use the Cygwin installer again.
Install the apt-cyg package manager
Next, install kou1okada's fork of the apt-cyg package manager. If you don't currently use a package manager for Cygwin, this step will not only make the rest of the guide a breeze, but will also make your Cygwin experience rival that of any Linux distribution.
Even if you already use a package manager for Cygwin, such as a different fork of the original apt-cyg, I highly recommend you replace it with this one, which is a much more fully-fledged piece of software compared to the original, as well as the only package manager for Cygwin that is currently in active development.
To install kou1okada's apt-cyg:
mkdir -p /usr/local/src &&
cd /usr/local/src &&
git clone https://github.com/kou1okada/apt-cyg.git &&
ln -s "$(realpath apt-cyg/apt-cyg)" /usr/local/bin/
Install build tools and set up the MinGW-w64 cross-compiler
apt-cyg install \
autoconf \
automake \
binutils \
cmake \
doxygen \
git \
libtool \
make \
mercurial \
mingw64-x86_64-SDL2 \
mingw64-x86_64-binutils \
mingw64-x86_64-fribidi \
mingw64-x86_64-gcc-core \
mingw64-x86_64-gcc-g++ \
mingw64-x86_64-headers \
mingw64-x86_64-libtheora \
mingw64-x86_64-libvpx \
mingw64-x86_64-runtime \
mingw64-x86_64-win-iconv \
mingw64-x86_64-windows-default-manifest \
mingw64-x86_64-zlib \
nasm \
pkg-config \
subversion \
texinfo \
yasm
Compile the dependencies
Each section below compiles an external library that will allow you to compile FFmpeg with support for that library enabled. Copy and paste the whole of each command into your shell.
If you decide you don't require your build of FFmpeg to support a given library, skip its section and remove the corresponding --enable-package line when compiling FFmpeg in the final stage of this guide.
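The mapping is one configure switch per section; as a quick reference, the flags used by the final build in this guide can be generated like so (library names taken from the configure call at the end):

```shell
# Print the --enable-lib* switch for each dependency section in this guide.
for lib in mp3lame x264 x265 vorbis aom opus fdk-aac soxr; do
  printf -- '--enable-lib%s\n' "$lib"
done
```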
Create a directory at the root of your Cygwin installation with the following:
rm -rf /ffmpeg_sources &&
mkdir -p /ffmpeg_sources
This is the directory we'll be downloading our source code to, and compiling it from.
libmp3lame
To compile the LAME audio codec for MP3:
cd /ffmpeg_sources && rm -rf lame-svn &&
svn checkout https://svn.code.sf.net/p/lame/svn/trunk/lame lame-svn &&
cd lame-svn &&
./configure --host=x86_64-w64-mingw32 --prefix="/usr/x86_64-w64-mingw32/sys-root/mingw" \
--enable-static --disable-shared &&
make -j$(nproc) &&
make install
libx264
To compile the x264 video codec:
cd /ffmpeg_sources && rm -rf x264 &&
git clone --depth 1 https://code.videolan.org/videolan/x264.git &&
cd x264 &&
./configure --cross-prefix=x86_64-w64-mingw32- --host=x86_64-w64-mingw32 \
--prefix="/usr/x86_64-w64-mingw32/sys-root/mingw" --enable-static &&
make -j$(nproc) &&
make install
libx265
To compile the x265 video codec:
cd /ffmpeg_sources && rm -rf x265 &&
hg clone https://bitbucket.org/multicoreware/x265 &&
cd x265/build/linux &&
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="/usr/x86_64-w64-mingw32/sys-root/mingw" \
-DENABLE_SHARED=OFF -DCMAKE_EXE_LINKER_FLAGS="-static" ../../source \
-DCMAKE_TOOLCHAIN_FILE="/ffmpeg_sources/x265/build/msys/toolchain-x86_64-w64-mingw32.cmake" &&
make -j$(nproc) &&
make install
libogg/libvorbis
The Ogg format is a dependency for the Vorbis audio codec, so will need to be compiled before it:
cd /ffmpeg_sources && rm -rf ogg &&
git clone --depth 1 https://gitlab.xiph.org/xiph/ogg.git &&
cd ogg && ./autogen.sh &&
./configure --host=x86_64-w64-mingw32 --prefix="/usr/x86_64-w64-mingw32/sys-root/mingw" \
--enable-static --disable-shared &&
make -j$(nproc) &&
make install
Then compile Vorbis as normal:
cd /ffmpeg_sources && rm -rf vorbis &&
git clone --depth 1 https://gitlab.xiph.org/xiph/vorbis.git &&
cd vorbis && ./autogen.sh &&
./configure --host=x86_64-w64-mingw32 --prefix="/usr/x86_64-w64-mingw32/sys-root/mingw" \
--enable-static --disable-shared &&
make -j$(nproc) &&
make install
libaom
To compile the AV1 video encoder:
cd /ffmpeg_sources && rm -rf aom &&
git clone --depth 1 https://aomedia.googlesource.com/aom &&
mkdir -p /ffmpeg_sources/aom/build && cd /ffmpeg_sources/aom/build &&
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="/usr/x86_64-w64-mingw32/sys-root/mingw" \
-DCMAKE_EXE_LINKER_FLAGS="-static" .. \
-DCMAKE_TOOLCHAIN_FILE="/ffmpeg_sources/aom/build/cmake/toolchains/x86_64-mingw-gcc.cmake" &&
make -j$(nproc) &&
make install
libopus
To compile the Opus audio encoder:
cd /ffmpeg_sources && rm -rf opus &&
git clone --depth 1 https://github.com/xiph/opus.git &&
cd opus && ./autogen.sh &&
./configure --host=x86_64-w64-mingw32 --prefix="/usr/x86_64-w64-mingw32/sys-root/mingw" \
--enable-static --disable-shared &&
make -j$(nproc) &&
make install
libfdk-aac
To compile the Fraunhofer FDK encoder for AAC:
cd /ffmpeg_sources && rm -rf fdk-aac &&
git clone --depth 1 https://github.com/mstorsjo/fdk-aac &&
cd fdk-aac && autoreconf -fiv &&
./configure --host=x86_64-w64-mingw32 --prefix="/usr/x86_64-w64-mingw32/sys-root/mingw" \
--enable-static --disable-shared &&
make -j$(nproc) &&
make install
libsoxr
To compile the SOX resampler library, you'll first need to create a CMake toolchain file for the MinGW-w64 toolchain, as the project doesn't include one by default.
Create a new file in the Cygwin root directory, and call it toolchain-x86_64-mingw32.cmake (make sure Windows is showing extensions, and that the extension is .cmake).
Copy and paste the following into the file:
SET(CMAKE_SYSTEM_NAME Windows)
SET(CMAKE_C_COMPILER /usr/bin/x86_64-w64-mingw32-gcc)
SET(CMAKE_CXX_COMPILER /usr/bin/x86_64-w64-mingw32-g++)
SET(CMAKE_RC_COMPILER /usr/bin/x86_64-w64-mingw32-windres)
SET(CMAKE_Fortran_COMPILER /usr/bin/x86_64-w64-mingw32-gfortran)
SET(CMAKE_AR /usr/bin/x86_64-w64-mingw32-ar CACHE FILEPATH "Archiver")
SET(CMAKE_RANLIB /usr/bin/x86_64-w64-mingw32-ranlib CACHE FILEPATH "Ranlib")
SET(CMAKE_FIND_ROOT_PATH /usr/x86_64-w64-mingw32)
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
SET(QT_BINARY_DIR /usr/x86_64-w64-mingw32/bin /usr/bin)
SET(Boost_COMPILER -gcc47)
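Before pointing cmake at this file, it can save a confusing failure later to check that the cross tools it references actually resolve. A small sketch (it only reports; on a machine without the mingw64-x86_64-* packages everything will show as MISSING):

```shell
# Report whether each cross tool named in the toolchain file is on the PATH.
for tool in gcc g++ windres ar ranlib; do
  if command -v "x86_64-w64-mingw32-$tool" >/dev/null 2>&1; then
    echo "x86_64-w64-mingw32-$tool: found"
  else
    echo "x86_64-w64-mingw32-$tool: MISSING"
  fi
done
```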
Now you can compile the SOX resampler as normal:
cd /ffmpeg_sources && rm -rf soxr &&
git clone --depth 1 https://git.code.sf.net/p/soxr/code soxr &&
mkdir -p soxr/build && cd soxr/build &&
cmake -Wno-dev -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX="/usr/x86_64-w64-mingw32/sys-root/mingw" \
-DBUILD_SHARED_LIBS=OFF .. -DCMAKE_TOOLCHAIN_FILE="/toolchain-x86_64-mingw32.cmake" &&
make -j$(nproc) &&
make install
Compile the FFmpeg binary
The only thing that's left to do is compile FFmpeg itself, using the libraries downloaded or compiled above:
cd /ffmpeg_sources && rm -rf ffmpeg &&
wget -O ffmpeg-snapshot.tar.bz2 https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 &&
tar xvf ffmpeg-snapshot.tar.bz2 && rm -f ffmpeg-snapshot.tar.bz2 && cd ffmpeg &&
export CFLAGS=-I/usr/x86_64-w64-mingw32/sys-root/mingw/include &&
export LDFLAGS=-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib &&
export PKG_CONFIG_PATH= &&
export PKG_CONFIG_LIBDIR=/usr/x86_64-w64-mingw32/sys-root/mingw/lib/pkgconfig &&
./configure \
--arch=x86_64 \
--target-os=mingw32 \
--cross-prefix=x86_64-w64-mingw32- \
--prefix=/usr/local \
--pkg-config=pkg-config \
--pkg-config-flags=--static \
--extra-cflags=-static \
--extra-ldflags=-static \
--extra-libs="-lm -lz -fopenmp" \
--enable-static \
--disable-shared \
--enable-nonfree \
--enable-gpl \
--enable-avisynth \
--enable-libaom \
--enable-libfdk-aac \
--enable-libfribidi \
--enable-libmp3lame \
--enable-libopus \
--enable-libsoxr \
--enable-libvorbis \
--enable-libvpx \
--enable-libx264 \
--enable-libx265 &&
make -j$(nproc) &&
make install
Remember to remove the --enable-<library> line for each library in the list above that you didn't download or compile.
Compiling FFmpeg will take much longer than compilation of the external libraries, but once it's done, you should have a fully working binary enabled with all of the libraries you compiled it with. To run it, simply run ffmpeg in the Cygwin terminal.
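A minimal smoke test, assuming nothing beyond the steps above (it just prints the banner line, or a note if the binary is not on the PATH):

```shell
# Print the first line of `ffmpeg -version` if the binary is reachable.
if command -v ffmpeg >/dev/null 2>&1; then
  banner=$(ffmpeg -version 2>/dev/null | head -n1)
else
  banner="ffmpeg not on PATH"
fi
echo "$banner"
```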
Clean up/uninstall
By this point in the guide, you will have taken up around 2.8 GB of disk space with downloading, installing and compiling. The majority of this is now redundant, and should be cleaned up. More than 2.6 GB of it can be safely purged, which brings the total footprint of our FFmpeg installation down to as little as 200MB.
Post-install clean up
Running the following will free up more than 2.3GB of disk space:
apt-cyg remove \
cmake \
doxygen \
git \
mercurial \
subversion \
texinfo \
yasm &&
rm -rf /ffmpeg_sources &&
rm -rf /usr/local/lib/{libav*,libpost*,libsw*} &&
rm -rf /usr/local/lib/pkgconfig/{libav*,libpost*,libsw*} &&
rm -rf /usr/local/include/{libav*,libpost*,libsw*} &&
rm -rf /toolchain-x86_64-mingw32.cmake
As well as removing the ffmpeg_sources directory and unneeded static libraries, this will also remove any packages installed earlier that are no longer needed, except for those that are commonly needed for building tools on Cygwin/Linux.
Remove the cross-compiler
If you no longer intend to compile any other programs using the MinGW-w64 cross-compiling toolchain built earlier in this guide, you can safely uninstall it, as well as all the remaining packages installed earlier:
apt-cyg remove \
autotools \
autoconf \
automake \
gcc-core \
gcc-g++ \
pkg-config \
libtool \
make \
nasm \
mingw64-x86_64-SDL2 \
mingw64-x86_64-binutils \
mingw64-x86_64-fribidi \
mingw64-x86_64-gcc-core \
mingw64-x86_64-gcc-g++ \
mingw64-x86_64-headers \
mingw64-x86_64-libtheora \
mingw64-x86_64-libvpx \
mingw64-x86_64-runtime \
mingw64-x86_64-win-iconv \
mingw64-x86_64-windows-default-manifest \
mingw64-x86_64-zlib &&
rm -rf /usr/x86_64-w64-mingw32
This will free up an additional ~450 MB of space.
Uninstalling FFmpeg
If you ever need to reverse all of the steps in this guide and purge the FFmpeg binaries from your system, simply run the following:
apt-cyg remove \
autotools \
autoconf \
automake \
binutils \
cmake \
doxygen \
gcc-core \
gcc-g++ \
git \
libtool \
make \
mercurial \
mingw64-x86_64-SDL2 \
mingw64-x86_64-binutils \
mingw64-x86_64-fribidi \
mingw64-x86_64-gcc-core \
mingw64-x86_64-gcc-g++ \
mingw64-x86_64-headers \
mingw64-x86_64-libtheora \
mingw64-x86_64-libvpx \
mingw64-x86_64-runtime \
mingw64-x86_64-win-iconv \
mingw64-x86_64-windows-default-manifest \
mingw64-x86_64-zlib \
nasm \
pkg-config \
subversion \
texinfo \
yasm &&
rm -rf /ffmpeg_sources &&
rm -rf /usr/local/bin/{ffmpeg,ffprobe,ffplay} &&
rm -rf /usr/local/lib/{libav*,libpost*,libsw*} &&
rm -rf /usr/local/lib/pkgconfig/{libav*,libpost*,libsw*} &&
rm -rf /usr/local/include/{libav*,libpost*,libsw*} &&
rm -rf /usr/local/share/doc/ffmpeg &&
rm -rf /usr/local/share/ffmpeg &&
rm -rf /usr/local/share/man/man1/ff* &&
rm -rf /usr/local/share/man/man3/{libav*,libpost*,libsw*} &&
rm -rf /usr/x86_64-w64-mingw32/ &&
rm -rf /toolchain-x86_64-mingw32.cmake
This will remove everything installed during the process of this guide, and revert your system to exactly how it was before starting it.
Instead of doing everything manually, you can use the media-autobuild_suite for Windows, which builds FFmpeg with almost all of its dependencies:
This Windows batch script sets up a MinGW/GCC compiler environment for building ffmpeg and other media tools under Windows. After building the environment, it retrieves and compiles all tools. All tools are statically compiled; no external .dlls are needed (with some optional exceptions).
The script gets continuously updated, and for most users, it will be the preferred way to get FFmpeg compiled under Windows.
FFmpeg is indeed a powerful video encoder/decoder tool¹. It operates in the command line, as opposed to using a GUI. Command line is that black window you find by typing [windows+r], then cmd in the popup field and hitting enter. This is also called "command prompt". Once setup, you enter FFmpeg commands in one of these windows to use it.
Here are the basic steps to "install" and use it:
Installation
- Download the latest FFmpeg build, courtesy of gyan.dev.
- Create a folder on your computer to unpack the zip file. This folder will be your "installation" folder. I chose C:\Program Files\ffmpeg\. This is a good idea because you will treat this like a regular program. Unpack the zip file into this folder.
- The folder should now contain a number of other folders, including one titled bin where ffmpeg.exe is saved. We're not done yet. Double clicking that file does nothing. Remember, this is a command line program. It runs in cmd.
- Before you can use ffmpeg.exe in cmd you have to tell your computer where it can find it. You need to add a new system path. First, right click This PC (Windows 10) or Computer (Windows 7), then click Properties > Advanced System Settings > Advanced tab > Environment Variables.
- In the Environment Variables window, click the "Path" row under the "Variable" column, then click Edit.
- The "Edit environment variable" window looks different for Windows 10 and 7. In Windows 10, click New then paste the path to the folder that you created earlier where ffmpeg.exe is saved. For this example, that is C:\Program Files\ffmpeg\bin\. In Windows 7, all the variables are listed in a single string, separated by a semicolon. Simply go to the end of the string, type a semicolon (;), then paste in the path.
- Click Ok on all the windows we just opened up. Just to be sure, reboot your computer before trying any commands.
FFmpeg is now "installed". The Command Prompt will now recognize FFmpeg commands and will attempt to run them. (If you are still having issues with Command Prompt not recognizing FFmpeg, try running cmd as an admin. Alternatively, you can use Windows PowerShell instead of cmd. If it still does not work, double check to make sure each step was followed to completion.)
Alternative installation methods
I've not tried these myself, but they probably work, and they're easy to do. However, you can accidentally mess up important things if you're not careful.
First, if you open cmd with administrator privileges, you can run setx /m PATH "C:\ffmpeg\bin;%PATH%", changing C:\ffmpeg\bin to your path to FFmpeg. This uses cmd to do all the GUI steps listed above. Easy peasy.
Second, user K7AAY reports that you can simply drop the FFmpeg executables in C:\Windows\System32 and run them from there without having to define the path variable because that path is already defined.
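Whichever method you use, the quick check from cmd is where ffmpeg, which prints the executable's location if the PATH entry took effect. The same membership test sketched in shell form, against a made-up PATH value:

```shell
# Demo only: WIN_PATH is a fabricated PATH string, not read from the system.
WIN_PATH='C:\Windows;C:\Windows\System32;C:\Program Files\ffmpeg\bin'
case ";$WIN_PATH;" in
  *';C:\Program Files\ffmpeg\bin;'*) result="on PATH" ;;
  *)                                 result="not on PATH" ;;
esac
echo "$result"
```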
Updating FFmpeg
To update FFmpeg, just revisit the download page in step 1 above and download the zip file. Unpack the files and copy them over the old files in the folder you created in step 2.
Using FFmpeg
Using FFmpeg requires that you open a command prompt window, then type FFmpeg specific commands. Here is a typical FFmpeg command:
ffmpeg -i video.mp4 -vn -ar 44100 -ac 1 -b:a 32k -f mp3 audio.mp3
This command has four parts:
- ffmpeg - This tells cmd that we want to run FFmpeg. cmd will first look for ffmpeg.exe in one of the folders from step 6 in the Installation section. If it is found, it will attempt to run the command.
- -i video.mp4 - This is the input file, the file we are going to be doing work on.
- -vn -ar 44100 -ac 1 -b:a 32k -f mp3 - These are the "arguments", mini commands that specify exactly what we want to do. In this case, they say: create an MP3 file from the input source.
- -vn - Leave out the video stream.
- -ar 44100 - Sets the audio sample rate in hertz.
- -ac 1 - Audio channels, only 1. This is effectively "make mono".
- -b:a 32k - Audio bitrate, set to 32 kbps.
- -f mp3 - Force MP3 conversion. Without this, FFmpeg attempts to interpret what you want based on the extension you use in the output file name.
- audio.mp3 - This is the output file.
As you can probably guess, this short command makes an MP3 audio file from an MP4 file.
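The -b:a 32k flag also lets you predict the output size: the bitrate divided by 8 gives bytes per second, times the duration. For a hypothetical 3-minute input:

```shell
# 32 kbps = 32000 bits/s; divide by 8 for bytes, multiply by 180 seconds.
bytes=$(( 32000 / 8 * 180 ))
echo "$bytes bytes"   # 720000 bytes, roughly 0.7 MB for the audio stream
```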
To run this command, assuming you have an MP4 file to try this on, follow these steps:
- Hit the Windows key + r.
- Type cmd then hit enter.
- Change the path to where the file is that you want to work on: type cd [path]. It should look something like cd C:\Users\name\Desktop\.
- Now type the FFmpeg command with the name of your input file. The command will run with some feedback. When it's done, cmd will be available for more commands.
This is the basic way to use FFmpeg. The commands can get far more complicated, but that's only because the program has so much power. Using the FFmpeg documentation, you can learn all the commands and create some very powerful scripts. After that, you can save these scripts into a .bat file so that you just have to double click a file instead of type out the whole command each time. For example, this answer contains a script that will create MP3's from all the MP4's in a folder. Then we would be combining the power of FFmpeg with the power of cmd, and that's a nice place to be when you have to do professional quality video/audio encoding on mountains of files.
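As a sketch of that batch idea (shown as a shell loop; a .bat would use a FOR loop instead), with echo making it a dry run so nothing is actually converted; remove the echo to run for real. The demo .mp4 files are created in a temp dir just so the glob matches something:

```shell
# Create dummy inputs in a temp dir so the loop has files to iterate over.
dir=$(mktemp -d) && cd "$dir" && touch a.mp4 b.mp4
for f in *.mp4; do
  # Dry run: echoes the command it would run for each file.
  echo ffmpeg -i "$f" -vn -ar 44100 -ac 1 -b:a 32k -f mp3 "${f%.mp4}.mp3"
done
```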
- As a point of technical accuracy, FFmpeg is itself not an encoder or decoder. FFmpeg is a multimedia framework which can process almost all common and many uncommon media formats. It has thousands of options to capture, decode, encode, modify, combine, and stream media, and it can make use of dozens of external libraries to do even more. Gyan.dev provides a succinct description.
The other answer covers the default way of installing FFmpeg very well; I'd like to propose two other methods that are good for noobs and pros alike:
Option 1
Chocolatey is a package manager, it's a bit like the Microsoft Store, except that it's actually useful, it's all free, and it runs on the commandline. With chocolatey, installing ffmpeg—and setting up the correct $PATH etc.—is as simple as
choco install ffmpeg
It's way quicker and far safer than searching for the right website, finding the download, unzipping it, reading the installation documentation, googling how to set it up, downloading some dependency, and so on.
To install Chocolatey you run a command on the commandline, obvs. The website shows you how, but it is a simple cut-n-paste affair. https://chocolatey.org/
You can then check out over 6000 free packages available with choco list <search term here>. There are even non-CLI programs so it's not just for the hardcore. It makes setting up a new install of windows super easy: I have a list of software that I always install and just get chocolatey to do it for me: choco install firefox ffmpeg conemu edgedeflector ditto rainmeter imagemagick… and so on.
As an added bonus upgrading your software is as easy as choco upgrade all
Option 2
Winget is another package manager that is built into recent versions of Windows. The recipe for installing ffmpeg with winget is similar: open a terminal (can be PowerShell, wsl, or even cmd if you like banging rocks together) and type
winget install ffmpeg
This will download and install the build of ffmpeg that is hosted by gyan (at the time of writing). It will also work if you don't have admin privileges, unlike Chocolatey, which prefers them.
Complete C++ FFMPEG SETUP GUIDE 2024
Very lengthy answer, but this will teach you FFMPEG to play back video with audio from the bottom to the top, like a Wizard!
How to add FFMPEG library to your code
FFMPEG is the largest open-source multimedia framework for video playback and codecs. FFMPEG requires building if you need the include files, libs and DLLs to use in a C/C++ program.
Step 1. Download this video - https://archive.org/download/NyanCatoriginal_201509 the download link is here: https://archive.org/download/NyanCatoriginal_201509/Nyan%20Cat%20%5Boriginal%5D.mp4
Step 2. Use a free audio extractor to split the audio and video, e.g. I used this website https://restream.io/tools/audio-extractor to upload the video "Nyan cat [original]" and clicked split audio - then downloaded the audio file.
Step 3. Name the video file "nyan_cat_video.mp4" and the audio file "nyan_cat_audio.mp3".
Step 4. Initialise SDL in your program
Step 5. Build FFMPEG following steps below
WINDOWS - you can build with MSYS2 by running the below commands in the MinGW-64 shell.
First install MSYS2 from the GitHub releases page here - https://github.com/msys2/msys2-installer/releases/tag/nightly-x86_64
Download the msys2-x86_64-latest.exe and follow the instructions to install; generally choose the C:/ drive as the installation location, e.g. C:/msys2.
Start the mingw64 shell within it (as it's the easiest to work with) and run the below so that the MSYS2 Linux-like environment on Windows has all the dependencies to build FFMPEG:
pacman -Syu --noconfirm &&
pacman -S --noconfirm mingw-w64-x86_64-toolchain \
mingw-w64-x86_64-SDL2 mingw-w64-x86_64-SDL2_image mingw-w64-x86_64-SDL2_mixer \
mingw-w64-x86_64-SDL2_ttf mingw-w64-x86_64-yasm mingw-w64-x86_64-gtest make git
Then, once the Linux-like environment is set up on your computer, you can build by running the below commands in the same MSYS2 shell. Note you can choose to disable the command-line programs during configure with --disable-programs, e.g. ./configure --prefix=C:/ffmpeg --enable-shared --enable-gpl --disable-programs, if you don't want them. Note that running the configure command may take some time - be patient.
mkdir -p C:/ffmpeg
cd C:/ffmpeg
git clone https://git.ffmpeg.org/ffmpeg.git .
./configure --prefix=C:/ffmpeg --enable-shared --enable-gpl
make -j$(nproc)
make install
Copy the lib and include files to your C++ compiler's locations:
Copy-Item -Path C:/ffmpeg/lib/pkgconfig -Destination "C:/msys2/mingw64/bin" -Recurse -Force
Copy-Item -Path C:/ffmpeg/bin/* -Destination "C:/msys2/mingw64/bin" -Recurse -Force
Copy-Item -Path C:/ffmpeg/include/* -Destination "C:/msys2/mingw64/include" -Recurse -Force
Copy-Item -Path C:/ffmpeg/lib/* -Destination "C:/msys2/mingw64/lib" -Recurse -Force
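Note the Copy-Item commands above are PowerShell. If you prefer to stay inside the MSYS2 shell, the equivalents are plain cp -r; sketched below against mock directories so it runs anywhere (substitute /c/ffmpeg and /mingw64 for a real install):

```shell
# Mock source/destination trees standing in for C:/ffmpeg and /mingw64.
src=$(mktemp -d) && dst=$(mktemp -d)
mkdir -p "$src/include/libavcodec" "$dst/include"
touch "$src/include/libavcodec/avcodec.h"      # stand-in for the real headers
cp -r "$src"/include/* "$dst/include/"         # same effect as Copy-Item -Recurse
ls "$dst/include/libavcodec"                   # avcodec.h
```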
Copy the DLLs to your project's directory, next to your main.cpp, for example:
Copy-Item -Path C:/msys2/mingw64/bin/avcodec-61.dll C:/Documents/nyancat -Force
Copy-Item -Path C:/msys2/mingw64/bin/avformat-61.dll C:/Documents/nyancat -Force
Copy-Item -Path C:/msys2/mingw64/bin/avutil-59.dll C:/Documents/nyancat -Force
Copy-Item -Path C:/msys2/mingw64/bin/swresample-5.dll C:/Documents/nyancat -Force
Link to your main program (I included all SDL libs for ease of use)
-lSDL2 -lSDL2_image -lSDL2_ttf -lSDL2_mixer
-lavformat -lavcodec -lavutil -lswscale -lavfilter -mwindows
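Putting those flags together, the full link line would look roughly like this (echoed as a dry run; the nyancat output name is made up, and this assumes the MSYS2 mingw64 shell from the steps above):

```shell
# Dry run: print the g++ invocation implied by the flag lists above.
SDL_LIBS='-lSDL2 -lSDL2_image -lSDL2_ttf -lSDL2_mixer'
FF_LIBS='-lavformat -lavcodec -lavutil -lswscale -lavfilter'
echo g++ main.cpp -o nyancat $SDL_LIBS $FF_LIBS -mwindows
```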
MACOS - no need to build; brew already has it. Run in the CLI: brew install ffmpeg. Note you will need to use brew to install all the SDL libs. No DLL required. Link to your main program (I included all SDL libs for ease of use):
-lSDL2 -lSDL2_image -lSDL2_ttf -lSDL2_mixer -lavformat -lavcodec -lavutil -lswscale
LINUX - run below code
Note you will need to use apt etc. to install all the SDL libs. Note you can choose to disable the command-line programs during configure with --disable-programs, e.g. ./configure --prefix=${HOME}/ffmpeg/install --enable-shared --enable-gpl --disable-programs, if you don't want them. Note that running the configure command may take some time - be patient.
mkdir -p ${HOME}/ffmpeg && cd ${HOME}/ffmpeg
git clone https://git.ffmpeg.org/ffmpeg.git .
./configure --prefix=${HOME}/ffmpeg/install --enable-shared --enable-gpl
make -j$(nproc)
sudo make install
Then copy the FFmpeg include and lib files into /usr/include and /usr/lib:
sudo cp -rf ${HOME}/ffmpeg/install/include/* /usr/include
sudo cp -rf ${HOME}/ffmpeg/install/lib/* /usr/lib
Copy the shared libraries (.so files, the Linux equivalent of DLLs) from /usr/lib to your project's directory so these sit next to your main.cpp:
sudo cp /usr/lib/libavutil.so ~/Documents/nyancat
sudo cp /usr/lib/libswscale.so ~/Documents/nyancat
sudo cp /usr/lib/libavformat.so ~/Documents/nyancat
sudo cp /usr/lib/libavcodec.so ~/Documents/nyancat
sudo cp /usr/lib/libswresample.so ~/Documents/nyancat
link the following
"-lavformat", "-lavcodec", "-lavutil", "-lswscale", "-lswresample",
EXAMPLE
Step 6. Include this class in your desired program:
#include "FFmpegPlayer.h"
Step 7. You will need SDL so that you can pass a window/renderer. If you don't know how to use SDL I suggest you first learn from YouTube + ChatGPT. That's how I learnt. Then in your code enter the below
FFmpegVideoPlayer videoPlayer("nyan_cat_video.mp4", "nyan_cat_audio.mp3", window, renderer);
Step 8. Play the video. It will block the calling thread until it ends, like any standard function call. So within your main loop you can call it like this:
int main() {
std::cout << "Meow!" << std::endl;
sdl_start();
// Play Intro video for game/movie etc.
FFmpegVideoPlayer videoPlayer("nyan_cat_video.mp4", "nyan_cat_audio.mp4", window, renderer);
videoPlayer.playVideo();
// Then start the rest of the program
sdl_run();
sdl_exit();
return 0;
}
Step 9. A complete example of steps 6 - 8 is below for your ease of use; name this file main.cpp:
#include <iostream>
#include <SDL2/SDL.h>
#include "FFmpegPlayer.h"
constexpr int SCREEN_WIDTH = 1280;
constexpr int SCREEN_HEIGHT = 720;
SDL_Window *window{};
SDL_Renderer *renderer{};
void sdl_start() {
if (SDL_Init(SDL_INIT_EVERYTHING) != 0) {
std::cerr << "Error: Failed to initialise SDL: " << SDL_GetError() << std::endl;
exit(1);
} else {
std::cerr << "Success: initialised: SDL2" << std::endl;
}
window = SDL_CreateWindow("BubbleUp", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE | SDL_WINDOW_MAXIMIZED);
if (!window) {
std::cerr << "Error: Failed to create SDL Window: " << SDL_GetError() << std::endl;
SDL_Quit();
exit(1);
} else {
std::cerr << "Success: initialised: SDL2 window" << std::endl;
}
renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
if (!renderer) {
std::cerr << "Error: Failed to create Renderer: " << SDL_GetError() << std::endl;
SDL_DestroyWindow(window);
SDL_Quit();
exit(1);
} else {
std::cerr << "Success: initialised: SDL2 renderer" << std::endl;
}
SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND); // Set blend mode
}
void handle(bool &quitEventLoop) {
SDL_Event e;
while (SDL_PollEvent(&e)) {
if (e.type == SDL_QUIT) {
quitEventLoop = true;
}
}
}
void update() {
// Update game state here
}
void draw() {
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255); // Black background
SDL_RenderClear(renderer);
// Draw game elements here
SDL_RenderPresent(renderer);
}
void sdl_run() {
bool quitEventLoop = false;
while (!quitEventLoop) {
handle(quitEventLoop);
update();
draw();
}
}
void sdl_exit() {
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(window);
SDL_Quit();
}
int main(int argc, char *argv[]) {
std::cout << "Meow!" << std::endl;
sdl_start();
// Play Intro video for game/movie etc.
FFmpegVideoPlayer videoPlayer("nyan_cat_video.mp4", "nyan_cat_audio.mp3", window, renderer);
videoPlayer.playVideo();
// Then start the rest of the program
sdl_run();
sdl_exit();
return 0;
}
Step 10. Create a file called FFmpegPlayer.h and fill it with the code below:
#ifndef FFMPEGPLAYER_H
#define FFMPEGPLAYER_H
/*
Author: ENTER YOUR NAME HERE AVATAR
Dated: ENTER TODAYS DATE HERE
Minimum C++ Standard: C++17
Purpose: Class Declaration file
License: MIT License
*/
#include <iostream>
#include <cstdlib>
#include <SDL2/SDL.h>
#include <SDL2/SDL_ttf.h>
#include <SDL2/SDL_image.h>
#include <SDL2/SDL_mixer.h>
extern "C"
{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/channel_layout.h>
}
class FFmpegVideoPlayer
{
private:
const char *videoFile{}; /**< the video filepath to play */
const char *musicFile{}; /**< the audio filepath to play, split from the video; this code cannot isolate the audio stream itself, so the file must be split manually beforehand */
SDL_Window *window{}; /**< the SDL Window to play the video on passed as a reference in an existing SDL app */
SDL_Renderer *renderer{}; /**< the SDL renderer to play the video on passed as a reference in an existing SDL app */
AVFormatContext *formatCtx{}; /**< the video file */
int videoStreamIndex{}; /**< the found and split video stream */
int audioStreamIndex{}; /**< the found and split audio stream */
const AVCodec *videoCodec{}; /**< the found video codec */
const AVCodec *audioCodec{}; /**< the found audio codec */
AVCodecContext *videoCodecCtx{}; /**< the allocated video codec context */
AVCodecContext *audioCodecCtx{}; /**< the allocated audio codec context */
Mix_Music *music{}; /**< the SDL mixer to play the audio on passed as a reference in an existing SDL app */
public:
/**
* @brief default FFmpegVideoPlayer constructor
*/
FFmpegVideoPlayer(const char *videoFile, const char *musicFile, SDL_Window *window, SDL_Renderer *renderer)
: videoFile(videoFile), musicFile(musicFile), window(window), renderer(renderer), formatCtx(nullptr),
videoStreamIndex(-1), audioStreamIndex(-1), videoCodec(nullptr), audioCodec(nullptr),
videoCodecCtx(nullptr), audioCodecCtx(nullptr), music(nullptr)
{
avformat_network_init();
// Open the input file
if (int err = avformat_open_input(&formatCtx, videoFile, nullptr, nullptr); err < 0)
{
char errbuf[AV_ERROR_MAX_STRING_SIZE];
av_strerror(err, errbuf, AV_ERROR_MAX_STRING_SIZE); // report the FFmpeg error code, not errno
std::cerr << "Error: Could not open Video file: " << videoFile << " - " << errbuf << std::endl;
return;
}
// // Debug: Printing stream information for debugging
// std::cout << "Debug: Printing stream information for debugging:" << std::endl;
// av_dump_format(formatCtx, 0, videoFile, 0);
// Find the stream information
if (int err = avformat_find_stream_info(formatCtx, nullptr); err < 0)
{
char errbuf[AV_ERROR_MAX_STRING_SIZE];
av_strerror(err, errbuf, AV_ERROR_MAX_STRING_SIZE); // report the FFmpeg error code, not errno
std::cerr << "Error: Could not find stream information: " << errbuf << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
// Find the first video stream and audio stream
for (unsigned int i = 0; i < formatCtx->nb_streams; i++)
{
if (formatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && videoStreamIndex < 0)
{
videoStreamIndex = i;
std::cout << "Found Video Stream: Index " << videoStreamIndex << std::endl;
// // Debug: Print video codec information
// std::cout << "Debug: Video Codec Information:" << std::endl;
// av_dump_format(formatCtx, videoStreamIndex, videoFile, 0);
}
else if (formatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO && audioStreamIndex < 0)
{
audioStreamIndex = i;
std::cout << "Found Audio Stream: Index " << audioStreamIndex << std::endl;
// // Debug: Print audio codec information
// std::cout << "Debug: Audio Codec Information:" << std::endl;
// av_dump_format(formatCtx, audioStreamIndex, videoFile, 0); // Changed this line
}
}
if (videoStreamIndex == -1 && audioStreamIndex == -1)
{
std::cerr << "Error: Could not find video or audio stream." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
// Open video codec
if (videoStreamIndex >= 0)
{
videoCodecCtx = avcodec_alloc_context3(nullptr);
if (!videoCodecCtx)
{
std::cerr << "Error: Could not allocate video codec context." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
avcodec_parameters_to_context(videoCodecCtx, formatCtx->streams[videoStreamIndex]->codecpar);
videoCodec = avcodec_find_decoder(videoCodecCtx->codec_id);
if (!videoCodec)
{
std::cerr << "Error: Could not find video codec." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
if (avcodec_open2(videoCodecCtx, videoCodec, nullptr) < 0)
{
std::cerr << "Error: Could not open video codec." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
}
// Open audio codec
if (audioStreamIndex >= 0)
{
audioCodecCtx = avcodec_alloc_context3(nullptr);
if (!audioCodecCtx)
{
std::cerr << "Error: Could not allocate audio codec context." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
avcodec_parameters_to_context(audioCodecCtx, formatCtx->streams[audioStreamIndex]->codecpar);
audioCodec = avcodec_find_decoder(audioCodecCtx->codec_id);
if (!audioCodec)
{
std::cerr << "Error: Could not find audio codec." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
if (avcodec_open2(audioCodecCtx, audioCodec, nullptr) < 0)
{
std::cerr << "Error: Could not open audio codec." << std::endl;
avformat_close_input(&formatCtx); // Close input on failure
return;
}
// Initialize SDL Mixer
if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 4096) == -1)
{
std::cerr << "Error: Failed to open audio channel: " << Mix_GetError() << std::endl;
return;
}
// Initialize music with SDL Mixer
music = Mix_LoadMUS(musicFile);
if (!music)
{
std::cerr << "Error: Failed to load music file: " << Mix_GetError() << std::endl;
return;
}
}
}
/**
* @brief default FFmpegVideoPlayer deconstructor
*/
~FFmpegVideoPlayer()
{
if (videoCodecCtx)
{
avcodec_free_context(&videoCodecCtx);
}
if (audioCodecCtx)
{
avcodec_free_context(&audioCodecCtx);
}
if (formatCtx)
{
avformat_close_input(&formatCtx);
}
if (music)
{
Mix_FreeMusic(music);
}
Mix_CloseAudio();
}
/**
* @brief play audioFile
*
* The audio playback pipeline is below
*
* Original file -> split into videoFile/musicFile -> musicFile -> initialise SDL_Mixer -> Mix_PlayMusic -> playAudio() -> playVideo() {playAudio()}
*/
void playAudio() const
{
// Play music
if (Mix_PlayMusic(music, 0) == -1)
{
std::cerr << "Error: Failed to play music: " << Mix_GetError() << std::endl;
return;
}
}
/**
* @brief play the video and call the audio at the same time
*
* The video playback pipeline is below
*
* Original file -> split into videoFile/musicFile -> videoFile -> videoStream -> while (packet -> frame -> texture -> renderer)
*
* There is an input event loop because it's important to allow moving/resizing/closing the window while the video is playing,
* in case you haven't initialised the event loop outside of this function yet.
*
*/
void playVideo() const
{
// Check if constructor initialised successfully
if (!formatCtx || !videoCodecCtx || !audioCodecCtx)
{
std::cerr << "Error: FFmpeg initialization failed." << std::endl;
return;
}
// Create SDL texture
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
videoCodecCtx->width, videoCodecCtx->height);
if (texture == nullptr)
{
std::cerr << "Error: Could not create SDL texture: " << SDL_GetError() << std::endl;
return;
}
// Allocate frame
AVFrame *frame = av_frame_alloc();
AVPacket packet;
// Calculate delay to achieve desired frame rate
double frame_delay = 1.0 / av_q2d(formatCtx->streams[videoStreamIndex]->avg_frame_rate);
// Start audio playback
playAudio();
// Start video playback loop
bool end_of_stream = false; // Flag to track end of video stream
while (!end_of_stream) // Loop until end of stream
{
// Read video frames
while (!end_of_stream && av_read_frame(formatCtx, &packet) >= 0)
{
if (packet.stream_index == videoStreamIndex)
{
avcodec_send_packet(videoCodecCtx, &packet);
if (avcodec_receive_frame(videoCodecCtx, frame) < 0)
{
// The decoder needs more packets before a frame is ready; don't render a stale frame
av_packet_unref(&packet);
continue;
}
SDL_RenderClear(renderer);
// Update the video texture
SDL_UpdateYUVTexture(texture, nullptr,
frame->data[0], frame->linesize[0],
frame->data[1], frame->linesize[1],
frame->data[2], frame->linesize[2]);
// Render the video frame
SDL_RenderCopy(renderer, texture, nullptr, nullptr);
SDL_RenderPresent(renderer);
// Delay the video so that it plays at the same speed as video stream
SDL_Delay((Uint32)(frame_delay * 1000)); // Convert to milliseconds
/*
It's necessary to have an event loop here.
The reason is that while the video is playing, if you move the window,
SDL expects event-handling logic to take care of this event,
but because there are no cases for events the app will hang or crash.
Thus we run a minimal event loop here; as a bonus, pressing any button skips the cutscene,
which can be customised.
*/
SDL_Event event;
while (SDL_PollEvent(&event))
{
switch (event.type)
{
case SDL_QUIT:
case SDL_KEYDOWN:
case SDL_CONTROLLERBUTTONDOWN:
case SDL_MOUSEBUTTONDOWN:
case SDL_FINGERDOWN:
end_of_stream = true;
break;
}
}
}
av_packet_unref(&packet);
}
// The inner loop only exits when av_read_frame stops returning packets (end of stream) or the user skipped
end_of_stream = true;
}
// Cleanup
av_frame_free(&frame);
SDL_DestroyTexture(texture);
}
};
#endif //FFMPEGPLAYER_H
Step 11. Ensure that your list of files looks like the below. Note that I included the shared libraries for both Linux (.so) and Windows (.dll). If you're just building on Windows, only copy the DLLs over. The reason I have both is to make the software multiplatform (portable).
avcodec-61.dll
avformat-61.dll
avutil-59.dll
FFmpegPlayer.h
libavcodec.so
libavformat.so
libavutil.so
libcurl-4.dll
libswresample.so
libswscale.so
main.cpp
nyan_cat_audio.mp3
nyan_cat_video.mp4
SDL2.dll
SDL2_image.dll
SDL2_mixer.dll
SDL2_TTF.dll
swresample-5.dll
Step 12. Build by running the below in the command line:
PS C:\Users\Sumeet\Documents\nyancat> g++ main.cpp -o main -lmingw32 -lSDL2main -lSDL2 -lSDL2_image -lSDL2_ttf -lSDL2_mixer -lavcodec -lavformat -lavutil -lswscale -lswresample
PS C:\Users\Sumeet\Documents\nyancat> ./main.exe
Meow!
Success: initialised: SDL2
Success: initialised: SDL2 window
Success: initialised: SDL2 renderer
Found Video Stream: Index 0
Found Audio Stream: Index 1
PS C:\Users\Sumeet\Documents\nyancat>
Step 13. Done. Be proud of yourself Avatar!
Step 14 (OPTIONAL) CMake. Install CMake on your computer by running the command for your OS in your CLI:
macOS: brew install cmake
Linux: sudo apt install cmake
Windows: winget install CMake
Then create a CMakeLists.txt and enter the code below:
cmake_minimum_required(VERSION 3.28)
project(nyancat)
# Set the C++ standard
set(CMAKE_CXX_STANDARD 17)
# Add your source files
add_executable(nyancat main.cpp
FFmpegPlayer.h)
# Include directories for FFmpeg
target_include_directories(nyancat PRIVATE C:/ffmpeg/include)
# Link directories for FFmpeg (target_link_directories works after add_executable; plain link_directories would only affect targets created later)
target_link_directories(nyancat PRIVATE C:/ffmpeg/lib)
# Link SDL + FFmpeg libraries (mingw32 is optional as that's for my Windows MSYS2 setup)
target_link_libraries(nyancat
mingw32
SDL2main
SDL2
SDL2_image
SDL2_ttf
SDL2_mixer
avformat
avcodec
avutil
swscale
swresample
)
Then run the below to have CMake configure and compile the software, producing the executable (nyancat.exe / nyancat):
mkdir build
cd build
cmake ..
make
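If the executable cannot find the DLLs from step 11 at runtime, one option is to have CMake copy them next to the built binary. This is only a sketch; it assumes the DLLs sit in the source directory as in the step 11 listing, and you would list each DLL you actually ship:

```cmake
# Sketch: copy runtime DLLs beside the executable after each build
# (assumes the DLLs live in the source directory, as in the step 11 file listing)
add_custom_command(TARGET nyancat POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
        ${CMAKE_SOURCE_DIR}/SDL2.dll
        ${CMAKE_SOURCE_DIR}/avcodec-61.dll
        $<TARGET_FILE_DIR:nyancat>)
```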
A quick way of integrating FFmpeg into a C++ project is to use the prebuilt package from Conan.
In this case you will need:
- The Conan CLI installed: https://conan.io
- To configure the ffmpeg dependency in your project, using this as a reference.
There is an ffmpeg package ready in ConanCenter.
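As a sketch, a minimal conanfile.txt for this setup might look like the below; the version number is only an example, so check ConanCenter for the currently available recipes:

```ini
[requires]
ffmpeg/6.1

[generators]
CMakeDeps
CMakeToolchain
```

You would then typically run conan install . --build=missing before the CMake configure step so the generated toolchain and find-package files are in place.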