GitHub
github.com › dnaprawa › dockerfile-best-practices
GitHub - dnaprawa/dockerfile-best-practices: Dockerfile Best Practices · GitHub
Each RUN instruction in your Dockerfile creates an additional layer in your final image. The best practice is to limit the number of layers to keep the image lightweight.
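As a hedged illustration of that advice (the package choice is arbitrary, not from the repo), consecutive RUN steps can be chained with `&&` so they produce a single layer:

```dockerfile
# Three RUN instructions would produce three layers; cleanup in the
# last one cannot shrink the image, because the package lists already
# exist in an earlier layer:
#   RUN apt-get update
#   RUN apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*

# Chained into one instruction, all three steps share a single layer
# and the removed files never land in the final image:
FROM debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```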
TestDriven.io
testdriven.io › blog › docker-best-practices
Docker Best Practices for Python Developers | TestDriven.io
February 12, 2024 - Use `pip install --no-cache-dir <package>`. Sample hadolint output:
Dockerfile:9 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.
Dockerfile:17 DL3025 warning: Use arguments JSON notation for CMD and ENTRYPOINT arguments
You can see it in action online at https://hadolint.github.io/hadolint/. There's also a VS Code extension.
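A sketch of what resolving those two hadolint findings could look like (the packages and command are placeholders, not from the article):

```dockerfile
FROM python:3.12-slim

# DL3059: consecutive RUN instructions consolidated into one
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# DL3025: JSON (exec) notation for CMD, so no shell wraps the process
CMD ["python", "app.py"]
```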
Discussions

dockerfile - Docker non-root User Best Practices for Python Images? - Stack Overflow
I have been building some python Docker images recently. Best practice is obviously not to run containers as root user and remove sudo privileges from the non-privileged user. But I have been wonde...
stackoverflow.com
The Perfect Python Dockerfile - better performance and security

Those are nice and surprising insights, especially on the performance side.

I wonder however what's the point of using virtual envs in a Docker container?

Virtual envs are necessary on your local machine to separate different projects with different requirements, but in the Docker context your code should be fairly isolated already.

r/Python
May 19, 2021
What's the best docker image to have a python environment so I could run my scripts on there instead of on Windows? I'd prefer not to use a VM.
Doesn't Docker for Windows run containers in a VM?
r/homelab
October 23, 2023
GitHub
github.com › openshift › dockerexec › blob › master › vendor › src › github.com › docker › docker › docs › sources › articles › dockerfile_best-practices.md
dockerexec/vendor/src/github.com/docker/docker/docs/sources/articles/dockerfile_best-practices.md at master · openshift/dockerexec
Dockerfiles adhere to a specific format and use a specific set of instructions. You can learn the basics on the Dockerfile Reference page. If you’re new to writing Dockerfiles, you should start there. This document covers the best practices and methods recommended by Docker, Inc.
Author   openshift
GitHub
gist.github.com › frankmeza › 61d97b2fd31ed5e005eb6128bd33210d
NOTES: Dockerfile best practices · GitHub
source: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

Dockerfiles are a list of steps to build an image:

FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py

FROM - creates a layer from the ubuntu:18.04 Docker image.
GitHub
github.com › orgs › python-poetry › discussions › 1879
Document docker poetry best practices · python-poetry · Discussion #1879
It would be great to have >=1 example Dockerfile on best practices for using poetry with a Dockerfile (e.g., a Dockerfile for a VS Code devcontainer). I would like to switch to poetry, but I'm not sure how to best modify my current Dockerfile setup that I use for my devcontainer environments:

# Use an official Python runtime as a base image
FROM python:3.11-slim-buster

# Set the working directory in the container
WORKDIR /my_app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc curl gnupg2 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Copy the requirements.txt file into the container
COPY requirements.txt ./

# Copy the package files into the container
COPY ./my_app ./

# Install any needed Python packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
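One possible poetry adaptation of that setup, as a sketch rather than an official recommendation (the pinned poetry version and the `POETRY_VIRTUALENVS_IN_PROJECT` choice are assumptions; a pyproject.toml and poetry.lock are assumed to exist):

```dockerfile
FROM python:3.11-slim

# The version pin is illustrative; pinning avoids surprise upgrades
ENV POETRY_VERSION=1.8.3 \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    PIP_NO_CACHE_DIR=1

RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc curl gnupg2 && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /my_app

RUN pip install "poetry==$POETRY_VERSION"

# Copy only the dependency manifests first so this layer is cached
# until the dependencies themselves change
COPY pyproject.toml poetry.lock ./
RUN poetry install --no-root --no-interaction

# Then copy the application code
COPY ./my_app ./my_app
```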
GitHub
github.com › danielstankw › Dockerfile-best-practices
GitHub - danielstankw/Dockerfile-best-practices: Compilation of Dockerfile best practices/ optimizations
Running containers with a non-root user is a critical security best practice that helps prevent container breakout attacks and limits potential damage from compromised applications.
Author   danielstankw
Divio
divio.com › blog › best-practices-writing-dockerfiles
Best Practices for Writing Dockerfiles
March 21, 2023 - Values can also be injected at build time; note that docker build has no "-e" flag, so this is done with build arguments, e.g. docker build -t my_img --build-arg GITHUB_API_KEY=$GITHUB_API_KEY together with a matching ARG instruction in the Dockerfile. Building off #3, let's say you want to set certain values in your Dockerfile dynamically, at build time. For example, you might want to build different versions of your image that use different versions of the base Python image, such as one version for Python 2.7 and another for Python 3.10.
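In a Dockerfile that build-time value is declared with ARG; a sketch (the 3.10 default is illustrative):

```dockerfile
# An ARG before FROM lets the build argument select the base image tag
ARG PYTHON_VERSION=3.10
FROM python:${PYTHON_VERSION}-slim
```

Built with e.g. `docker build --build-arg PYTHON_VERSION=3.12 -t my_img .`; without the flag, the declared default is used.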
GitHub
github.com › FuriKuri › docker-best-practices
GitHub - FuriKuri/docker-best-practices: Collection of some docker practices · GitHub
Docker Captain Alex Ellis has provided a set of Dockerfiles for ARM on Github for common software such as Node.js, Python, Consul and Nginx: alexellis/docker-arm
GitHub
github.com › juan131 › dockerfile-best-practices
GitHub - juan131/dockerfile-best-practices: Best Practices writing a Dockerfile
Best Practices writing a Dockerfile. Contribute to juan131/dockerfile-best-practices development by creating an account on GitHub.
Languages   JavaScript 55.5% | Dockerfile 44.5%
GitHub
github.com › chrislevn › dockerfile-practices
GitHub - chrislevn/dockerfile-practices: Good practices on writing Dockerfile
June 13, 2023 - Adhering to these best practices can help meet these requirements and ensure that your containerized applications pass security audits. ...

FROM ubuntu:20.04
# Set working directory
WORKDIR /app
# Copy application files
COPY . /app
# Create a non-root user
RUN groupadd -r myuser && useradd -r -g myuser myuser
# Set ownership and permissions
RUN chown -R myuser:myuser /app
# Switch to the non-root user
USER myuser
# Run the application
CMD ["python3", "app.py"]
Snyk
snyk.io › blog › best-practices-containerizing-python-docker
Best practices for containerizing Python applications with Docker | Snyk
November 11, 2021 - But it creates another problem: you'd be adding all the system-level dependencies from the image we used for compiling the dependencies to the final Docker base image — and we don't want that to happen (remember our best practice to achieve as small a Docker base image as possible). With that first option ruled out, let's explore the second: using a virtualenv. If we do that, we would end up with the following Dockerfile.

FROM python:3.10-slim as build
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
    build-essential gcc

WORKDIR /usr/app
RUN python -m venv /usr/app/venv
ENV PATH="/usr/app/venv/bin:$PATH"

COPY requirements.txt .
Top answer
1 of 2

In general, the easiest safe approach is to do everything in your Dockerfile as the root user until the very end, at which point you can declare an alternate USER that gets used when you run the container.

FROM ???
# Debian adduser(8); this does not have a specific known uid
RUN adduser --system --no-create-home nonroot

# ... do the various install and setup steps as root ...

# Specify metadata for when you run the container
USER nonroot
EXPOSE 12345
CMD ["my_application"]

For your more specific questions:

Is installing packages with apt-get as root ok?

It's required; apt-get won't run as non-root. If you have a base image that switches to a non-root user you need to switch back with USER root before you can run apt-get commands.
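A minimal sketch of that switch (the base image and user name are placeholders, not from the answer):

```dockerfile
# Hypothetical base image that already runs as a non-root user
FROM some-base-image
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
# Return to the unprivileged user once the root-only work is done
USER nonroot
```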

Best location to install these packages?

The normal system location. If you're using apt-get to install things, it will put them in /usr and that's fine; pip install will want to install things into the system Python site-packages directory; and so on. If you're installing things by hand, /usr/local is a good place for them, particularly since /usr/local/bin is usually in $PATH. The "user home directory" isn't a well-defined concept in Docker and I wouldn't try to use it.

When installing python packages with pip as root, I get the following warning...

You can in fact ignore it, with the justification you state. There are two common paths to using pip in Docker: the one you show where you pip install things directly into the "normal" Python, and a second path using a multi-stage build to create a fully-populated virtual environment that can then be COPYed into a runtime image without build tools. In both cases you'll still probably want to be root.

Anything else I am missing or should be aware of?

In your Dockerfile:

## get UID/GID of host user for remapping to access bindmounts on host
ARG UID
ARG GID

This is not a best practice, since it means you'll have to rebuild the image whenever someone with a different host uid wants to use it. Create the non-root user with an arbitrary uid, independent from any specific host user.
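For example (a sketch; 10001 is just a conventional "some non-root uid", not required by anything):

```dockerfile
# Fixed, arbitrary uid/gid, independent of any particular host user
RUN groupadd --gid 10001 nonroot && \
    useradd --uid 10001 --gid 10001 --no-create-home nonroot
USER nonroot
```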

RUN usermod -aG sudo flaskuser

If your "non-root" user has unrestricted sudo access, they are effectively root. sudo has some significant issues in Docker and is never necessary, since every path to run a command also has a way to specify the user to run it as.

RUN chown flaskuser:users /tmp/requirements.txt

Your code and other source files should have the default root:root ownership. By default they will be world-readable but not writeable, and that's fine. You want to prevent your application from overwriting its own source code, intentionally or otherwise.
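A sketch of that layout, assuming a nonroot user already exists: the source tree keeps its default root:root ownership, and only a directory the application genuinely writes to is chowned:

```dockerfile
WORKDIR /app
# Copied files default to root:root, world-readable but not writable
COPY . .
# Grant the runtime user write access only where it is actually needed
RUN mkdir -p /app/data && chown nonroot:nonroot /app/data
USER nonroot
```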

RUN chmod -R  777 /usr/local/lib/python3.11/site-packages/*

chmod 0777 is never a best practice. It gives unprivileged code a place to write out malware payloads and execute them. For a typical Docker setup you don't need chmod at all.

The bind mounted workspace is only for development, for a production image I would copy the necessary files/artifacts into the image/container.

If you use a bind mount to overwrite all of the application code with content from the host, then you're not actually running the code from the image, and some or all of the Dockerfile's work will just be lost. This means that, when you go to production without the bind mount, you're running an untested setup.

Since your development environment will almost always be different from your production environment in some way, I'd recommend using a non-Docker Python virtual environment for day-to-day development, have good (pytest) unit tests that can run outside the container, and do integration testing on the built container before deploying.

Permission issues can also come up if your application is trying to write out files to a host directory. The best approach here is to restructure your application to avoid it, storing the data somewhere else, like a relational database. In this answer I discuss permission setup for a bind-mounted data directory, though that sounds a little different from what you're asking about here.

2 of 2

Thanks again for your extensive explanations David.

I had to digest all of that and after some more reading on the topic I finally grasped everything you said (so I hope).

The reason I first added the user with a UID/GID matching the host user was that, when I started, I ran my containers on my NAS, which only allows SSH as root. So running the container with root while the project folder is owned by another user would result in permission issues when the container user tried to access the bind-mounted files. Back then I did not quite understand all of that, so I carried along the false idea that the container user must always match the host user id.

So I have changed my Dockerfile to use an arbitrary user like you suggested, removed all the unnecessary chown/chmod and I can run this successfully on my local macbook and on a VPS I am currently testing out.

## ################################################################
## WEB Builder Stage
## ################################################################
FROM python:3.10-slim-buster AS builder

## ----------------------------------------------------------------
## Install Packages
## ----------------------------------------------------------------
RUN apt-get update \
    && apt-get install -y libmariadb3 libmariadb-dev \
    && apt-get install -y gcc \
    ## cleanup
    && apt-get clean \
    && apt-get autoclean \
    && apt-get autoremove --purge  -y \
    && rm -rf /var/lib/apt/lists/*

## ----------------------------------------------------------------
## Add venv
## ----------------------------------------------------------------
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

## ----------------------------------------------------------------
## Install python packages
## ----------------------------------------------------------------
COPY ./requirements.txt /tmp/requirements.txt
RUN python3 -m pip install --upgrade pip \
 && python3 -m pip install wheel \
 && python3 -m pip install  --disable-pip-version-check --no-cache-dir -r /tmp/requirements.txt




## ################################################################
## Final Stage
## ################################################################
FROM python:3.10-slim-buster

## ----------------------------------------------------------------
## add user so we can run things as non-root
## (--disabled-password and --gecos "" keep Debian's adduser from
## prompting interactively during the build)
## ----------------------------------------------------------------
RUN adduser --disabled-password --gecos "" flaskuser

## ----------------------------------------------------------------
## Copy from builder and set ENV for venv
## ----------------------------------------------------------------
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

## ----------------------------------------------------------------
## Set Python ENV
## ----------------------------------------------------------------
ENV PYTHONUNBUFFERED=1 \
    PYTHONPATH="${PYTHONPATH}:/workspace/web/app:/opt/venv/bin:/opt/venv/lib/python3.10/site-packages"

## ----------------------------------------------------------------
## Copy app files into container
## ----------------------------------------------------------------
WORKDIR /workspace/web
COPY . .

## ----------------------------------------------------------------
## Switch to non-privileged user and run app
## the entrypoint script runs either uwsgi or the flask dev server
## depending on FLASK_ENV
## ----------------------------------------------------------------
USER flaskuser
CMD ["/workspace/web/docker-entrypoint.sh"]

If I want to run the container on my NAS (from the NAS host CLI with root) using bind mounts, I can still do so by using a docker-compose.override.yml that will contain

 myservice:
   user: "{UID}:{GID}"

where "{UID}:{GID}" are matching my host user who owns the bind mounted folder.

But I am also gonna change this. I am developing and testing only locally now and might use my NAS as a sort of first integration environment where I will just test the fully built containers/images pulled from a registry (so no need for bind mounts anymore).

I also started to use multi-stage builds, which, besides making the final images way smaller, should hopefully also decrease the attack surface by not including unnecessary build dependencies.

Particle Filters
sassafras13.github.io › DockerBestPractices
Docker Best Practices
The good people at Docker have some best practices that they recommend. First and foremost, they recommend that you do everything you can to keep your images as small as possible [1]. A good way to do this right off the bat is to be careful about what you choose for your base image: for example, don't choose the full official Ubuntu 18.04 image if you just need a Python 3 base image [1].
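For instance, a sketch of that choice (the image tags are illustrative):

```dockerfile
# A general-purpose OS base plus a manual Python install pulls in far
# more than the application needs:
#   FROM ubuntu:18.04
#   RUN apt-get update && apt-get install -y python3

# Starting from a slim Python image gives a much smaller baseline:
FROM python:3-slim
```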
Reddit
reddit.com › r/django › the perfect python dockerfile - better performance and security
r/django on Reddit: The Perfect Python Dockerfile - better performance and security
May 24, 2021 -

Having a reliable Dockerfile as your base can save you hours of headaches and bigger problems down the road.

https://luis-sena.medium.com/creating-the-perfect-python-dockerfile-51bdec41f1c8

This article shares a Dockerfile base that has been battle-tested through many different projects.

This can also serve as a succinct tutorial of the different features/commands used to improve the final image.

Nothing is perfect I know! Please feel free to provide any feedback and we can iterate on the shared Dockerfile if needed.

Top answer
1 of 5
One of your perks for using a virtual environment is "Easy to copy packages folder between multi-stage builds", but if you run pip as a non-root user and install your packages with --user you can get the same benefit without a virtual environment.

Also your final L29 has COPY . ., but that's going to copy your files as the root user instead of your custom user. Using USER myuser alone isn't enough to make COPY commands be owned by that user. You need to explicitly --chown them with the COPY command.

Also, what happens if your application requires a package that needs C dependencies? For example the PostgreSQL package requires this. You'll end up having to install a number of apt packages to handle that.

You may also want to use python as the user instead of myuser, because other official Docker images use the name of the programming runtime as the name of the user. If the Python image ever creates a user for you by default in the future, it'll probably end up being named python.

You may also want to consider setting your PYTHONPATH and moving all of that gunicorn configuration into its own file.

You also have a bunch of individual ENV instructions. Those are going to create a layer for each one, but you can combine them all into 1 layer by adding them in 1 call. You may also want to set a USER env variable because certain command line tools expect that env variable to exist, and if it doesn't you could get unexpected side effects.

There's also handling assets too, such as running collectstatic but only in production mode, and likely also copying assets from a multi-stage Webpack stage.

If you're curious, I have a full example of the above at https://github.com/nickjj/docker-django-example . I'll also be talking about that Docker / Docker Compose set up during a live demo in a few days at DockerCon.
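The --chown fix described above could look like this sketch (the myuser name comes from the comment; the paths are illustrative):

```dockerfile
USER myuser
WORKDIR /app
# USER alone does not change ownership of files copied afterwards;
# --chown makes the copied files belong to the runtime user
COPY --chown=myuser:myuser . .
```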
2 of 5
Nice write-up! A few comments:

You included 3.9-slim and 3.9-slim-buster in your benchmarks, but they're actually the same image. If you look at the tags list on Docker Hub, you'll find that both are aliases to the same image. I'm curious why you had so much variation between the two benchmarks as well... maybe there are some other confounding variables.

You can further improve the build time by merging neighboring ENV or RUN steps into a single step. For example:

ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV="/home/myuser/venv"
ENV PATH="/home/myuser/venv/bin:$PATH"

can be turned into:

ENV PYTHONUNBUFFERED=1 \
    VIRTUAL_ENV="/home/myuser/venv" \
    PATH="/home/myuser/venv/bin:$PATH"

which will create a single image layer instead of 3.
Tchut-Tchut Blog
beenje.github.io › blog › posts › dockerfile-anti-patterns-and-best-practices
Dockerfile anti-patterns and best practices | Tchut-Tchut Blog
March 16, 2017 -

FROM python:3.6
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT ["python"]
CMD ["app.py"]

With this Dockerfile, the RUN pip command will only be re-run when the requirements.txt file changes.
Docker
docs.docker.com › manuals › docker build › best practices
Best practices | Docker Docs
# syntax=docker/dockerfile:1
FROM ubuntu:24.04
RUN apt-get -y update && apt-get install -y --no-install-recommends python3
Docker
docs.docker.com › guides › python › containerize your app
Containerize a Python application
This section walks you through containerizing and running a Python application. The sample application uses the popular FastAPI framework. Clone the sample application to use with this guide. Open a terminal, change directory to a directory that you want to work in, and run the following command to clone the repository: $ git clone https://github.com/estebanx64/python-docker-example && cd python-docker-example
GitHub
github.com › glours › dockerfile-best-practices
GitHub - glours/dockerfile-best-practices · GitHub
Contribute to glours/dockerfile-best-practices development by creating an account on GitHub.
Author   glours
ca, duh
caduh.com › home › blog › dockerfile best practices — fast builds, small images, safer containers
Dockerfile Best Practices — fast builds, small images, safer containers
September 18, 2025 -

# syntax=docker/dockerfile:1.7
FROM python:3.12-slim AS build
WORKDIR /app
ENV PIP_DISABLE_PIP_VERSION_CHECK=1 PIP_NO_CACHE_DIR=1
COPY pyproject.toml poetry.lock* requirements*.txt* ./
RUN --mount=type=cache,target=/root/.cache/pip python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Prefer pyproject/poetry; fall back to requirements.txt
RUN bash -lc 'if [ -f poetry.lock ]; then pip install poetry && poetry install --only main --no-root; elif ls requirements*.txt >/dev/null 2>&1; then pip install -r requirements.txt; fi'

FROM python:3.12-slim AS run
ENV VIRTUAL_ENV=/opt/venv PATH="/opt/venv/bin:$PATH"
WORKDIR /app
COPY --from=build /opt/venv /opt/venv
COPY .
Collabnix
collabnix.com › 10-essential-docker-best-practices-for-python-developers-in-2025
Top 10 Docker Best Practices for Python Developers
August 4, 2025 - This comprehensive guide covers 10 essential Docker best practices specifically tailored for Python developers.