You're actually injecting your source code using volumes:, not during the image build, and this doesn't honor .dockerignore.
Running a Docker application like this happens in two phases:
- You build a reusable image that contains the application runtime, any OS and language-specific library dependencies, and the application code; then
- You run a container based on that image.
The .dockerignore file is only considered during the first build phase.
In your setup, you don't actually COPY anything in the image beyond the requirements.txt file. Instead, you use volumes: to inject parts of the host system into the container. This happens during the second phase, and ignores .dockerignore.
The approach I'd recommend is to skip the volumes: and instead COPY the required source code in the Dockerfile. You should also generally set the default CMD the container will run in the Dockerfile, rather than requiring it in the docker-compose.yml or docker run command.
FROM python:3.9-slim-buster
# Do the OS-level setup _first_ so that it's not repeated
# if Python dependencies change
RUN apt-get update && apt-get install -y ...
WORKDIR /django-app
# Then install Python dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Then copy in the rest of the application
# NOTE: this _does_ honor .dockerignore
COPY . .
# And explain how to run it
ENV PYTHONUNBUFFERED=1
EXPOSE 8000
USER userapp
# consider splitting this into an ENTRYPOINT that waits for the
# database, runs migrations, and then `exec "$@"` to run the CMD
CMD sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000
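The ENTRYPOINT split suggested in the comment could look roughly like this. This is a sketch, not part of the original setup: `pg_isready` (from the postgres client tools) and the `db` host name are assumptions you'd adapt to your environment.

```shell
#!/bin/sh
# entrypoint.sh -- wait for the database, run migrations, then hand
# control to whatever CMD the container was started with.
# ASSUMPTION: pg_isready is installed and PGHOST points at the database.
until pg_isready -h "${PGHOST:-db}" -q; do
  sleep 1
done
python manage.py migrate
# Replace this shell process with the CMD
exec "$@"
```

You'd then reference it from the Dockerfile with `ENTRYPOINT ["./entrypoint.sh"]` and keep `CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]`, which removes the arbitrary `sleep 7`.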
This means, in the docker-compose.yml setup, you don't need volumes:; the application code is already inside the image you built.
version: "3.8"
services:
  app:
    build: .
    ports:
      - 8000:8000
    depends_on:
      - db
    # environment: [PGHOST=db]
    # no volumes: or container_name:
  db:
    image: postgres
    volumes: # do keep for persistent database data
      - ./data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    # ports: ['5433:5432']
This approach also means you need to docker-compose build a new image when your application changes. This is normal in Docker.
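With the bind mount gone, Compose can rebuild the image and recreate the container in one step:

```shell
# Rebuild the image (honoring .dockerignore) and restart the service
docker-compose up --build
```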
For day-to-day development, a useful approach here can be to run all of the non-application dependencies in Docker, but the application itself outside a container.
# Start the database but not the application
docker-compose up -d db
# Create a virtual environment and set it up
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
# Set environment variables to point at the Docker database
export PGHOST=localhost PGPORT=5433
# Run the application locally
./manage.py runserver
Doing this requires making the database visible from outside Docker (via ports:), and making the database location configurable (probably via environment variables, set in Compose with environment:).
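In Django terms, making the database location configurable might look like this in settings.py. This is a sketch: the `PGHOST`/`PGPORT` and `DB_*` variable names follow the Compose and export examples above, and the fallback defaults are assumptions.

```python
import os

# Read the database location from the environment so the same settings
# work inside Compose (PGHOST=db) and locally (PGHOST=localhost).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("PGHOST", "localhost"),
        "PORT": os.environ.get("PGPORT", "5432"),
    }
}
```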
That's not actually your case, but in general an additional cause of ".dockerignore not ignoring" is that patterns are matched against whole paths relative to the context directory, not just against basenames, so the pattern:
__pycache__
*.pyc
applies only to the build context's root directory, not to any of its subdirectories.
In order to make it recursive, change it to:
**/__pycache__
**/*.pyc
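To see why the first form misses files in subdirectories, here is a rough Python sketch of segment-by-segment matching. This is only an illustration of the idea, not Docker's actual matcher (which uses Go's filepath.Match semantics plus `**` extensions).

```python
from fnmatch import fnmatch

def dockerignore_match(pattern: str, path: str) -> bool:
    """Rough sketch: .dockerignore patterns apply to the whole path
    relative to the build context, one path segment at a time."""
    pat_parts = pattern.split("/")
    path_parts = path.split("/")
    if pat_parts[0] == "**":
        # A leading '**' matches any number of directories
        tail = pat_parts[1:]
        return any(
            len(path_parts[i:]) == len(tail)
            and all(fnmatch(p, q) for p, q in zip(path_parts[i:], tail))
            for i in range(len(path_parts))
        )
    # Without '**', the pattern must match the path segment-for-segment
    return len(pat_parts) == len(path_parts) and all(
        fnmatch(p, q) for p, q in zip(path_parts, pat_parts)
    )

print(dockerignore_match("*.pyc", "a.pyc"))         # True: root only
print(dockerignore_match("*.pyc", "src/a.pyc"))     # False: subdirectory
print(dockerignore_match("**/*.pyc", "src/a.pyc"))  # True: recursive
```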
Previously, using a single Dockerfile with its associated .dockerignore, it worked fine. However, after switching to Docker Compose, it no longer works. I will provide my structure below.
I have uploaded my folder structure to https://imgur.com/65bjhat for easy reading.
My Dockerfile is,
# Use the official Python image as a base image
FROM python:3.9
# Prevents Python from writing pyc files to disc (equivalent to python -B option)
ENV PYTHONDONTWRITEBYTECODE 1
# Prevents Python from buffering stdout and stderr (equivalent to python -u option)
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY . /app/
RUN pip install -r requirements.txt
My .dockerignore is,
venv/*
**/migrations
.dockerignore
Dockerfile
images/*
My docker-compose.yaml is
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app:/usr/src/app/
    ports:
      - "8000:8000"
    env_file:
      - ./.dev.env
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=radix_fitness_postgresql_db
volumes:
  postgres_data:
I run this using,
docker compose -f compose-dev.yaml up
However, when looking inside my container, everything listed in my .dockerignore is still present, such as my migrations folders, venv, etc.
The only thing I changed was switching to Docker Compose, and I'd expect the behaviour to be identical, since the Dockerfile inside the app directory should still use its .dockerignore. Could COPY be overriding the behaviour of .dockerignore?