The application and its database could be running on the same host or different hosts, and in Docker or not.
If the application and its database are running on different hosts, there is nothing unusual about setting this up in Docker. Configure your application with the DNS name of the database server. (I would recommend passing this via environment variables rather than modifying the settings.py file.)
Docker Compose syntax:
environment:
  MYSQL_HOST: mysql.example.com
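On the Django side, that environment variable can be read in settings.py. A minimal sketch using only the standard library's os.environ (rather than a helper such as django-environ); the variable names and defaults here are illustrative, not part of the original answer:

```python
import os

# Read database connection details from the environment, falling back
# to local-development defaults. All names here are examples.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.environ.get("MYSQL_DATABASE", "app"),
        "HOST": os.environ.get("MYSQL_HOST", "127.0.0.1"),
        "USER": os.environ.get("MYSQL_USER", "app"),
        "PASSWORD": os.environ.get("MYSQL_PASSWORD", ""),
    }
}
```

Because the host name comes from the environment, the same image can point at a Compose service, a remote server, or host.docker.internal without editing settings.py.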
If both are running in the same Docker setup, then Docker provides an internal DNS service so one can reach the other. In Docker Compose, another service's name (its key under services:) works as a host name; in plain Docker, you need to manually docker network create a network, but then the same trick works.
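As a sketch, a Compose file for this setup might look like the following (the service names, image names, and password are illustrative). The app service reaches the database at the host name mysql because that is the database service's key under services::

```yaml
version: "3.8"
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret   # example value only
    volumes:
      - ./mysql:/var/lib/mysql
  app:
    image: myapp
    environment:
      MYSQL_HOST: mysql   # the service name doubles as a DNS name
    depends_on:
      - mysql
```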
Plain Docker example:
docker network create app
docker run -d --net app --name mysql -e MYSQL_ROOT_PASSWORD=secret -v "$PWD/mysql:/var/lib/mysql" mysql
docker run --net app --name app -e MYSQL_HOST=mysql myapp
If the database is running on the same host as the application, but outside Docker, and the host is a Mac or Windows system running the Docker Desktop application, then there is a special host.docker.internal host name:
docker run -e MYSQL_HOST=host.docker.internal myapp
For a native-Linux host this shortcut doesn't exist by default (on Docker 20.10 and later you can recreate it with --add-host=host.docker.internal:host-gateway); otherwise you need to find out the host's IP address, but then you can treat this like the first case.
Answer from David Maze on Stack Overflow
Hi, the title says it all. I have installed phpMyAdmin, which I primarily use for managing databases. I have created a Django app in Docker which is trying to connect to the database on my machine, and it is showing this error:
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
Here is my Dockerfile
# Dockerfile
FROM python:3.8

# install nginx
RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
COPY nginx.default /etc/nginx/sites-available/default
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

# The environment variable ensures that the Python output is sent straight
# to the terminal without buffering it first
ENV PYTHONUNBUFFERED 1

# create the root directory for our project in the container
RUN mkdir /opt/app

# set the working dir
WORKDIR /opt/app

ENV PORT=8080
EXPOSE 8080

# copy the project files to the working dir
COPY . /opt/app

# install dependencies; change it to production.txt to deploy in the production env
RUN pip install -r requirements/development.txt

CMD ["python", "manage.py", "runserver"]
This is my database config
DATABASES = {
    "default": {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': env.str('DB_DATABASE'),
        'HOST': env.str('DB_HOST'),
        'USER': env.str('DB_USERNAME'),
        'PASSWORD': env.str('DB_PASSWORD'),
    }
}
DATABASES["default"]["ATOMIC_REQUESTS"] = True
Any help would be appreciated....
I have created a fair few small (and one giant sprawling) Django projects that are in use by small groups of consistent people (think work groups).
Up to this point, I've built sites inside python venv's and hosted with Apache mod_wsgi, all on a couple of AWS virtual machines (EC2 instances).
As I make more little Django sites, it seems like it's getting time to move into containers to keep more explicit definitions around package requirements/versions, ease transitions between servers, simplify local testing, etc. It seems like most tutorials out there are for toy projects on bare metal (raises hand) or for using Django with Kubernetes-style dynamic deployment, load balancing, etc.
Does anyone have a good resource for building and deploying relatively simple Django projects in containers? Things like the packaging process, the pros and cons of running the database in the same container / a different container / on bare metal, etc.