You will have to do the following:
Make the content of the SQL file into a ConfigMap, for example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
data:
  mariadb-schema: "DROP DATABASE IF EXISTS test;\n\nCREATE DATABASE IF NOT EXISTS test;"

Make a volume from this ConfigMap in your deployment YAML like this:

volumes:
  - name: mariadb-schema-config-vol
    configMap:
      name: mariadb-config
      defaultMode: 420
      items:
        - key: mariadb-schema
          path: mariadb-schema.sql

And a volume mount like this:

volumeMounts:
  - mountPath: /var/db/config
    name: mariadb-schema-config-vol
Then your init container command will be like:
['sh', '-c', 'psql -a -f /var/db/config/mariadb-schema.sql']
For your second question, make a shell script that reads the environment variables (the DB credentials, which I presume you keep in Secrets and expose as env variables) and then invokes this command:
psql -a -f /var/db/config/mariadb-schema.sql
To make this happen, put the script's content in a ConfigMap and execute the script from a volume mount, just like the example above.
Hope this helps.
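Putting the pieces from this answer together, a minimal sketch of the whole setup might look like this (the Deployment name, app image, and the postgres:16 init image are placeholders I've added for illustration, not part of the original answer):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
data:
  mariadb-schema: "DROP DATABASE IF EXISTS test;\n\nCREATE DATABASE IF NOT EXISTS test;"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
        - name: init-schema
          image: postgres:16    # placeholder: any image that ships psql
          command: ['sh', '-c', 'psql -a -f /var/db/config/mariadb-schema.sql']
          volumeMounts:
            - mountPath: /var/db/config
              name: mariadb-schema-config-vol
      containers:
        - name: app
          image: my-app:latest  # placeholder
      volumes:
        - name: mariadb-schema-config-vol
          configMap:
            name: mariadb-config
            defaultMode: 420
            items:
              - key: mariadb-schema
                path: mariadb-schema.sql
```

Note that the volume mount goes on the init container, since that is where psql reads the file.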
Answer from Amit on Stack Overflow
Was looking into something similar and found the following approach as of 11.5.24 (postgres:16.2 docker image and kubernetes v1.31):
- I had an existing SQL init file init.sql.
- I ran kubectl create configmap initsql --from-file=init.sql --dry-run=client -o yaml to generate a ConfigMap manifest, and saved the file as something like postgres-init.yaml.
- I ran kubectl apply -f postgres-init.yaml.
- I have a standard-fare postgres-deployment.yaml, basically copied from the official examples with a few modifications.
- In volumes, I added:
  volumes:
    - ...
    - name: sql-init-mount
      configMap:
        name: initsql
        items:
          - key: init.sql
            path: init.sql
- In the same postgres-deployment.yaml, I added to the postgres container itself:
  volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: sql-init-mount
- The official postgres Docker image supports automatic initialization of any scripts dropped into /docker-entrypoint-initdb.d.
I had a suspicion that using a ConfigMap might accomplish the same thing without an init container or any commands. Turns out I was correct. You may need to wait a moment, but if you run minikube dashboard, the deployed postgres pod can be found and you can easily exec in (if you're doing this locally) to verify that the file exists (created correctly), then run psql -U postgres -d postgres to log in and query.
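Condensed, the relevant parts of that deployment look like this (the container name and image tag are illustrative, taken from the versions mentioned above):

```yaml
spec:
  containers:
    - name: postgres            # illustrative
      image: postgres:16.2
      volumeMounts:
        - mountPath: /docker-entrypoint-initdb.d
          name: sql-init-mount
  volumes:
    - name: sql-init-mount
      configMap:
        name: initsql
        items:
          - key: init.sql
            path: init.sql
```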
EDIT - since Jul 23, 2015
The official postgres docker image will run .sql scripts found in the /docker-entrypoint-initdb.d/ folder.
So all you need is to create the following sql script:
init.sql
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
and add it in your Dockerfile:
Dockerfile
FROM library/postgres
COPY init.sql /docker-entrypoint-initdb.d/
But since July 8th, 2015, if all you need is to create a user and database, it is easier to just make use of the POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB environment variables:
docker run -e POSTGRES_USER=docker -e POSTGRES_PASSWORD=docker -e POSTGRES_DB=docker library/postgres
or with a Dockerfile:
FROM library/postgres
ENV POSTGRES_USER docker
ENV POSTGRES_PASSWORD docker
ENV POSTGRES_DB docker
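The same environment-variable-only setup translates directly to docker-compose (a sketch; the service name db is arbitrary):

```yaml
services:
  db:                          # arbitrary service name
    image: postgres
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
```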
for images older than Jul 23, 2015
From the documentation of the postgres Docker image, it is said that
[...] it will source any *.sh script found in that directory [/docker-entrypoint-initdb.d] to do further initialization before starting the service
What's important here is "before starting the service". This means your script make_db.sh will be executed before the postgres service would be started, hence the error message "could not connect to database postgres".
After that there is another useful piece of information:
If you need to execute SQL commands as part of your initialization, the use of Postgres single user mode is highly recommended.
Agreed, this can be a bit mysterious at first glance. What it says is that your initialization script should start the postgres service in single-user mode before doing its actions. So you could change your make_db.sh script as follows, and it should get you closer to what you want:
NOTE, this has changed recently in the following commit. This will work with the latest change:
export PGUSER=postgres
psql <<- EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
Previously, the use of --single mode was required:
gosu postgres postgres --single <<- EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
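For reference, a complete *.sh init script as it would be dropped into /docker-entrypoint-initdb.d on a current image might look like this (a sketch following the pattern in the official image documentation; it only runs when the data directory is empty, and needs the initdb-time postgres instance that the entrypoint provides):

```shell
#!/bin/bash
set -e

# Executed by the image entrypoint before the service accepts external
# connections; ON_ERROR_STOP aborts on the first failed statement.
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE USER docker;
    CREATE DATABASE docker;
    GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
```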
By using docker-compose:
Assuming that you have the following directory layout:
$MYAPP_ROOT/docker-compose.yml
/Docker/init.sql
/Docker/db.Dockerfile
File: docker-compose.yml
version: "3.3"
services:
db:
build:
context: ./Docker
dockerfile: db.Dockerfile
volumes:
- ./var/pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
File: Docker/init.sql
CREATE USER myUser;
CREATE DATABASE myApp_dev;
GRANT ALL PRIVILEGES ON DATABASE myApp_dev TO myUser;
CREATE DATABASE myApp_test;
GRANT ALL PRIVILEGES ON DATABASE myApp_test TO myUser;
File: Docker/db.Dockerfile
FROM postgres:11.5-alpine
COPY init.sql /docker-entrypoint-initdb.d/
Composing and starting services:
docker-compose -f docker-compose.yml up --no-start
docker-compose -f docker-compose.yml start
Is there a way to keep PostgreSQL up in between steps?
=> Nope. The idea of steps in a Dockerfile is to prepare the image.
You need something like init.sql:
Dockerfile
FROM library/postgres
COPY init.sql /docker-entrypoint-initdb.d/
In init.sql you can put logic that runs against an empty DB the first time you run the container. On subsequent container startups, if the DB is already initialized, the run of init.sql will be skipped.
See more details here How to create User/Database in script for Docker Postgres
Found a way to run PostgreSQL operations in between steps. Basically I just need to restart the db and wait for it to come up before running anything on it.
Copy the following code into a script that will be run during a step:
psqlversion=11
pg_initd=/etc/init.d/postgresql-$psqlversion
# We need to start postgresql again before modifying the PostgreSQL DB
# since every RUN command in Dockerfile is like a snapshot
$pg_initd start
until /opt/carillon/pg/bin/pg_isready; do sleep 1; done;
For anything after the above snippet, PostgreSQL will be available.
You have to mount the SQL file as a volume from a ConfigMap and use the psql CLI to execute the commands from the mounted file.
To execute commands from a file, change the command parameter in the YAML to this:
psql -a -f /sql/sqlCommands.sql
The ConfigMap needs to be created from the file you intend to mount (more info here):
kubectl create configmap sqlcommands --from-file=sqlCommands.sql
Then you have to add the ConfigMap and the mount statement to your Job YAML and modify the command to use the mounted file.
apiVersion: batch/v1
kind: Job
metadata:
  name: init-db
spec:
  template:
    metadata:
      name: init-db
      labels:
        app: init-postgresdb
    spec:
      containers:
        - image: "docker.io/bitnami/postgresql:11.5.0-debian-9-r60"
          name: init-db
          command: [ "/bin/sh", "-c", "psql -a -f /sql/sqlCommands.sql" ]
          volumeMounts:
            - name: sqlcommand
              mountPath: /sql
          env:
            - name: DB_HOST
              value: "knotted-iguana-postgresql"
            - name: DB_DATABASE
              value: "postgres"
      volumes:
        - name: sqlcommand
          configMap:
            # Provide the name of the ConfigMap containing the files
            # you want to add to the container
            name: sqlcommands
      restartPolicy: OnFailure
You should make a Dockerfile for this first, verify that the image works, and then reference that working Docker image in the Kubernetes Job YAML file.
You can add an entrypoint.sh in the Dockerfile, where you can place the scripts to be executed.
I have a standard postgres container running, with the pg_data volume mapped to a directory on the host machine.
I want to be able to run an init script everytime I build or re-build the container, to run migrations and other such things. However, any script or '.sql' file placed in /docker-entrypoint-initdb.d/ only gets executed if the pg_data volume is empty.
What is the easiest solution to this? At the moment I could make a pg_dump of the pg_data directory, then remove its content and restore from the pg_dump, but that seems pointlessly convoluted and open to errors with potential data loss.
Is there a way to specify an initialization script for Postgres, using docker run?
If I use Docker Compose, this line will copy the script and run it:
volumes: - ./init.sql:/docker-entrypoint-initdb.d/init.sql
However, this syntax doesn't work in the CLI:
$ docker volume create postgres-data
$ docker run --detach \
    --rm \
    --name postgres \
    --publish 5432:5432 \
    --env-file .env \
    --volume postgres-data:/var/lib/postgresql/data \
    --volume ./init.sql:/docker-entrypoint-initdb.d/init.sql \
    postgres:9.6.22
docker: Error response from daemon: create ./init.sql: "./init.sql" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
Is this possible?
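The error message itself points at the fix: a relative host path is parsed as a named volume, so bind mounts passed to docker run need an absolute host path. A minimal sketch (the docker run line is commented out since it needs a running daemon; init.sql is assumed to sit in the current directory):

```shell
# Expand the relative path to an absolute one, e.g. with $(pwd).
HOST_SQL="$(pwd)/init.sql"
echo "$HOST_SQL"   # absolute path, now valid for --volume

# docker run --detach --rm --name postgres \
#   --publish 5432:5432 \
#   --env-file .env \
#   --volume postgres-data:/var/lib/postgresql/data \
#   --volume "$HOST_SQL:/docker-entrypoint-initdb.d/init.sql" \
#   postgres:9.6.22
```
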
If, when you start your Docker Compose, you're getting:
PostgreSQL Database directory appears to contain a database; Skipping initialization
you need to proactively remove the volumes which were set up to store the database.
The command docker-compose down doesn't do this automatically.
You can request removal of volumes like this:
docker-compose down --volumes
Be warned that this will delete any data you had in any database before. You can't get this data back if you remove the volume which contained it!
According to the documentation of the postgres Docker image, you did everything correctly.
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.
But, there is a catch which I think you missed based on log that you posted above.
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
So, I would give it a try: empty the database_data directory and run docker-compose up again.