You will have to do the following:
Make the content of the SQL file into a ConfigMap like this, for example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
data:
  mariadb-schema: |
    DROP DATABASE IF EXISTS test;

    CREATE DATABASE IF NOT EXISTS test;

Make a volume from this ConfigMap in your deployment YAML like this:

volumes:
  - name: mariadb-schema-config-vol
    configMap:
      name: mariadb-config
      defaultMode: 420
      items:
        - key: mariadb-schema
          path: mariadb-schema.sql

And a volume mount like this:

volumeMounts:
  - mountPath: /var/db/config
    name: mariadb-schema-config-vol

Then your init container command will be like:

['sh', '-c', 'psql -a -f /var/db/config/mariadb-schema.sql']
For your second question, make a shell script that reads the env variables (the DB credentials; I presume you keep them in Secrets and expose them as env variables) and then invokes this command:
psql -a -f /var/db/config/mariadb-schema.sql
To make this happen, put the content of this script in a ConfigMap and execute the script from a volume mount, just like the example above.
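A minimal sketch of such a script, assuming the Secret exposes variables named DB_HOST, DB_USER, DB_NAME and DB_PASSWORD (adjust the names to whatever your Secret actually provides):

```shell
#!/bin/sh
# Sketch of an init script (assumed variable names): read DB credentials
# from env vars populated by a Secret, then apply the mounted schema file.
set -eu

SQL_FILE="${SQL_FILE:-/var/db/config/mariadb-schema.sql}"

# Render the psql invocation from the env-provided credentials
# (useful for logging what will be run).
build_psql_cmd() {
  printf 'psql -h %s -U %s -d %s -a -f %s' \
    "$DB_HOST" "$DB_USER" "$DB_NAME" "$SQL_FILE"
}

# Actually run it; PGPASSWORD is how psql reads the password
# non-interactively.
run_init() {
  echo "running: $(build_psql_cmd)"
  PGPASSWORD="$DB_PASSWORD" \
    psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -a -f "$SQL_FILE"
}
```

Ship this script in a ConfigMap, mount it into the init container, and run it as the container command.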
Hope this helps.
Answer from Amit on Stack Overflow to "How to mount a sql file in an Init Container in order to bootstrap a Postgres Database".
Was looking into something similar and found the following approach as of 11.5.24 (postgres:16.2 docker image and kubernetes v1.31):
- I had an existing SQL init file, init.sql.
- I ran kubectl create configmap initsql --from-file=init.sql --dry-run=client -o yaml to generate a ConfigMap manifest, and saved it to a file named something like postgres-init.yaml.
- I ran kubectl apply -f postgres-init.yaml.
- I have a standard-fare postgres-deployment.yaml, basically copied from the official examples with a few modifications.
- In volumes, I added:

  volumes:
    # ...
    - name: sql-init-mount
      configMap:
        name: initsql
        items:
          - key: init.sql
            path: init.sql

- In the same postgres-deployment.yaml I added to the postgres container itself:

  volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: sql-init-mount

- The official postgres Docker image supports automatic initialization of any scripts dropped into /docker-entrypoint-initdb.d.
I had a suspicion that using a ConfigMap might accomplish the same thing without an init container or any custom commands. It turns out I was correct. You may need to wait a moment, but if you run minikube dashboard, the deployed postgres pod can be found and you can easily exec in (if you're doing this locally) to verify that the file exists and was created correctly, then run psql -U postgres -d postgres to log in and query.
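Putting the pieces above together, the relevant fragment of the Deployment spec looks roughly like this (the container name and image tag are placeholders):

```yaml
# Fragment of a Deployment pod spec: mount the ConfigMap created from
# init.sql into the directory the postgres image scans on first init.
spec:
  containers:
    - name: postgres          # placeholder container name
      image: postgres:16.2
      volumeMounts:
        - mountPath: /docker-entrypoint-initdb.d
          name: sql-init-mount
  volumes:
    - name: sql-init-mount
      configMap:
        name: initsql
        items:
          - key: init.sql
            path: init.sql
```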
You have to mount the SQL file as a volume from a ConfigMap and use the psql CLI to execute the commands from the mounted file.
To execute commands from a file, you can change the command parameter in the YAML to this:
psql -a -f sqlCommand.sql
The ConfigMap needs to be created from the file you intend to mount:
kubectl create configmap sql-commands --from-file=sqlCommand.sql
Then you have to add the ConfigMap and the mount statement to your Job YAML and modify the command to use the mounted file.
apiVersion: batch/v1
kind: Job
metadata:
  name: init-db
spec:
  template:
    metadata:
      name: init-db
      labels:
        app: init-postgresdb
    spec:
      containers:
        - image: "docker.io/bitnami/postgresql:11.5.0-debian-9-r60"
          name: init-db
          command: [ "/bin/sh", "-c", "psql -a -f /sqlCommand.sql" ]
          volumeMounts:
            - name: sql-commands
              mountPath: /sqlCommand.sql
              subPath: sqlCommand.sql
          env:
            - name: DB_HOST
              value: "knotted-iguana-postgresql"
            - name: DB_DATABASE
              value: "postgres"
      volumes:
        - name: sql-commands
          configMap:
            # Provide the name of the ConfigMap containing the files you want
            # to add to the container
            name: sql-commands
      restartPolicy: OnFailure
You should first make a Dockerfile for this, build and test the image, and then reference that working image in the Kubernetes Job YAML file.
You can add an entrypoint.sh in the Dockerfile, where you can place the scripts to be executed.
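A minimal sketch of such an image; the script name entrypoint.sh and the base image tag are assumptions, and the script itself would contain your psql calls:

```dockerfile
# Sketch: custom image whose entrypoint runs your initialization scripts.
FROM postgres:16
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```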
I finally decided to take the approach of creating a ConfigMap with the script we want to run and then referencing this ConfigMap from inside the volume.
This is a short explanation:
In my pod.yaml file there is a volumeMount at /pgconf, which is the directory from which the Docker image reads any SQL script and runs it when the pod is starting. Inside volumes I put the ConfigMap name (postgres-init-script-configmap), which is the name defined inside the configmap.yaml file.
There is no need to create the ConfigMap with kubectl; the pod will take the configuration from the ConfigMap file as long as you place it in the same directory as pod.yaml.
my POD yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: "{{.Values.container.name.primary}}"
  labels:
    name: "{{.Values.container.name.primary}}"
spec:
  securityContext:
    fsGroup: 26
  restartPolicy: {{default "Always" .Values.restartPolicy}}
  containers:
    - name: {{.Values.container.name.primary}}
      image: "{{.Values.image.repository}}/{{.Values.image.container}}:{{.Values.image.tag}}"
      ports:
        - containerPort: {{.Values.container.port}}
      env:
        - name: PGHOST
          value: /tmp
        - name: PG_PRIMARY_USER
          value: primaryuser
        - name: PG_MODE
          value: primary
      resources:
        requests:
          cpu: {{ .Values.resources.cpu }}
          memory: {{ .Values.resources.memory }}
      volumeMounts:
        - mountPath: /pgconf
          name: init-script
          readOnly: true
  volumes:
    - name: init-script
      configMap:
        name: postgres-init-script-configmap
My configmap.yaml (which contains the SQL script that will initialize the DB):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-script-configmap
data:
  setup.sql: |-
    CREATE USER david WITH PASSWORD 'david';
It depends on what exactly your init script does, but init containers should be helpful in such cases. Init containers run before the main application container starts and can do preparation work such as creating configuration files.
You would still need your own Docker image, but it doesn't have to be the same image as the database one.
This may help. Here I have added a ConfigMap, PersistentVolume, PersistentVolumeClaim, and Postgres Deployment YAML. This YAML will automatically create a table named users in the Postgres database inside the Postgres container. Thanks.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  Postgres_DB: postgresdb
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
stringData:  # stringData accepts plain-text values; data would require base64-encoded values
  Postgres_User: postgresadmin
  Postgres_Password: admin123
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres-container
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "sleep 20 && PGPASSWORD=$POSTGRES_PASSWORD psql -w -d $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE TABLE IF NOT EXISTS users (userid SERIAL PRIMARY KEY, username TEXT, password TEXT, token TEXT, type TEXT);'"]
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: Postgres_DB
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: Postgres_User
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: Postgres_Password
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
If you find this helpful, please mark it as answer.
You have multiple ways.
The official postgresql Docker image states (in the "Initialization scripts" section):
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d
Those scripts are only run when the database is first created; e.g. if you start or restart a pod with a data volume containing an already existing database, those scripts will not be launched.
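That "first creation only" behavior comes from the entrypoint checking whether the data directory is empty. A simplified sketch of that decision (not the actual entrypoint code):

```shell
# Simplified sketch of the decision the official entrypoint makes:
# the /docker-entrypoint-initdb.d scripts run only when the data
# directory (PGDATA) is empty, i.e. on first initialization.
should_run_init_scripts() {
  pgdata="$1"
  if [ -z "$(ls -A "$pgdata" 2>/dev/null)" ]; then
    echo yes   # fresh volume: initdb runs, then the init scripts
  else
    echo no    # existing database: init scripts are skipped
  fi
}
```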
With Kubernetes, you can provide a ConfigMap with the needed file (if the file size is below 1 MB) or provide a volume with your initialization file.
Another option can be the application itself. For instance you may use FlywayDB or Liquibase embedded in your application (Spring Boot does that transparently).
I have a standard postgres container running, with the pg_data volume mapped to a directory on the host machine.
I want to be able to run an init script every time I build or re-build the container, to run migrations and other such things. However, any script or .sql file placed in /docker-entrypoint-initdb.d/ only gets executed if the pg_data volume is empty.
What is the easiest solution to this? At the moment I could make a pg_dump of the pg_data directory, then remove its content and restore from the pg_dump, but that seems pointlessly convoluted and open to errors with potential data loss.
According to stable/postgresql helm chart, initdbScripts is a dictionary of init script names which are multi-line variables:
## initdb scripts
## Specify a dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
#   my_init_script.sh: |
#     #!/bin/sh
#     echo "Do something."
Let's assume that we have the following init.sql script:
CREATE USER helm;
CREATE DATABASE helm;
GRANT ALL PRIVILEGES ON DATABASE helm TO helm;
When injecting multi-line text into values, we need to deal with indentation in YAML.
For this particular case it is:
helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set initdbScripts."init\.sql"="CREATE USER helm;
CREATE DATABASE helm;
GRANT ALL PRIVILEGES ON DATABASE helm TO helm;" \
--set service.type=LoadBalancer
There is some explanation to above example:
- If the script's name contains a ".", it should be escaped, like "init\.sql".
- The script's content is in double quotes because it is a multi-line string variable.
Here's what @nickgryg's answer looks like with values.yaml instead of command line switches.
primary:
initdb:
scripts:
init.sql: |
CREATE USER helm;
CREATE DATABASE helm;
GRANT ALL PRIVILEGES ON DATABASE helm TO helm;