In the volumes section you specify a path to a directory, not a path to a file.
Change
volumes:
  - ./postgresql/init.sql:/docker-entrypoint-initdb.d/init.sql
to
volumes:
  - ./postgresql:/docker-entrypoint-initdb.d
or to
volumes:
  - <path to dir with init.sql>:/docker-entrypoint-initdb.d
where <path to dir with init.sql> is the proper local path.
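For example, a minimal compose service mounting that directory could look like this (the service name, image tag, and password below are placeholders, not taken from the question):
services:
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=example   # placeholder password
    volumes:
      # mount the whole directory; every *.sql / *.sh in it runs on first init
      - ./postgresql:/docker-entrypoint-initdb.d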
Looks like the following things need to be double-checked.
ports:
- 5444:5444
The default port number is 5432. Please confirm if 5444 is the correct port number.
CREATE DATABASE my_datbase;
Please confirm if the correct name for the database is my_database.
Actually, the container I created on my local machine seems to work well after fixing the items above.
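For illustration, the corrected bits might look like this, assuming the intent was to expose Postgres on host port 5444 while keeping the container's default 5432 (that intent is an assumption):
ports:
  - "5444:5432"
and in the init script:
CREATE DATABASE my_database;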
I could reproduce your issue by creating an actual folder:
mkdir ./scripts/postgres/0-schema.sql
docker-compose up
Then I get exactly your error:
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-schema.sql
postgres | psql:/docker-entrypoint-initdb.d/0-schema.sql:0: could not read from input file: Is a directory
Double-check your local setup, because you might have a folder called 0-schema.sql in your "scripts/postgres" folder. Check this by running the following in your project folder:
ls -lart ./scripts/postgres
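If that is what happened, a possible fix is to remove the accidental directory and recreate it as a file (a sketch; the table definition below is only an example), then recreate the data volume so the init scripts run again:
rm -rf ./scripts/postgres/0-schema.sql
echo 'CREATE TABLE example (id SERIAL PRIMARY KEY);' > ./scripts/postgres/0-schema.sql
docker-compose down -v   # drop the old data volume so the init scripts run again
docker-compose up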
I faced a similar issue today, and this is how I solved it.
Run docker-compose -f <file-name.yml> config to view the actual paths being mounted.
Check if the scripts are accessible from those paths. If you are running a second Docker inside Docker (for example, you are building something using Jenkins), use its bash to see if the files are accessible.
In my case, they were not accessible. I changed the path and it worked.
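As a sketch, assuming the compose file is docker-compose.yml and the service is named db:
docker-compose -f docker-compose.yml config          # shows the resolved bind-mount paths
docker-compose exec db ls -la /docker-entrypoint-initdb.d   # check the scripts are visible inside the container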
Had the same problem with postgres 11.
Some points that helped me:
- run:
docker-compose rm
docker-compose build
docker-compose up
- The obvious: don't run compose in detached mode. You want to see the logs.
After adding the step docker-compose rm to the mix it worked, finally.
This is how I use postgres on my projects and preload the database.
file: docker-compose.yml
db:
  container_name: db_service
  build:
    context: .
    dockerfile: ./Dockerfile.postgres
  ports:
    - "5432:5432"
  volumes:
    - /var/lib/postgresql/data/
This Dockerfile loads the file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert the binary dump to a plain-text dump if one is present
# (the "|| true" keeps the build from failing when the file is missing)
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
If you need to retry loading the dump, you can remove the current database with the command:
docker-compose rm db
Then you can run docker-compose up to retry loading the database.
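Since the dump is baked into the image at build time, a full retry could look like this (a sketch, assuming the service is named db as above):
docker-compose rm -f db      # remove the old container
docker-compose build db      # rebuild the image so the dump is copied in again
docker-compose up db         # the entrypoint then loads /docker-entrypoint-initdb.d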
Initialize Postgres container with Data
Create a docker-compose.yml
services:
  postgress-postgresql:
    image: postgres:bullseye
    container_name: postgres
    volumes:
      - postgresql_data:/var/lib/postgresql/data/
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_USER=postgress
      - POSTGRES_PASSWORD=MyStrongPassword123
    ports:
      - 5432:5432
    networks:
      - postgres
networks:
  postgres:
volumes:
  postgresql_data:
Create an init.sql with the script:
CREATE USER vbv WITH PASSWORD 'vbv';
CREATE DATABASE vbvdb;
GRANT ALL PRIVILEGES ON DATABASE vbvdb TO vbv;
\connect vbvdb;
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    position VARCHAR(50),
    salary DECIMAL(10, 2)
);
INSERT INTO employees (name, position, salary) VALUES
('Bhuvi', 'Manager', 675000.00),
('Vibhu', 'Developer', 555000.00),
('Rudra', 'Analyst', 460000.00);
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO vbv;
Run it with docker-compose up -d.
Connect to the container with docker exec -it postgres bash and use the psql client to check that the data exists.
➜ ~ docker exec -it postgres bash
root@96a599122941:/# psql -U vbv -d vbvdb
psql (17.0 (Debian 17.0-1.pgdg110+1))
Type "help" for help.
vbvdb=> SELECT * FROM employees;
 id | name  | position  |  salary
----+-------+-----------+-----------
  1 | Bhuvi | Manager   | 675000.00
  2 | Vibhu | Developer | 555000.00
  3 | Rudra | Analyst   | 460000.00
(3 rows)
vbvdb=>
You can also validate that docker logs -f postgres contains the following.
2024-10-04 08:04:03.810 UTC [48] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.sql
CREATE ROLE
CREATE DATABASE
GRANT
You are now connected to database "vbvdb" as user "postgress".
CREATE TABLE
INSERT 0 3
GRANT
I added the activity logs to the gist https://gist.github.com/jinnabaalu/89bd8eeba3b8845cf337b85a807748f1
So from this Dockerfile I assume the user is postgres.
Try with this Dockerfile
FROM postgres:11.5
USER postgres
RUN whoami
ADD ./scripts/init.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
Update:
Seems like the file is not owned by the Postgres user.
Try to set the ownership:
ADD ./scripts/init.sql /docker-entrypoint-initdb.d/
RUN chown postgres:postgres /docker-entrypoint-initdb.d/init.sql
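To verify the result, something along these lines can be used (the image tag my-postgres is only an example):
docker build -t my-postgres .
# override the entrypoint so we can inspect the file ownership inside the image
docker run --rm --entrypoint /bin/bash my-postgres -c 'ls -l /docker-entrypoint-initdb.d/'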
For me the problem was on my machine: SELinux access control was enabled, which did not allow containers to access the files.
Solution: disable SELinux:
echo SELINUX=disabled > /etc/selinux/config
echo SELINUXTYPE=targeted >> /etc/selinux/config
setenforce 0
From this answer:
I had a similar problem when using the ADD command.
When using ADD (e.g. to download a file) the default chmod value is 711. When using the COPY command, the permissions will match the host's permissions on the file you copy from. The solution is to set the permissions before copying, or change them in your Dockerfile after they've been copied.
It looks like there will finally be a COPY --chmod=775 flag available in the upcoming Docker 20, which will make this easier.
https://github.com/moby/moby/issues/34819
When the SQL file in /docker-entrypoint-initdb.d/ has permissions of 775, the file is run correctly.
You can test this within the image (override the entrypoint to /bin/bash) by running: docker-entrypoint.sh postgres
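As a sketch, assuming the script lives at ./scripts/init.sql, the Dockerfile could set the permissions explicitly:
COPY ./scripts/init.sql /docker-entrypoint-initdb.d/
RUN chmod 775 /docker-entrypoint-initdb.d/init.sql
# On newer Docker versions with BuildKit, the same can be done in one step:
# COPY --chmod=775 ./scripts/init.sql /docker-entrypoint-initdb.d/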
You really have two questions here. First, you have a corrupted tar file. I'd blame that on your usage of Docker. If you run tar -tf ... on the Docker side before moving the file, was it corrupted there?
Second, you don't want to restore data, you want to merge it. pg_dump/pg_restore doesn't do that. You will have to figure out what you want to do (overwrite, aggregate, assign new PKs and just insert them as new rows) and then implement that. You likely want to use COPY or \copy to get the data out of the first system so the second one can see it. After that, it is really up to you what you do with it. You can implement the merging in python or Perl or SQL (after \copy into staging tables) or whatever your favorite tool is.
@jjanes was right. Docker was causing the errors.
Docker was causing the error. The file that I had created using Docker was corrupted. Once I exported it straight from Docker using the shell and ran the command:
tar -tf backup.tar
the file was reported as valid.
Moreover, @jjanes was also right about the second case. It was pointless to use pg_dump. In my case, the best solution was to use the built-in functions of Django:
dumpdata
loaddata
It has worked great!
Postgres only initializes the database if no database is found when the container starts. Since you have a volume mapping on the database directory, chances are that a database already exists.
If you delete the db_data volume and start the container, postgres will see that there isn't a database and then it'll initialize one for you using the scripts in docker-entrypoint-initdb.d.
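In practice that could look roughly like this (a sketch; the actual volume name is prefixed with your compose project name):
docker-compose down                 # stop the containers
docker volume ls | grep db_data     # find the real volume name
docker volume rm <project>_db_data  # or simply: docker-compose down -v
docker-compose up                   # postgres now re-runs /docker-entrypoint-initdb.d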
The accepted answer was correct (when it was written).
There was a subsequent discussion on GitHub with the maintainer of the postgres Docker image about supporting a mechanism similar to the MySQL /always-init.d/ approach.
The link to that discussion: https://github.com/docker-library/postgres/pull/496
The solution with Docker is to provide a custom entrypoint. There is a bit of a bug in the version shared there, so I'm posting a more updated version here in the hope that others will find it useful:
#!/usr/bin/env bash
## copied from: https://github.com/docker-library/postgres/pull/496#issue-358838955
set -Eeo pipefail

echo " custom-entry-point"

# Example using the functions of the postgres entrypoint to customize startup to always run files in /always-initdb.d/
source "$(which docker-entrypoint.sh)"

docker_setup_env
docker_create_db_directories

# assumption: we are already running as the owner of PGDATA
# This is needed if the container is started as `root`
#if [ "$1" = 'postgres' ] && [ "$(id -u)" = '0' ]; then
if [ "$(id -u)" = '0' ]; then
    exec gosu postgres "$BASH_SOURCE" "$@"
fi

if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
    echo " db is missing"
    docker_verify_minimum_env
    docker_init_database_dir
    pg_setup_hba_conf
    # only required for '--auth[-local]=md5' on POSTGRES_INITDB_ARGS
    export PGPASSWORD="${PGPASSWORD:-$POSTGRES_PASSWORD}"
    docker_temp_server_start "$@" -c max_locks_per_transaction=256
    docker_setup_db
    docker_process_init_files /docker-entrypoint-initdb.d/*
    docker_temp_server_stop
else
    echo " db already exists"
    docker_temp_server_start "$@"
    docker_process_init_files /always-initdb.d/*
    docker_temp_server_stop
fi

echo " .. starting!"
exec postgres "$@"
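For context, a sketch of how such a custom entrypoint might be wired into an image (the base tag and file names here are assumptions, not part of the linked PR):
FROM postgres:13
# the script above, saved next to the Dockerfile
COPY custom-entrypoint.sh /usr/local/bin/custom-entrypoint.sh
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
# scripts that should run on EVERY start, not only on first init
COPY always-initdb.d/ /always-initdb.d/
ENTRYPOINT ["custom-entrypoint.sh"]
CMD ["postgres"]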
Solution
You should clear data_volume BEFORE running the container, and then the SQL files will be executed.
The data_volume volume can be removed with the command:
docker volume rm data_volume
The root cause of your problem can be found in docker-entrypoint.sh. When you run a MySQL container, it checks whether the MySQL directory /var/lib/mysql exists or not.
If the directory doesn't exist (i.e. you are running it for the first time), it will run your SQL files.
if [ ! -d "$DATADIR/mysql" ]; then
    # ... some other logic here ...
    for f in /docker-entrypoint-initdb.d/*; do
        case "$f" in
            *.sh)     echo "$0: running $f"; . "$f" ;;
            *.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
            *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
            *)        echo "$0: ignoring $f" ;;
        esac
        echo
    done
fi
You can get more details from MySQL docker-entrypoint.sh source file.
So I had the same issue for hours, and then decided to look into docker-entrypoint.sh. It turns out that the script checks for $DATADIR/mysql, typically /var/lib/mysql, and skips the rest of the code if the datadir exists, including the docker-entrypoint-initdb.d scripts.
So what I did was make a simple init.sh file to remove the datadir and then start Docker.
docker-compose.yml:
volumes:
  - ./docker/mysql/scripts:/docker-entrypoint-initdb.d
  - ./mysql_data:/var/lib/mysql
init.sh:
#!/bin/bash
rm -rf mysql_data
docker-compose up --force-recreate
And of course add -d to docker-compose once I see it works as expected.