Docker is not principally a developer tool, and the main goal of Compose is not to spin up a developer environment.
In fact, Docker has a couple of core features that seem contrary to using it as a developer environment. Each container has an isolated filesystem, so the container can't normally see code on the host system, and the host system can't see tools that are only installed in the container. Furthermore, a container is based on an immutable image: you can't normally change the code a container is running without rebuilding the image and recreating the container. This is a familiar workflow for developers using compiled languages (C++, Java, Go, Rust), where even without Docker you still need to recompile and restart the application after every change.
A good example of this more typical immutable-image setup is running a database container. You'll very frequently run
docker run -d -p 5432:5432 -v pgdata:/var/lib/postgresql/data postgres:14
but to run this you do not need to download PostgreSQL's source code, and in normal use you can interact with it entirely through the published port. The image itself does not contain any source code or build tools.
On top of this, you can use Compose to assemble larger applications out of multiple containers; for example:
version: '3.8'
volumes:
  pgdata:
services:
  db:
    image: postgres:14
    ports: ['5432:5432']
    volumes: ['pgdata:/var/lib/postgresql/data']
  app:
    image: registry.example.com/myapp:${MYAPP_TAG:-latest}
    ports: ['8000:8000']
    depends_on: [db]
    environment: {PGHOST: db}
Again, this setup doesn't depend on having any of the source code available; so long as you have a prebuilt image you can just run this.
If you're using an interpreted language, it is possible to inject your local source code over the code in the image using a bind mount; the container then runs the code from your host system. You see this fairly often for Node-based applications:
services:
  app:
    build: .
    ports: ['3000:3000']
    volumes:
      - .:/app            # replace everything in the image with local code
      - /app/node_modules # hack: an anonymous volume hides the image's node_modules
However, even with this, you can't directly interact with the tools inside the container. I see several SO questions about wanting to use a Docker-based setup instead of host-based tools, and you sort of can, provided you don't mind wrapping everything in a docker invocation of some sort:
# doesn't work if yarn isn't installed locally
yarn add somelib
# uses the yarn in the `node` image, but long-winded
docker-compose run app \
yarn add somelib
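If the long docker-compose invocation gets tedious, one common workaround is a tiny shell wrapper. A minimal sketch (the service name `app` matches the Compose file above; the function name `cyarn` is just an example):

```shell
# Run yarn inside the Compose `app` service instead of requiring a host yarn.
# `--rm` cleans up the one-off container after the command exits.
cyarn() {
  docker-compose run --rm app yarn "$@"
}

# usage: cyarn add somelib
```

This keeps the typing cost down, but you're still paying container startup time on every command.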
This does lead to a style of Docker image that's just a collection of tools without any particular application in it. The unmodified golang or node images can be used this way, for example. You might plug these into a CI tool like Jenkins that knows how to do all of the bind mounts and make it look like the container tools are available in the context of a normal pipeline working on the repository being built.
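As a sketch of that CI pattern, a declarative Jenkins pipeline can declare a tool image as its agent; Jenkins checks out the repository, bind-mounts the workspace into the container, and runs each step inside it (the image tag and commands here are illustrative, not from the original text):

```groovy
// Jenkinsfile: every stage runs inside an unmodified node:18 container,
// with the checked-out repository bind-mounted as the working directory.
pipeline {
    agent {
        docker { image 'node:18' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'yarn install'
                sh 'yarn test'
            }
        }
    }
}
```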
Dev containers are not unlike this. Specifically, if you're using Visual Studio Code, they let you use tools from containers to work on the applications you're developing. You can start multiple containers from a devcontainer.json file, but that's not necessarily the primary use case; often the expectation is that you'll have a container-of-tools rather than embedding the application in the image. And to my knowledge the workflow is tied to VS Code and not necessarily usable in other contexts, whereas Compose or plain Docker can be reused for your production deployments too.
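For reference, a minimal container-of-tools devcontainer.json can be as small as this (the image and port are just examples; `image`, `forwardPorts`, and `postCreateCommand` are standard devcontainer.json properties):

```json
{
  "name": "node-tools",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:18",
  "forwardPorts": [3000],
  "postCreateCommand": "yarn install"
}
```

Note that there's no application or CMD here at all: the editor mounts your source tree into the container and keeps it running for you.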
I was wondering about dev containers today. I have been using them on and off for a few years now for personal stuff, and they work pretty well. Integration with VS Code is pretty good too, as a Microsoft-backed spec, but I have had some stuff break on me in VSCodium.
I was wondering if they have genuine widespread adoption, especially in professional settings, or if they are somewhat relegated to obscurity. The spec has ~4000 GitHub stars, which is a lot, but not as much as I would expect for something that could be relevant to every dev, especially if you are bought into the Microsoft development stack (Azure DevOps, GitHub, Visual Studio, etc.).
So do you guys use these? I am always going back and forth on just rolling my own containers, but some of the built-in stuff in VS Code is great for quickly rolling these. I would be interested to hear what other people do.
Hi everyone,
I want to build a devcontainer so that we have a standard development environment for our team.
In our project, we are using an Nx monorepo that contains two apps: a NextJS frontend and a NestJS backend. We are also intending to use a Postgres database.
We intend to have everything deployed as containers in the end, using Docker and Docker Compose to set up our services. But we would also like to use Docker while we are developing: for example, we want to test with dummy data from a Postgres DB running in a container, and we want a sense of what the production environment is like while we are still in development.
I would also need to set up a script so that once I run Docker Compose, Nx watches for any changes and triggers a rebuild of the respective Docker container so that the containers have the latest changes, or perhaps I would use a shared volume.
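One option for the rebuild-on-change idea, instead of a hand-rolled script: newer versions of Docker Compose support a `develop`/`watch` section, so `docker compose up --watch` can sync files into a running container or rebuild its image when sources change. A sketch, assuming a service named `backend` and Nx-style paths (both are guesses about your layout):

```yaml
services:
  backend:
    build: .
    develop:
      watch:
        # sync: copy changed source files into the running container
        # (pairs well with a framework's own hot reload, e.g. NestJS watch mode)
        - action: sync
          path: ./apps/backend/src
          target: /app/apps/backend/src
        # rebuild: rebuild the image and recreate the container
        # when dependency manifests change
        - action: rebuild
          path: package.json
```

The alternative you mention, a shared (bind-mounted) volume plus the framework's own watch mode, avoids image rebuilds entirely and is usually faster for interpreted code.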
I was going over the given templates in VS Code, and I noticed two options: "Docker in Docker" and "Docker outside of Docker."
I get the idea of what each of them mean, but I want to ask which one is better for my use case? And I also want to ask if the approach I am using for hot-reloading my Docker containers is fine or what does the community recommend?
Thanks! :)
I am getting started with devcontainers on a project that has two containers: (1) a Node app and (2) Postgres. I have a docker-compose file working for the setup and am now trying devcontainers. When I run the docker-compose file from within VS Code, I cannot manage it with the docker-compose CLI the way I usually do; I don't seem to have as much control over things. But I can run the containers using the docker-compose CLI and just use VS Code to attach to a running container.
What are the pros/cons of these two setups? (1) devcontainer running docker-compose vs (2) docker-compose plus VS Code attaching to a running container?
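For what it's worth, option (1) doesn't require abandoning your Compose file: devcontainer.json has standard `dockerComposeFile`, `service`, and `workspaceFolder` properties that point VS Code at an existing setup. A sketch, assuming the Node service is named `app` and the compose file sits one directory up (both assumptions):

```json
{
  "name": "node-app",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose"
}
```

With this layout the same docker-compose.yml keeps working from the CLI, and `shutdownAction` controls whether closing the editor also stops the Compose stack.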
Hello everyone!
Sorry to bother you but I have some questions regarding Dockerfiles and devcontainers.
Usually, when you are writing a Dockerfile, from my understanding as a hobbyist, your goal is to replicate each necessary step for your program/app/whatever to run, so you end up using the CMD instruction or some other way to 'execute' whatever you need.
For example, a simple Python Dockerfile might be:
FROM python:whatever
COPY main.py main.py
CMD ["python3.12", "main.py"]
Now, if I were to build a devcontainer, what would be my goal? Should I aim to build the environment for the language and dependencies that I'm using, and that's all? Is there any 'last' command that I need to include?
At the moment I'm trying to build a Dockerfile for a devcontainer with Scala and Spark, and I'm basically replicating the installation steps from internet posts, but I don't know if this is a good approach for a devcontainer or if it needs something else. I've found some posts where, for my use case, they expose the Spark Web UI port, but is there anything else to consider when tinkering with the Dockerfile?
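To make the shape of the question concrete, a dev-oriented Dockerfile generally stops at the environment: toolchain in, application code and final CMD out. A sketch for the Scala/Spark case (the base image and packages are assumptions; your actual sbt/Spark install steps would go where the comment indicates):

```dockerfile
# Environment only: no application code is copied in, and no CMD runs an app.
# The devcontainer tooling keeps the container alive itself and attaches to it,
# so a 'last' command is not needed.
FROM eclipse-temurin:11-jdk

RUN apt-get update && apt-get install -y curl git

# ...your sbt and Spark installation steps from the posts you're following...

# Spark's Web UI default port, so http://localhost:4040 works from the host
EXPOSE 4040
```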
How do you guys build your devcontainers?
Thanks!