Actually you might want to try this: docker context create my-context --description "some description" --docker "host=tcp://my.remote.docker:2376,ca=C:\Users.docker\RootCA.pem,cert=C:\Users.docker\cert.pem,key=C:\Users.docker\key.pem" The help text of docker context create --help s… Answer from meyay on forums.docker.com
🌐
Docker Docs
docs.docker.com › manuals › docker engine › docker contexts
Docker contexts | Docker Docs
As an example, a single Docker client might be configured with two contexts: A default context running locally · A remote, shared context · Once these contexts are configured, you can use the docker context use <context-name> command to switch between them.
🌐
The New Stack
thenewstack.io › home › connect to remote docker machines with docker context
Connect to Remote Docker Machines with Docker Context - The New Stack
November 12, 2022 - One such tool is Docker Context. What this does is make it possible to export and import contexts from different machines that have Docker installed. The best feature found with Context is the ability to connect to remote Docker instances.
🌐
Medium
medium.com › @rajaravivarman › extending-docker-using-docker-context-to-manage-remote-containers-66b8abc5d245
Extending Docker — Using Docker Context to Manage Remote Containers | by Raja Ravi Varman | Medium
November 23, 2024 - Docker installed on both local and remote machines ... This creates a new context named “my-remote-server” that connects to your remote machine via SSH.
🌐
Docker
docker.com › blog › how-to-deploy-on-remote-docker-hosts-with-docker-compose
How to deploy on remote Docker hosts with docker-compose | Docker
April 23, 2021 - This is a better approach than the manual deployment. But it gets quite annoying as it requires to set/export the remote host endpoint on every application change or host change. $ docker context ls NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR …
🌐
OneUptime
oneuptime.com › home › blog › how to set up docker contexts for remote container management
How to Set Up Docker Contexts for Remote Container Management
January 16, 2026 -

# Export context
docker context export staging > staging-context.tar
# Import on another machine
docker context import staging staging-context.tar

# Generate dedicated key for Docker access
ssh-keygen -t ed25519 -f ~/.ssh/docker_remote -C "docker-remote-access"
# On remote, restrict key to Docker commands only
# In ~/.ssh/authorized_keys:
# command="/usr/bin/docker system dial-stdio" ssh-ed25519 AAAA...
🌐
GitHub
gist.github.com › dnaprawa › d3cfd6e444891c84846e099157fd51ef
Using Docker on remote Docker Host with docker context · GitHub
In order to use remote Docker host, as a prerequisite you need SSH enabled (required login using SSH keys).
🌐
DEV Community
dev.to › cod3mason › docker-remote-context-via-ssh-over-proxy-268l
Docker Remote Context via SSH over Proxy - DEV Community
December 30, 2025 - By configuring SSH to use a proxy and defining a Docker context that targets the remote host via SSH, Docker commands executed locally are transparently forwarded to the remote Docker daemon.
🌐
Reddit
reddit.com › r/docker › speeding up docker contexts with mounted sockets
r/docker on Reddit: Speeding up docker contexts with mounted sockets
July 18, 2023 -

Hey there! I wrote a blog post about speeding up docker context (remote docker development), but this sub doesn't allow me to post links directly. So you can either go to the blog post or read it below.

https://krystex.github.io/posts/faster-docker-contexts/

So if you are like me, someone who likes to develop against a remote docker host, you have likely used docker contexts to connect to a remote endpoint:

docker context create remote --docker "host=ssh://user@myhost.org"
docker context use remote

With the docker context use command, your remote endpoint is now the default docker endpoint on your system. All commands you execute on your machine are actually executed on your specified ssh host. You can take a look at your contexts with the command docker context ls.
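If you don't want to make the remote endpoint your default, newer Docker CLI versions also let you target a context per command, either with the global --context flag or the DOCKER_CONTEXT environment variable (both assume the "remote" context created above):

```shell
# One-off command against the remote context
docker --context remote ps

# Same, via environment variable
DOCKER_CONTEXT=remote docker ps
```

This keeps your default context local while still letting you poke at the remote host.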

However, depending on your internet connection, you might notice that a command is always a little slow. This happens because on every docker command, a new SSH session to your server is started.

Tired of this, I wrote a small script to make it faster:

#!/bin/bash
MYHOST=yourwebsite.org
SSHREMOTE=user@$MYHOST
SOCKET=/tmp/docker.remote.sock

# Clean up the proxied socket
function onexit() {
  echo "Deleting old socket ..."
  rm -f "$SOCKET"
}
# Clean up on exit (Ctrl-C included)
trap onexit EXIT
# Delete a stale socket if one exists (-S tests for a socket file)
if [ -S "$SOCKET" ]; then
  onexit
fi

echo "Proxying docker from '$MYHOST' on '$SOCKET' ..."
# Forward the remote docker socket to your host
ssh -nNT -L "$SOCKET:/var/run/docker.sock" "$SSHREMOTE"

With this command, you take your remote docker socket (/var/run/docker.sock) and expose it on your local system as a temporary socket file (/tmp/docker.remote.sock). You can configure the remote host and the local socket location at the top. The middle lines just clean up a stale socket from a previous run. The last line does the actual forwarding. Let's break down its flags:

  • -n: redirects stdin from /dev/null, so ssh can run in the background

  • -N: don't execute a remote command, just forward the socket

  • -T: don't start a terminal session, just forward the socket

  • -L <localsocket>:<remotesocket>: forward a remote socket to your host

The advantage of this command: it holds a long-running connection to your server of choice. I execute this script in a tmux session so it's always running in the background.
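If tmux isn't your thing, a systemd user service can keep the tunnel alive and restart it when it drops. This is only a sketch: the unit name, host, and socket path are assumptions you'd adapt, e.g. in ~/.config/systemd/user/docker-tunnel.service:

```ini
[Unit]
Description=Forward remote Docker socket over SSH

[Service]
# Remove a stale socket before (re)connecting
ExecStartPre=/bin/rm -f /tmp/docker.remote.sock
ExecStart=/usr/bin/ssh -nNT -L /tmp/docker.remote.sock:/var/run/docker.sock user@yourwebsite.org
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now docker-tunnel.service.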

Now you just have to create a context for your forwarded socket:

docker context rm remote
docker context create remote --docker "host=unix:///tmp/docker.remote.sock"
docker context use remote

That's it!

Now you can test your connection:

docker ps

You should see a noticeable improvement if you don't have the best server or internet connection.

🌐
Visual Studio Code
code.visualstudio.com › remote › advancedcontainers › develop-remote-host
Develop on a remote Docker host
November 3, 2021 - The Container Tools extension comes ... or DOCKER_CONTEXT can be set that are also honored by the Dev Containers extension. Note: The above settings are only visible when the Container Tools extension is installed. Without the Container Tools extension, Dev Containers will use the current context. To convert an existing or pre-defined, local devcontainer.json into a remote one, follow ...
🌐
GitHub
github.com › cssnr › docker-context-action
GitHub - cssnr/docker-context-action: Set up a Docker Remote Context over SSH using Password or Keyfile Authentication
steps:
  - name: 'Docker Context'
    uses: cssnr/docker-context-action@v1
    with:
      host: ${{ secrets.DOCKER_HOST }}
      user: ${{ secrets.DOCKER_USER }}
      port: 22  # 22 is the default value - optional
      pass: ${{ secrets.DOCKER_PASS }}  # or ssh_key - optional
      ssh_key: ${{ secrets.DOCKER_SSH_KEY }}  # or pass - optional
  - name: 'Inspect Docker'
    run: |
      docker context ls
      docker context inspect
      docker info
      docker ps

Make sure to review the Inputs. If you only need to deploy a swarm or compose stack, use cssnr/stack-deploy-action. Portainer users can deploy directly to Portainer with cssnr/portainer-stack-deploy-action. SSH is configured using a keyfile or password in src/ssh.sh; the remote docker context is created and used in src/context.sh.
Author   cssnr
🌐
Visual Studio Code
code.visualstudio.com › docs › containers › ssh
Connect to remote Docker over SSH
November 3, 2021 - Create a Docker context that points to the remote machine running Docker. Use ssh://username@host:port as the Docker endpoint (replace "host" with your remote machine name, or the remote machine IP address).
🌐
Ruan Bekker's Blog
ruan.dev › blog › 2022 › 07 › 14 › remote-builds-with-docker-contexts
Remote Builds with Docker Contexts | Ruan Bekker's Blog
July 14, 2022 - The same way can be used to do ... locally, but when you build, you point the context to the remote host, and your context (dockerfile and files referenced in your dockerfile) will be sent to the remote host....
🌐
Play-with-docker
birthday.play-with-docker.com › context
Docker Context
In this tutorial we will learn about the Context feature of the Docker CLI. The feature allows you to connect to remote docker instances.
🌐
Collabnix
collabnix.com › how-to-connect-to-remote-docker-using-docker-context-cli
How to Connect to Remote Docker using docker context CLI - Collabnix
May 4, 2022 - Docker will use the DOCKER_HOST variable to identify a remote host to connect to. Let’s compare what happens when listing containers locally and on a remote host. ... This will list the containers running on the target node. ... With the release of 19.03, docker now supports managing the ...
🌐
GitHub
gist.github.com › kekru › 4e6d49b4290a4eebc7b597c07eaf61f2
Connect to another host with your docker client, without modifying your local Docker installation · GitHub
you will be able to connect to remote instance and see running (or other commands) containers: docker --context=ssh-box ps
Top answer
1 of 3
5

Docker contexts are kind of strange and, in my opinion, seem a bit half-baked. I'm aware that this is an old question, but after spending my afternoon tearing out my hair trying to get this to work for myself, I just want to put this info out here for everyone.

Interpolation is always local

docker compose (and docker stack, etc.) is just a fancy frontend to the underlying backend API (the docker.sock protocol). This allows docker-compose to have a bunch of fancy features that docker engine itself doesn't have. One of those is the ability to use relative paths or expand variables like ${HOME} as you do in your example.

When you use docker-compose in a context, all that happens is that after docker-compose has done all its fancy magic locally on your machine, it sends the low-level API calls over ssh to the remote engine. This means that whenever you use a fancy magic feature like putting $HOME or ./ in a path, docker compose expands those locally using your system's HOME, i.e. your own local home directory, and then sends those requests to the remote engine, where the deployment subsequently fails because those paths don't exist there.
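The client-side expansion is easy to reproduce in a plain shell; this is just an illustration of the principle (the path is made up), not compose itself:

```shell
# ${HOME} is substituted on the machine where the command runs,
# i.e. your local home directory, before anything reaches the remote:
SRC="${HOME}/run/nginx.conf"
echo "$SRC"
```

Whatever this prints on your laptop is exactly the path the remote engine will be asked to bind-mount.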

However, using absolute paths works just fine. If you use /home/user/run/nginx.conf instead of ${HOME}/run/nginx.conf, then, in my experience, everything should be fine.
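In compose terms the difference looks like this (a hypothetical fragment; the nginx image and paths are placeholders):

```yaml
services:
  nginx:
    image: nginx
    volumes:
      # Breaks over a remote context: ${HOME} expands to your LOCAL home
      # - ${HOME}/run/nginx.conf:/etc/nginx/nginx.conf:ro
      # Works: an absolute path that exists on the REMOTE host
      - /home/user/run/nginx.conf:/etc/nginx/nginx.conf:ro
```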

Use named volumes instead

In your question, the file you were trying to mount was a config file, which means that you probably needed full manual control over it, and that's a perfectly valid reason to use a bind mount.

However, if, for example, you're running a database, you won't need to manually edit the database's data directory. As long as the database itself can interact with it, it's fine. In this case, you should use a regular volume instead. To do this, add a volumes: directive to the end of the file and list your needed volume names:

services:
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - psql-data:/var/lib/postgresql/data

volumes:
  psql-data:

If you're using docker swarm, please notice that volumes are not replicated by default, but there are third-party drivers out there and AFAIK the NFS driver option may also be of use, but I can't seem to find much documentation about it.

Use connection persistence

If you, as I did, have a strange issue where running any docker command via ssh takes an obscene amount of time, the solution seems to be setting up ssh to persist your connections. To do this, create a file ~/.ssh/config and add the following lines:

Host example.com
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%C
  ControlPersist 600

Replace example.com with the hostname or IP address of your remote host.

If you get any errors mentioning something like "can't bind to path" or "file not found", you may need to create the ~/.ssh/sockets directory:

mkdir ~/.ssh/sockets
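Once a master connection exists (after your first ssh or docker command against the host), ssh's control commands let you inspect or tear it down; example.com stands in for your host as above:

```shell
# Is the shared master connection still alive?
ssh -O check example.com
# Close it explicitly:
ssh -O exit example.com
```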

This is by far no conclusive list of the idiosyncrasies of remote docker contexts, but hopefully this can save some people some time.

2 of 3
1

You need to explicitly set DOCKER_HOST to access your remote docker host from docker-compose.

From the compose documentation

Compose CLI environment variables

DOCKER_HOST

Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.

In your given case, docker context use remote sets the current context to remote only for the docker command; docker-compose still uses your default (local) context. For docker-compose to pick up the remote endpoint, you must pass it via the DOCKER_HOST environment variable.

Example:

$ export DOCKER_HOST=ssh://[email protected]
$ docker-compose up
🌐
Mikesir87
blog.mikesir87.io › 2019 › 08 › using-ssh-connections-in-docker-contexts
Using SSH Connections in Docker Contexts – mikesir87's blog
To connect over SSH, create the context by doing the following: docker context create ssh-box --docker "host=ssh://user@my-box"
🌐
Qmacro
qmacro.org › blog › posts › 2024 › 08 › 24 › using-lazydocker-with-ssh-based-remote-contexts
Using lazydocker with SSH-based remote contexts - DJ Adams
August 24, 2024 -

NAME        DESCRIPTION                               DOCKER ENDPOINT
default     Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
docker *    Docker Host on PVE LXC                    ssh://dj@docker
homeops     Docker Host on homeops                    ssh://dj@homeops
kkhw42xrfy  M2 Air                                    ssh://user@kkhw42xrfy
synology    Docker Host on Synology NAS               ssh://dj@synology

If you're interested in finding out how to define and use these contexts, see the post Remote access to Docker on my Synology NAS.
🌐
TechSparx
techsparx.com › software-development › docker › damp › remote-control.html
Using SSH to remotely control a Docker Engine or Docker Swarm in two easy steps
Starting in Docker 18.09 it became possible to create a Docker Context with an SSH URL. Using this, the docker command on your laptop can interact with the Docker API of a remote Docker instance, over SSH, without opening a public Docker TCP port.