Show HN: Unregistry – “docker push” directly to servers without a registry

2025-06-18 23:17 · 655 points · 151 comments · github.com



▸ Push docker images directly to remote servers without an external registry ◂


Unregistry is a lightweight container image registry that stores and serves images directly from your Docker daemon's storage.

The included docker pussh command (extra 's' for SSH) lets you push images straight to remote Docker servers over SSH. It transfers only the missing layers, making it fast and efficient.

[Demo video: docker-pussh-demo.mp4]

You've built a Docker image locally. Now you need it on your server. Your options suck:

  • Docker Hub / GitHub Container Registry - Your code is now public, or you're paying for private repos
  • Self-hosted registry - Another service to maintain, secure, and pay for storage
  • Save/Load - docker save | ssh | docker load transfers the entire image, even if 90% already exists on the server
  • Rebuild remotely - Wastes time and server resources. Plus now you're debugging why the build fails in production

You just want to move an image from A to B. Why is this so hard?

docker pussh myapp:latest user@server

That's it. Your image is on the remote server. No registry setup, no subscription, no intermediate storage, no exposed ports. Just a direct transfer of the missing layers over SSH.

Here's what happens under the hood:

  1. Establishes SSH tunnel to the remote server
  2. Starts a temporary unregistry container
  3. Forwards a random localhost port to the unregistry port over the tunnel
  4. Runs docker push to unregistry through the forwarded port, transferring only the layers that don't already exist remotely. The transferred image is instantly available to the remote Docker daemon
  5. Stops the unregistry container and closes the SSH tunnel

It's like rsync for Docker images — simple and efficient.
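The flow above can be approximated by hand. The following is a rough, illustrative sketch only: the real plugin picks a random free local port, handles sudo and cleanup automatically, and the port numbers and exact flags below are assumptions based on the local-registry example later in this README.

# 1. Start a temporary unregistry container on the remote host
ssh user@server docker run -d --name unregistry \
  -v /run/containerd/containerd.sock:/run/containerd/containerd.sock \
  -p 127.0.0.1:5000:5000 ghcr.io/psviderski/unregistry

# 2. Forward a local port to it over an SSH tunnel
ssh -f -N -L 55000:localhost:5000 user@server

# 3. Push through the tunnel; only the missing layers are transferred
docker tag myapp:latest localhost:55000/myapp:latest
docker push localhost:55000/myapp:latest

# 4. Clean up the temporary container (and stop the background ssh tunnel)
ssh user@server docker rm -f unregistry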

Note

Unregistry was created for Uncloud, a lightweight tool for deploying containers across multiple Docker hosts. We needed something simpler than a full registry but more efficient than save/load.

On macOS, install via Homebrew:

brew install psviderski/tap/docker-pussh

After installation, create a symlink so Docker picks up docker-pussh as a CLI plugin (enabling the docker pussh command):

mkdir -p ~/.docker/cli-plugins
ln -sf $(brew --prefix)/bin/docker-pussh ~/.docker/cli-plugins/docker-pussh

On Linux (or macOS without Homebrew), download the plugin script directly into the Docker CLI plugins directory:

mkdir -p ~/.docker/cli-plugins
# Download the latest version
curl -sSL https://raw.githubusercontent.com/psviderski/unregistry/main/docker-pussh \
  -o ~/.docker/cli-plugins/docker-pussh
# Make it executable
chmod +x ~/.docker/cli-plugins/docker-pussh

Windows is not currently supported, but you can try using WSL 2 with the above Linux instructions.

Push an image to a remote server. Make sure the SSH user has permission to run docker commands (either root, or a non-root user in the docker group). If sudo is required, the user must be able to run sudo docker without a password prompt.

docker pussh myapp:latest user@server.example.com

With SSH key authentication if the private key is not added to your SSH agent:

docker pussh myapp:latest ubuntu@192.168.1.100 -i ~/.ssh/id_rsa

Using a custom SSH port:

docker pussh myapp:latest user@server:2222

Push a specific platform of a multi-platform image. The local Docker daemon must use the containerd image store to support multi-platform images.

docker pussh myapp:latest user@server --platform linux/amd64

Build locally and push directly to your production servers. No middleman.

docker build --platform linux/amd64 -t myapp:1.2.3 .
docker pussh myapp:1.2.3 deploy@prod-server
ssh deploy@prod-server docker run -d myapp:1.2.3

Skip the registry complexity in your pipelines. Build and push directly to deployment targets.

- name: Build and deploy
  run: |
    docker build -t myapp:${{ github.sha }} .
    docker pussh myapp:${{ github.sha }} deploy@staging-server

Distribute images in isolated networks without exposing them to the internet.

docker pussh image:latest user@192.168.1.100

Requirements on your local machine:

  • Docker CLI with plugin support (Docker 19.03+)
  • OpenSSH client

Requirements on the remote server:

  • Docker installed and running
  • An SSH user with permission to run docker commands (root, or a non-root user in the docker group)
  • If sudo is required, the user must be able to run sudo docker without a password prompt
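Before the first push, it can be worth a quick sanity check that the SSH user can actually drive the remote Docker daemon. This is not part of the tool, just an illustrative one-liner:

ssh user@server.example.com docker info --format '{{ .ServerVersion }}'

If it prints a version without prompting for a password (or a sudo password), docker pussh should work.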

Tip

The remote Docker daemon works best with the containerd image store enabled. This allows unregistry to access images more efficiently.

Add the following configuration to /etc/docker/daemon.json on the remote server and restart the docker service:

{ "features": { "containerd-snapshotter": true
  }
}
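After editing the file, restart Docker and optionally check which image store is in use. The commands below assume a systemd-based server; the exact driver output varies by Docker version:

sudo systemctl restart docker
# With the containerd image store enabled, the driver status should mention containerd
docker info --format '{{ .DriverStatus }}'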

Sometimes you want a local registry without the overhead. Unregistry works great for this:

# Run unregistry locally and expose it on port 5000
docker run -d -p 5000:5000 --name unregistry \
  -v /run/containerd/containerd.sock:/run/containerd/containerd.sock \
  ghcr.io/psviderski/unregistry

# Use it like any registry
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest

Need custom SSH settings? Use the standard SSH config file:

# ~/.ssh/config
Host prod-server
  HostName server.example.com
  User deploy
  Port 2222
  IdentityFile ~/.ssh/deploy_key

# Now just use
docker pussh myapp:latest prod-server

Found a bug or have a feature idea? We'd love your help!

  • Spegel - P2P container image registry that inspired me to implement a registry that uses containerd image store as a backend.
  • Docker Distribution - the bulletproof Docker registry implementation that unregistry uses as a base.

Built with ❤️ by Pasha Sviderski who just wanted to deploy his images


Comments

  • By shykes 2025-06-19 23:37 · 1 reply

    Docker creator here. I love this. In my opinion the ideal design would have been:

    1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.

    2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.

    • By psviderski 2025-06-20 04:19

      Hey Solomon, thank you for sharing your thoughts, love your work!

      1. Yeah agreed, it's a bit of a mess that we have at least three different file system layouts for images and two image stores in the engine. I believe it's still not too late for Docker to achieve what you described without breaking the current model. Not sure if they care though, they're having hard times

      2. Hm, push-to-cluster deployment sounds clever. I'm definitely thinking about a distributed image store, e.g. embedding unregistry in every node so that they can pull and share images between each other. But triggering a deployment on push is something I need to think through. Thanks for the idea!

  • By richardc323 2025-06-19 20:10 · 1 reply

    I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

    [1]: https://github.com/richardcrichardc/docker2docker

    • By psviderski 2025-06-20 04:32 · 1 reply

      You're the OG! Hats off, mate.

      It's a bummer docker still doesn't have an API to explore image layers. I guess their plan is to eventually transition to the containerd image store as the default. Once we have the containerd image store both locally and remotely, we will finally be able to do what you've done without the registry wrapper.

      • By cik 2025-06-20 10:41

        You're bang on, but you can do things with dive (https://github.com/wagoodman/dive) and use chunks of the code in other projects... That's what I've been doing. The license is MIT so it's permissive.

        But yes, an API would be ideal. I've wasted far too much time on this.

  • By nine_k 2025-06-19 00:04 · 3 replies

    Nice. And the `pussh` command definitely deserves the distinction of one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its sister standard command.

    • By gchamonlive 2025-06-19 01:59 · 2 replies

      It's fine, but it wouldn't hurt to have a more formal alias like `docker push-over-ssh`.

      EDIT: The reason I think it's important is that in automations developed collaboratively, "pussh" could be seen as a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as short-hand/full flags.

      • By psviderski 2025-06-19 04:01

        That's a valid concern. You can very easily give it whatever name you like. Docker looks for `docker-COMMAND` executables in the ~/.docker/cli-plugins directory, making COMMAND a `docker` subcommand.

        Rename the file to whatever you like, e.g. to get `docker pushoverssh`:

          mv ~/.docker/cli-plugins/docker-pussh ~/.docker/cli-plugins/docker-pushoverssh
        
        Note that Docker doesn't allow dashes in plugin commands.

      • By whalesalad 2025-06-19 17:28

        can easily see an engineer spotting pussh in a ci/cd workflow or something and thinking "this is a mistake" and changing it.

    • By EricRiese 2025-06-19 01:14 · 1 reply

      > The extra 's' is for 'sssh'

      > What's that extra 's' for?

      > That's a typo

    • By someothherguyy 2025-06-19 02:13 · 1 reply

      and prone to collision!

      • By nine_k 2025-06-19 03:00 · 2 replies

        Indeed so! Because it's art, not engineering. The engineering approach would require a recognizably distinct command, eliminating the possibility of such a pun.
