I ditched Docker for Podman

2025-09-05 11:56 - codesmash.dev

Podman offers better security, uses fewer resources, and integrates seamlessly with Linux and Kubernetes, making it a superior Docker alternative

I'm old enough to remember when Vagrant looked like a promised land where every development environment would look the same. Differences between language versions, or an unusual OS release, could cost you days of unproductive debugging of your development environment. I felt a similar excitement when I started my first Docker Swarm (who uses that these days?!) - it felt revolutionary. Docker wasn't just a tool - it fundamentally changed how we thought about application development and deployment. Having a repeatable environment, separated from your local system, was refreshing and looked like a superpower. It became a must-have tool for every engineer. "Just Dockerize it" became my go-to solution for pretty much everything. Sure, the architecture or defining a new Docker image could be a bit finicky at times, but hey, that's just how things worked. Was the persistent dockerd daemon, eating up resources in the background with root privileges, just the price of doing business? I thought so.

If you stay in this industry long enough, one pattern keeps emerging: sooner or later, everybody starts questioning the "that's just how it's done" mentality. Along the way, the quiet Docker daemon running in the background felt less like a comfortable constant and more like a ticking bomb. More and more ways to exploit this design kept emerging:

2019-02-11 - CVE-2019-5736 (runC container escape): lets a process in a container overwrite the host’s runc binary → full host compromise if exploited.

2022-03-07 - CVE-2022-0847 "Dirty Pipe" (Linux kernel): allows overwriting data in read-only files via a kernel flaw; practical container-to-host abuse scenarios documented by Docker/Sysdig.

2022-03-07 - CVE-2022-0492 (cgroups v1 release_agent): privilege escalation / container escape via cgroups v1; mitigations via seccomp/AppArmor/SELinux.

2024-01-31 - CVE-2024-21626 (runC “Leaky Vessels”): fd leak + process.cwd issues enabling host FS access and potential escape; fixed in runC 1.1.12 (Docker Engine ≥ 25.0.2).

2024-02-01 - CVE-2024-23651/23652/23653 (BuildKit, “Leaky Vessels”): build-time issues that can affect host files; fixed in BuildKit 0.12.5.

2024-09-23 - In-the-wild cryptojacking campaign: attackers targeted exposed Docker APIs and microservices.

2024-10-01 - Docker API swarm botnet campaign: cryptojacking via the exposed Docker Engine API.

I had been seeking an alternative (I assumed someone had already questioned the status quo), and that's how I stumbled into Podman territory. What began as casual curiosity - "Hey, let me check out this thing" - turned into a complete overhaul of my container workflows and pulled me into using Fedora in my home lab. And honestly? I wish I'd made the switch sooner.

Here's the fundamental issue that kept me awake: Docker's entire architecture is built around a persistent background service - the dockerd daemon. Whenever you run a docker command, you're actually talking to this daemon, which then does the heavy lifting. Sounds about right?

Yes?!

Or rather NO, because this daemon runs with root privileges. Always. And if something goes south with the daemon - an innocent bug, a crash, or, worst case, a security exploit - your entire container ecosystem is potentially compromised. Not just the containers, the daemon, or the resources you assigned to it, but the whole host system. It was a huge relief that Podman threw this model out the window. No daemon, no processes running in the background. When you run podman run my-app, the container becomes a direct child of your command, and it runs under your user's privileges. A simple architecture change with huge implications:

Remember those late-night security advisories about Docker daemon vulnerabilities (e.g., when dockerd was misconfigured to listen on TCP:2375 without TLS, attackers could spin up privileged containers remotely)? With Podman, even if someone somehow escalates privileges inside a container to root level, they're still just an unprivileged user on the actual host. It significantly reduces the attack surface.

Usually the Docker daemon runs just fine. But when hiccups kick in - oh boy, hold onto your hats, because it can take down multiple containers at once. With Podman, when one container crashed, the others kept running like nothing had happened. It makes so much sense, and it's built in the spirit of hermetic isolation.

I was surprised when my MacBook M2 Pro started to get warm when left unattended. After a brief investigation (with Activity Monitor), it was obvious - Docker never knows when to stop. No constantly running daemon means less memory usage. Unfortunately, running a container using Podman on macOS can be a different story (ahem: blog.podman.io/2025/06/podman-and-apple-ros..) - yet things are getting better: blog.podman.io/2025/08/podman-5-6-released-...

Beyond the obvious daemon advantages, Podman brings some genuinely clever features that make day-to-day container work more pleasant:

Systemd integration that doesn't suck: This one's huge if you're working on Linux servers (most of us are). Podman just generates proper systemd unit files. Boom, your container is a first-class citizen in the Linux service ecosystem. Boot dependencies, automatic restarts, resource limits - it all just works. I can run podman generate systemd --name my-app and get a clean service file. Afterwards, I can enable, start, stop, and monitor with standard systemctl commands. Say bye-bye to third-party process managers.
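
To give a sense of what that buys you, here's an abridged sketch of the kind of unit file podman generate systemd emits (the exact output varies by Podman version; my-app is the hypothetical container from the command above):

[Unit]
Description=Podman container-my-app.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start my-app
ExecStop=/usr/bin/podman stop -t 10 my-app
Type=forking

[Install]
WantedBy=default.target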

Kubernetes alignment that's not just marketing: Since Red Hat (the folks behind Podman) is also a major Kubernetes contributor, the tool feels like it was designed with K8s in mind from day one. The native pod support isn't just a bolt-on feature - it's central to how Podman works. I don't need to run k3s or any local substitute for Kubernetes. I can prototype multi-container applications as Podman pods locally, then generate Kubernetes YAML directly from those pods with podman generate kube. My local development environment actually looks like what I'm going to deploy. This was revolutionary when I had to take over responsibility for managing and developing quite a complex cluster.
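
A minimal sketch of that round trip (pod, container, and image names are made up for illustration):

podman pod create --name demo-pod -p 8080:8080

podman run -d --pod demo-pod --name web my-app:latest

podman generate kube demo-pod > demo-pod.yaml

podman play kube demo-pod.yaml

The same demo-pod.yaml can be handed to kubectl apply, so the pod you debugged locally is the pod you ship.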

The Unix philosophy done right: Instead of trying to be everything to everyone, Podman focuses on running containers well and delegates specialized tasks to purpose-built tools. Need to build images with fine-grained control? That's Buildah. Want to inspect or copy images between registries? Skopeo's your friend. I can use the best tool for each job. I'm no longer stuck with whatever image-building quirks Docker decides to implement.
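
For instance (registry.example.com stands in for your own registry):

buildah build -t my-fastapi-app:latest .

skopeo inspect docker://docker.io/library/python:3.10-slim

skopeo copy docker://docker.io/library/python:3.10-slim docker://registry.example.com/python:3.10-slim

buildah builds from the same Dockerfile, skopeo inspect reads image metadata without pulling anything, and skopeo copy moves an image between registries with no local daemon in sight.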

Here's the part that surprised me most: switching from Docker to Podman was almost seamless. The Podman folks clearly understood that inventing yet another standard wouldn't win them the market, so they simply mirrored the familiar CLI. I literally just aliased docker=podman in my shell and carried on with life. podman run, podman build, podman ps - they all behave exactly like their Docker counterparts. My existing Dockerfiles worked without modification. My muscle memory didn't need retraining.

Though there were a few places where I did hit differences that were actually improvements in disguise:

  • Privileged ports in rootless mode not working? Good! That's security working as intended. A reverse proxy setup is a better architecture anyway.

  • Some volume permission quirks? Yes - but it's a small price, and again - if you do it right, you are limiting the scope of a possible attack.

  • A few legacy tools that expected the Docker socket? If something still lacks support by now, remember that Podman can expose a Docker-compatible API if needed.

  • If your Docker Compose workflow is overly complex, just convert it to Kubernetes YAML. We all use Kubernetes these days, so why even bother with Compose? Having the same layout for development and production is a huge bonus.

After six months of running Podman in production, here's what I've noticed:

I'm sleeping much better. I'm personally responsible for security, and I no longer have to check whether every container is running in rootless mode - with Podman, it is by default. Something I didn't expect to benefit from: my monitoring dashboards now show cleaner resource usage patterns.

Don't get me wrong - Docker isn't going anywhere. It has massive momentum, a mature ecosystem, and plenty of organizational inertia keeping it in place. But for new projects, or if you are able to make technical decisions based on merit rather than legacy, Podman represents a clear evolution in container technology. It is more secure by design, more aligned with Linux system management practices, and more thoughtfully architected for the way we actually deploy containers in 2025. The best way forward is to question the assumptions you didn't even realize you were making.

Just to prove how easy the transition can be, here's a practical walkthrough of migrating a FastAPI application from Docker to Podman. You'll need:

  • Your existing FastAPI project with its Dockerfile and requirements.txt

  • Podman installed on your system:

  • Ubuntu/Debian: sudo apt update && sudo apt install podman

  • Fedora/RHEL: sudo dnf install podman

  • macOS: Grab Podman Desktop for a GUI experience

  • Windows: If you are not a C# developer - stop doing this to yourself and just use Linux: youtube.com/watch?v=S_RqZG6YR5M

This is the beautiful part - Podman uses the same OCI container format as Docker. Your existing Dockerfile should work without any changes. Here's a typical FastAPI setup:

FROM python:3.10-slim-buster

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir --upgrade -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Instead of docker build, just run:

podman build -t my-fastapi-app:latest .

That's it. Same flags, same behavior, same output. If you want to ease the transition, create an alias:

alias docker=podman

Now you can use your existing docker build commands without thinking about it.

For development and testing:

podman run --rm -p 8000:8000 --name my-fastapi-container my-fastapi-app:latest

For background services:

podman run -d -p 8000:8000 --name my-fastapi-container my-fastapi-app:latest

Your app should be accessible at localhost:8000 just like before.
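
A quick smoke test (this assumes you haven't disabled FastAPI's interactive docs, which are served at /docs by default):

curl -i http://localhost:8000/docs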

Important note: By default, Podman runs in rootless mode. This is a security win, but it means you can't bind directly to privileged ports (below 1024). For production, you'll want a reverse proxy anyway, so this pushes you toward better architecture.
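
If you truly need a low port on a single-user box, one workaround is lowering the kernel's unprivileged-port floor - shown here as a sketch, since it loosens exactly the restriction rootless mode gives you:

sudo sysctl net.ipv4.ip_unprivileged_port_start=80

That change doesn't survive a reboot; drop a file into /etc/sysctl.d/ to make it permanent.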

This is where Podman really shines. Instead of wrestling with custom service management, generate a proper systemd unit file:

podman run -d -p 8000:8000 --name my-fastapi-container my-fastapi-app:latest

mkdir -p ~/.config/systemd/user/

podman generate systemd --name my-fastapi-container > ~/.config/systemd/user/my-fastapi-container.service

systemctl --user daemon-reload

systemctl --user enable my-fastapi-container.service

systemctl --user start my-fastapi-container.service

Now your FastAPI app is managed like any other system service. It'll start on boot, restart on failure, and integrate with standard Linux logging and monitoring tools.
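
All the usual systemd tooling applies from here - status checks, and following the container's logs through journald:

systemctl --user status my-fastapi-container.service

journalctl --user -u my-fastapi-container.service -f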

For server deployments where you want the service to persist even when you're not logged in:

loginctl enable-linger $(whoami)

If your FastAPI app needs a database or other services, Podman's pod concept is cleaner than Docker Compose for simple setups:


podman pod create --name my-fastapi-pod -p 8000:8000 -p 5432:5432

podman run -d --pod my-fastapi-pod --name fastapi-app my-fastapi-app:latest

podman run -d --pod my-fastapi-pod --name postgres-db -e POSTGRES_PASSWORD=mysecretpassword postgres:13

Now your FastAPI app can reach PostgreSQL at localhost:5432 because they share the same network namespace.
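
If your app reads its connection string from the environment - DATABASE_URL is a common convention, not something FastAPI defines, so adjust to however your app is configured - you'd start the app container like this instead:

podman run -d --pod my-fastapi-pod --name fastapi-app \
  -e DATABASE_URL=postgresql://postgres:mysecretpassword@localhost:5432/postgres \
  my-fastapi-app:latest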

For existing Docker Compose setups, you have options:

Option 1: Use podman-compose as a drop-in replacement:

pip install podman-compose

podman-compose up -d

Option 2: Convert to Kubernetes YAML for a more cloud-native approach (kompose is a separate tool you'll need to install):

kompose convert -f docker-compose.yml -o k8s-manifest.yaml

podman play kube k8s-manifest.yaml

This second option is particularly nice if you're planning to deploy to Kubernetes eventually.

Common Gotchas and Solutions

Volume permissions: If you hit permission issues with mounted volumes, remember that rootless containers run as your user. Make sure your user owns the directories you're mounting:

chown -R $(id -un):$(id -gn) /path/to/your/data
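
Two other knobs worth knowing, both standard Podman options: on SELinux systems (Fedora, RHEL) the :Z suffix relabels a volume so the container may write to it, and --userns=keep-id maps your host UID into the container so files stay owned by you (the paths below are placeholders):

podman run -d -p 8000:8000 -v $(pwd)/data:/app/data:Z my-fastapi-app:latest

podman run -d -p 8000:8000 --userns=keep-id -v $(pwd)/data:/app/data my-fastapi-app:latest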

Legacy tooling: Some tools expect the Docker socket at /var/run/docker.sock. Podman can provide a compatible API:

systemctl --user enable podman.socket

systemctl --user start podman.socket

export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
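
To check that the socket answers Docker-style requests, hit the compatibility API's _ping endpoint (the hostname in the URL is ignored when curl talks to a Unix socket):

curl --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://d/_ping

It should respond with OK; after that, tools pointed at DOCKER_HOST talk to Podman without knowing the difference.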

Performance tuning: For production workloads, you might want to tune the rootless networking stack or consider running specific containers in rootful mode for maximum performance.

The migration process is usually much smoother than people expect. Start with a development environment, get comfortable with the workflow differences, then gradually move production workloads. The security and operational benefits make it worth the effort.



Comments

  • By ttul 2025-09-05 19:09 (5 replies)

    Back in 2001/2002, I was charged with building a WiFi hotspot box. I was a fan of OpenBSD and wanted to slim down our deployment, which was running on Python, to avoid having to copy a ton of unnecessary files to the destination systems. I also wanted to avoid dependency-hell. Naturally, I turned to `chroot` and the jails concept.

    My deployment code worked by running the software outside of the jail environment and monitoring the running processes using `ptrace` to see what files it was trying to open. The `ptrace` output generated a list of dependencies, which could then be copied to create a deployment package.

    This worked brilliantly and kept our deployments small and immutable and somewhat immune to attack -- not that being attacked was a huge concern in 2001 as it is today. When Docker came along, I couldn't help but recall that early work and wonder whether anyone has done a similar thing to monitor file usage within Docker containers and trim them down to size after observing actual use.

    • By sroerick 2025-09-05 19:32 (14 replies)

      The best CI/CD pipeline I ever used was my first freelance deployment using Django. I didn't have a clue what I was doing and had to phone a friend.

      We set up a git post receive hook which built static files and restarted httpd on a git receive. Deployment was just 'git push live master'.

      While I've used Docker a lot since then, that remains the single easiest deployment I've ever had.

      I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

      Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

      • By rcv 2025-09-05 21:18 (3 replies)

        > I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

        Sounds great if you're only running a single web server or whatever. My team builds a fairly complex system that's comprised of ~45 unique services. Those services are managed by different teams with slightly different language/library/etc needs and preferences. Before we containerized everything it was a nightmare keeping everything in sync and making sure different teams didn't step on each other's dependencies. Some languages have good tooling to help here (e.g. Python virtual environments) but it's not so great if two services require a different version of Boost.

        With Docker, each team is just responsible for making sure their own containers build and run. Use whatever you need to get your job done. Our containers get built in CI, so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine. And if it runs on my machine, I have very good confidence it will run on production.

        • By sroerick 2025-09-06 1:47 (3 replies)

          OK, this seems like an absolutely valid use case. Big enterprise microservice architecture, I get it. If you have islands of dev teams, and a dedicated CI/CD dev ops team, then this makes more sense.

          But this puts you in a league with some pretty advanced deployment tools, like high-level K8s, Ansible, cloud orchestration work, and nobody thinks those tools are really that appropriate for the majority of dev teams.

          People are out here using docker for like... make install.

          • By rglullis 2025-09-06 10:49 (1 reply)

            Imagine you have a team of devs, some using macOS, some using Debian, some using NixOS, some on Windows + WSL. Go ahead and try to make sure that everyone's development environment stays in sync by simply running "git pull" and "make dev"

            • By Fanmade 2025-09-06 11:06

              Ha, I've written a lot of these Makefiles and the "make dev" command even became a personal standard that I added to each project. I don't know if I read about that, or if it just developed into that because it just makes sense. In the last few years, these commands very often started a docker container, though. I do tend to work on Windows with WSL and most of my colleagues use macOS or Linux, so that's definitely one of the reasons why docker is just easier there.

          • By AlphaSite 2025-09-06 2:12 (2 replies)

            Having a reproducible dev environment is great when everyone’s laptop is different and may be running different OSes, libraries, runtimes, etc.

            Also, docker has the network effect. If there were a good lightweight tool that was enough better, people would absolutely use it.

            But it doesn’t exist.

            In an ideal world it wouldn’t exist, but we don’t live there.

            • By a012 2025-09-06 3:36 (1 reply)

              > Having a reproducible dev environment is great when everyone’s laptop is different and may be running different OSes, libraries, runtimes, etc.

              Docker and other containerization solved the “it works on my machine” issue

              • By em-bee 2025-09-06 5:10 (1 reply)

                almost. there is still an issue with selinux. i just had that case. because the client develops with selinux turned off, the docker containers don't run on my machine if i have selinux turned on.

                • By znpy 2025-09-06 10:56 (2 replies)

                  you're missing an intermediate environment (staging, pre-prod, canary, whatever you want to call it) with selinux turned on.

                  • By em-bee 2025-09-06 12:52

                    i don't. the customer does. and they don't seem to care. turning selinux off works for them and they are not paying me to fix that or work around it.

                  • By YJfcboaDaJRDw 2025-09-06 13:07

                    [dead]

            • By adastra22 2025-09-06 5:21

              Docker is that lightweight tool, isn’t it? It doesn’t seem that complex to me. Unfamiliar to those who haven’t used it, but not intrinsic complexity.

          • By em-bee 2025-09-06 5:07 (1 reply)

            15 years ago i had a customer who ran a dozen different services on one machine, php, python, and others. a single dev team. upgrading was a nightmare. you upgraded one service, it broke another. we hadn't yet heard about docker, and used proxmox. but the principle is the same. this is definitely not just big enterprise.

            • By johnisgood 2025-09-06 17:45 (1 reply)

              That is wild. I have been maintaining servers with many services and upgrading never broke anything, funnily enough: on Arch Linux. All the systems where an upgrade broke something were Ubuntu-based ones. So perhaps the issue was not so much about the services themselves, but the underlying Linux distribution and its presumably shitty package manager? I do not know the specifics so I cannot say, but in my case it was always the issue. Since then I do not touch any distribution that is not pacman-based, in fact, I use Arch Linux exclusively, with OpenBSD here and there.

              • By em-bee 2025-09-06 19:33 (2 replies)

                i used "broken" generously. it basically means that for example for multiple php based services, we had to upgrade them all at once, which lead to a large downtime until everything was up and running again. services in containers meant that we could deal with them one at a time and dramatically reduce the downtime and complexity of the upgrade process.

                • By curt15 2025-09-06 20:19 (1 reply)

                  Would there still have been a problem if you were able to install multiple php versions side-by-side? HPC systems also have to manage multiple combinations of toolchains and environments and they typically use Modules[1] for that.

                  [1] https://hpc-wiki.info/hpc/Modules

                  • By em-bee 2025-09-06 23:56

                    probably not, but it wasn't just php, and also one of the goals was the ability to scale up. and so having each service in its own container meant that we could move them to different machines and add more machines as needed.

                • By johnisgood 2025-09-06 20:18

                  Oh, I see what you mean now, okay, that makes sense.

                  I would use containers too, in such cases.

        • By SkiFire13 2025-09-06 12:57 (1 reply)

          > so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine.

          It seems you never had to deal with timezone-dependent tests.

          • By sroerick 2025-09-06 17:30

            What are timezone-dependent tests? Sounds like a bummer

        • By const_cast 2025-09-05 22:01 (2 replies)

          > My team builds a fairly complex system that's comprised of ~45 unique services.

          Well there's your problem: you have a distributed system. Those are really hard.

          Solution: don't do that. Make it a monolithic system.

          Obviously that ship has sailed for you, but I mean in the general sense. Making distributed systems is really, really hard and you should really think it through.

          It's a lot like parallel programming. So enticing. But probably don't do that.

          • By latentsea 2025-09-06 1:16

            I like how you didn't even ask for any context that would help you evaluate whether or not their chosen architecture is actually suitable for their environment before just blurting out advice that may or may not be applicable (though you would have no idea, not having enquired).

          • By Bnjoroge 2025-09-05 22:16 (3 replies)

            what they described is a fairly common setup in damn near most enterprises

            • By fulafel 2025-09-06 18:03

              Enterprises are frequently antipattern zoos. If you have many teams you can use the modular monolith pattern instead of microservices, that way you have the separation but not the distributed system.

            • By nazgul17 2025-09-05 23:44

              Both can be true

            • By sroerick 2025-09-06 17:31

              Wherefore art thou IBM

      • By bolobo 2025-09-05 20:18 (1 reply)

        > I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

        For me, as an ex-ops person, the value proposition is being able to package a complex stack made of one or more DBs, several services and tools (ours and external), plus describe the interface of these services with the system in a standard way (env vars + mount points).

        It massively simplifies the onboarding experience, makes updating the stack trivial, and also allows devs, CI, and prod to run the same version of all the libraries and services.

        • By sroerick 2025-09-06 1:33 (2 replies)

          OK, I completely agree with this.

          That said, I'm not a nix guy, but to me, intuitively NixOS wins for this use case. It seems like you could either

          A. Use declarative OS installs across deployments, or

          B. Put your app into a container which sometimes ships its own kernel and sometimes doesn't, push it to a third-party cloud registry (or set up your own registry), and then run that container on a random Ubuntu box or cloud hosting site where you basically don't administer or do any ops - you just use it as an empty vessel which exists to run your Docker container.

          I get that in practice, these are basically the same, and I think that's a testament to the massive infrastructure work Docker, Inc has done. But it just doesn't make any sense to me

          • By sterlind 2025-09-06 4:41

            you can actually declare containers directly in Nix. they use the same config/services/packages machinery as you'd use to declare a system config. and then you can embed them in the parent machine's config, so they all come online and talk to each other with the right endpoints and volumes and such.

            or you can use the `build(Layered)Image` to declaratively build an oci image with whatever inside it. I think you can mix and match the approaches.

            but yes I'm personally a big fan of Nix's solution to the "works on my machine" problem. all the reproducibility without the clunkiness of having to shell into a special dev container, particularly great for packaging custom tools or weird compilers or other finicky things that you want to use, not serve.

          • By bolobo 2025-09-06 10:48

            The end result will be the same but I can give 3 docker commands to a new hire and they will be able to set up the stack on their MacBook or Linux or Windows system in 10 minutes.

            Nix is, as far as I know, not there and we would probably need weeks of training to get the same result.

            Most of the time the value of a solution is not in its technical perfection but in how many people already know it, the documentation, and, more importantly, all the dumb tooling that's around it!

      • By Shog9 2025-09-05 19:42 (3 replies)

        Reproducibility? No.

        Not having to regularly rebuild the whole dev environment because I need to work on one particular Python app once a quarter and its build chain reliably breaks other stuff? Priceless.

        • By janjongboom 2025-09-05 20:17 (1 reply)

          This false sense of reproducibility is why I funded https://docs.stablebuild.com/ some years ago. It lets you pin stuff in dockerfiles that is normally unpinnable, like OS package repos, docker hub tags, and random files on the internet. So you can go back to a project a year from now and actually get the same container back again.

          • By jselysianeagle 2025-09-05 20:40 (3 replies)

            Isn't this problem usually solved by building an actual image for your specific application, tagging that and pushing to some docker repo? At least that's how it's been at places I've worked at that used docker. What am I missing?

            • By lmm 2025-09-06 1:55 (3 replies)

              What do you do when you then actually need to make a change to your application (e.g. a 1-liner fix)? Edit the binary image?

              • By macNchz 2025-09-06 13:42

                Build and tag internal base images on a regular cadence that individual projects then use in their FROM. You’ll have `company-debian-python:20250901` as a frozen-in-time version of all your system level dependencies, then the Dockerfile using it handles application-level dependencies with something that supports a lockfile (e.g. uv, npm). The application code itself is COPY’d into the image towards the end, such that everything before it is cached, but you’re not relying on the cache for reproducibility, since you’re starting from a frozen base image.

                The base image building can be pretty easily automated, then individual projects using those base images can expect new base images on a regular basis, and test updating to the latest at their leisure without getting any surprise changes.

              • By xylophile 2025-09-06 5:27 (1 reply)

                You can always edit the file in the container and re-upload it with a different tag. That's not best practice, but it's not exactly sorcery.

                • By lmm 2025-09-06 10:40

                  It's not, but at that point you're giving up on most of the things Docker was supposed to get you. What about when you need to upgrade a library dependency (but not all of them, just that one)?

              • By zmmmmm 2025-09-06 7:29 (1 reply)

                you append it to the end of the Dockerfile so that the previous image is still valid with its cached build steps

                • By lmm 2025-09-06 10:39

                  And just keep accreting new layers indefinitely?

            • By jamwil 2025-09-05 21:31

              Perhaps more focused on docker-based development workflows than final deployment.

            • By fcarraldo 2025-09-05 22:32

              Builds typically aren’t retained forever.

        • By northzen 2025-09-05 22:27

          Use pixi or uv to maintain this specific environment and separate it from the global one

        • By sroerick 2025-09-06 1:26 (1 reply)

          I know this pain, and Docker absolutely makes sense for this use case, but I feel like we would both agree that this is a duct tape and bubble gum solution? Though a totally justifiable one

          • By Shog9 2025-09-06 1:46

            Oh sure. 20 years ago I used VMs and that was also a duct tape solution. I'd have hoped for a proper solution by now, but a lighter hack works too

      • By kqr 2025-09-06 13:46 (1 reply)

        Docker in and of itself does not do you much good. Its strength comes from the massive amounts of generic tooling that is built around the container as the standard deployable unit.

        If you want to handle all your deployments the same way, you can basically only choose between Nix and containers. Unfortunately, containers are far more popular and have more tooling.

        • By sroerick 2025-09-06 17:22

          I think this is accurate. It just feels like a lot of "we use Docker because everybody uses Docker. That's just the way we do it."

          But if you actually add up the time we spend using docker, I'm really not sure it saves that many cycles

      • By antihero 2025-09-06 8:26

        > Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

        Yes. Everything on my box is ephemeral and can be deleted and recreated or put on another box with little-to-no thought. Infrastructure-as-code means my setup is immutable and self-documented.

        It's a little more time to set up initially, but now I know exactly what is running.

        I don't really understand the 24/7 comment, now that it is set up there's very very little maintenance. Sometimes an upgrade might go askew but that is rare.

        Any change to it is recorded as a git commit, I don't have to worry about logging what I've done ever because it's done for me.

        Changes are handled by a GitHub action, all I have to do to change what is running is commit a file, and the infra will update itself.

        I don't use docker-compose, I use a low-overhead microk8s single-node cluster that I don't think about at all really, I just have changes pushed to it directly with Pulumi (in a real environment I'd use something like ArgoCD) and everything just works nicely. Ingress to services is done through Cloudflare tunnels so I don't even have to port-forward or think about NAT or anything like this.

        To update my personal site, I just do a git commit/push, its CI/CD builds a container and then updates the Pulumi config in the other repo to point to the latest hash, which then kicks off an action in my infra repo to do a Pulumi apply.

        Currently it runs on Ubuntu but I'm thinking of using Talos (though it's still nice to be able to just SSH to the box and mess around with files).

        I'm not sure why people struggle with this, or the benefits of this approach, so much? It seems like a lot of complexity if you're inexperienced, but if you've been working with computers for a long time, it isn't particularly difficult—there are far more complicated things that computers do.

        I could throw the box (old macbook) in a lake and be up and running with every service on a new box in an hour or so. Or I could run it on the cloud. Or a VPS, or metal, or whatever really, it's a completely portable setup.

      • By roozbeh18 2025-09-05 20:08

        Someone wrote a PHP7 script to generate some of our daily reports a while back that nobody wants to touch. Docker happily runs the PHP7 code in the container and generates the reports on any system. It's portable, and it doesn't require upkeep.

      • By ctkhn 2025-09-06 12:48

        Just for my home server, I have more than 10 containers for home assistant, vpn, library management for movies/tv/music, photos backup, password manager, and a notes server. I started without knowing what docker was, and in less than a year realized running services directly on my OS was more hassle than I wanted both with compatibility between services dependencies, networking setup for them, and configuring reboots and upgrades. I would say the reproducibility and configurability is easily worth the slight overhead and in my experience even reduced it.

      • By tasuki 2025-09-06 8:06

        > We set up a git post receive hook which built static files and restarted httpd on a git receive. Deployment was just 'git push live master'.

        I still do that for all my personal projects! One of the advantages of docker is that you don't have to rebuild the thing on each deployment target.

      • By twelvedogs 2025-09-06 14:18

        > Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

        Why wouldn't it be, containers are super easy to manage, dockerd uses bugger all resources in dev (on Linux anyway) and docker compose files are the simplest setup scripts I've ever used

        I like docker because it's easy and I'm lazy

      • By ownagefool 2025-09-06 7:06

        Forget docker for a second.

        Suddenly you're in a team with 2-3 people and one of them likes to git push broken code and walk off.

        Okay, let's make this less about working with a jackass: same setup, but every 5 minutes of downtime costs you millions of dollars. One of your pushes works locally but doesn't work on the server.

        The point of a more structured / complex CI/CD process is to eliminate failures. As the stakes become higher, and the stack becomes more complex, the need for the automation grows.

        Docker is just a single part of that automation that makes other things possible / lowers a specific class of failures.

      • By bonzini 2025-09-06 4:54

        QEMU used a similar CI for its website before switching to Gitlab pages:

        https://gist.github.com/bonzini/1abbbdec739e77503945a3605e0e...

      • By strzibny 2025-09-06 9:05

        I know well what you are talking about since I did something similar, but I finally moved to Docker with Kamal (except one project I still have to move). The advantage of Docker's reproducibility is having peace of mind when it comes to rollbacks and running exact versions of system dependencies. If anyone is curious, I wrote Kamal Handbook to help people adopt Kamal, which I think brings all the niceness to Docker deployment so it's not annoying.

      • By IanCal 2025-09-05 21:23

        Managing and running some containers is really easy though. And running daemons? Don’t we all have loads of things running all the time?

        I find it easier to have the same interface for everything, where I can easily swap around ports.

      • By throwmeaway222 2025-09-05 22:26 (1 reply)

        > I didn't have a clue what I was doing and had to phone a friend.

        > I genuinely don't understand what docker brings to the table.

        I think you invalidated your own opinion here

        • By sroerick 2025-09-06 1:38 (1 reply)

          Sorry, sir, I didn't realize nobody should ever spend any time learning anything or, failing that, describe what happened to them during that time. I'm no neckbeard savant but I do have a dozen years of deploying web apps and also using Docker during that time, so I think I'm allowed to have an opinion. Go drink some warm milk, you will feel better.

          • By throwmeaway222 2025-09-06 22:36

            You have 12 years of deployment experience and some of that using docker, would have been a more useful thing to say in your OC. I was literally just pointing out your argument was pretty weak - this context would have made it stronger.

    • By bmgoau 2025-09-05 19:51 (1 reply)

      First result on Google, 22k stars https://github.com/slimtoolkit/slim

      • By ttul 2025-09-06 20:52

        Super cool looking project. I always thought this concept was useful and wondered why base Docker did not incorporate the same idea.

    • By champtar 2025-09-05 22:08

      In OpenWrt there is ujail: you give it an ELF (or multiple) to run, it'll parse them to find all the libraries they need, then it creates a tmpfs and bind-mounts the required files read-only. https://github.com/openwrt/procd/blob/dafdf98b03bfa6014cd94f...

    • By kqr 2025-09-06 13:36

      I interviewed for a startup that does exactly this, except also for syscalls etc. They're mainly focused on security and not size. https://bifrostsec.com/

      (I ended up taking another offer but I still think they're onto something.)

  • By t43562 2025-09-05 12:31 (7 replies)

    To provide 1 contrary opinion to all the others saying they have a problem:

    Podman rocks for me!

    I find docker hard to use and full of pitfalls and podman isn't any worse. On the plus side, any company I work for doesn't have to worry about licences. Win win!

    • By nickjj 2025-09-05 12:51 (35 replies)

      > On the plus side, any company I work for doesn't have to worry about licences. Win win!

      Was this a deal breaker for any company?

      I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.

      If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license. So $90 / year for everyone, but if you have US developers your all-in payroll is probably going to be over $200,000 per developer or roughly $2 million dollars. In that context $90 is practically nothing. A single lunch for the dev team could cost almost double that.

      To me that is a bargain, you're getting an officially supported tool that "just works" on all operating systems.

      • By csours 2025-09-05 14:18 (1 reply)

        Companies aren't monoliths, they're made of teams.

        Big companies are made of teams of teams.

        The little teams don't really get to make purchasing decisions.

        If there's a free alternative, little teams just have to suck it up and try to make it work.

        ---

        Also consider that many of these expenses are borne by the 'cost center' side of the house, that is, the people who don't make money for the company.

        If you work in a cost center, the name of the game is saving money by cutting expenses.

        If technology goes into the actual product, the cost for that is accounted for differently.

        • By citizenpaul 2025-09-05 20:15 (1 reply)

          It always amazes me how hostile most large companies are to paying for developer tools that have a trivial cost. Then they will approve the budget for some rah-rah quarterly profit party no one cares about that costs $100k for the venue rental alone.

          I do understand that this is mostly because management wants staff to be replaceable and disposable - having specialty tools suggests that a person can be unique.

          • By flyinglizard 2025-09-05 22:49 (1 reply)

            No, it's not because of that. It's because:

            1. You want to control spend - there are budgets.

            2. You want to control accounting - minimize the number of vendors you work with. Each billing needs to come with an invoice, these need to be managed, when a developer leaves you need to cancel their seat, etc. It's a pain.

            3. You want to control compliance - are these tools safe? Are they accessing sensitive data? Are they audited?

            4. You want to control interoperability between teams. Can't have it become a zoo of bring-your-own stuff.

            So free tools get around all of these, you can just wing it under the radar and if the tool becomes prominent enough then you go fight the war to have it adopted. Once there's spend, you need to get into line. And that line makes a lot of sense when you're into 30 developers, let alone hundreds.

            • By strken 2025-09-06 10:04

              If you've got 30 developers then you've probably got, what, five or six teams? Your tech leads/senior engineers/whoever provides tech leadership at a team level are operating at a scale where they can go to the pub with your head of engineering/CTO/each other/the dude from finance who has a credit card and fit around a table.

              I've worked at companies that size and the "war" involved putting time in the calendar of the head of engineering, asking how his son was, demoing the product we wanted for about two minutes and explaining the pain point it solved, then promising to get our legal team and the one security person to review it after he put the credit card in and before we used it in prod. When I worked somewhere larger it was much more difficult.

      • By akerl_ 2025-09-05 12:54 (5 replies)

        The problem isn’t generally the cost, it’s the complexity.

        You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.

        • By nickjj 2025-09-05 13:04 (11 replies)

          A large company who is buying licenses for tools has to deal with this for many different things. Docker is not unique here.

          An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.

          Even if it's not automated, it's normal for a team to email IT / HR with new hire requirements. Having a list of tools that need licenses in that email is something I've seen at plenty of places.

          I would say there's lots of other tools where onboarding is more complicated from a license perspective because it might depend on if a developer wants to use that tool and then keeping tabs on if they are still using it. At least with Docker Desktop it's safe to say if you're on macOS you're using it.

          I guess I'm not on board with this being a major conflict point.

          • By Aurornis 2025-09-05 15:27 (2 replies)

            > An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.

            Correct, but every additional software package and each additional license adds more to track.

            Every new software license requires legal to review it.

            These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)

            Then they start investigating how often people use software packages and realize most people aren't actually using most software they have seats for. This happens because when software feels 'free' people request it for one-time use for a thing or to try it out and then forget about it, so you have low utilization across the board.

            So they start making it harder to add new software. They start auditing usage. They may want reports on why software is still needed and who uses it.

            It all adds up. I understand you don't think it should be this way, but it is at big companies. You're right that the $24/user per month isn't much, but it's one of dozens of fees that get added, multiplied by every employee in the company, and now they need someone to maintain licenses, get them reviewed, interact with the rep every year, do the negotiation battles, and so on. It adds up fast.

            • By oooyay 2025-09-05 17:59

              > Correct, but every additional software package and each additional license adds more to track.

              This is going to differ company to company but since we're narrowing it to large companies I disagree. Usually there's a TPM that tracks license distribution and usage. Most companies provide that kind of information as part of their licensing program (and Docker certainly does.)

              > Every new software license requires legal to review it.

              Yes, but this is like 90% of what legal does - contract review. It's also what managers do but more on the negotiation end. Most average software engineers probably don't realize it but a lot of cloud services, even within a managed cloud provider like AWS, require contract and pricing negotiation.

              > These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)

              As I said earlier, I can't speak for other companies but at large companies I've worked at this just simply isn't true. There's metrics for when the software isn't being used because the corporation is financially incentivized to shrink those numbers or consolidate on software that achieves similar goals. They're certainly individually tracked fairly far up the chain even if they do appear as a big number somewhere.

            • By Eduard 2025-09-05 22:51

              That is all the most basic bookkeeping; I cannot take the argument "$x/user × every employee adds up" seriously.

              Also, by 20 employees or computers at the latest, someone in charge of IT (sysadmin, IT department) would decide to use a software asset management tool (aka software inventory system) to automatically track, roll out, uninstall, and monitor vetted software. Anything else is just unprofessional.

          • By akerl_ 2025-09-05 13:07

            Idk what to tell you other than that it is.

            Large companies do have ways to deal with this: they negotiate flat rates or true-up cadences with vendors. But now you’ve raised the bar way higher than “just use podman”.

          • By dec0dedab0de 2025-09-05 14:48 (4 replies)

            It becomes a pain point when the IT team never heard of docker, all new licenses need to be approved by the legal department, and your manager is afraid to ask for any extra budget.

            Also, I don't want to have to troubleshoot why the docker daemon isn't running every time I need it

            • By regularfry 2025-09-05 16:28 (1 reply)

              I'll see your "IT team never heard of docker" and raise you "security want to ban local containers because they allow uncontrolled binaries onto corporate hardware.". But that's not something podman solves...

              • By mgkimsal 2025-09-05 17:20 (2 replies)

                Every single developer is running 'uncontrolled source code' on corporate hardware every single day.

                • By cyberpunk 2025-09-05 19:46 (1 reply)

                  The defence isn't against malicious developers writing evil code, but some random third party container launched via a curl | bash which mounts ~/ into it and posts all your ssh keys to some server in china... Or whatever.

                  Or so I was told when I made the monumental mistake of trying to fight such a policy once.

                  So now we just have a don't ask don't tell kind of gig going on.

                  I don't really know what the solution is, but dev laptops are goldmines for haxxors, and locking them down stops them from really being dev machines. shrug

                  • By zmmmmm 2025-09-06 7:32

                    > some random third party container launched via a curl | bash which mounts ~/ into it and posts all your ssh keys to some server in china

                    it's pretty stupid because the same curl | bash that could have done that could have just posted the same contents directly to the internet without the container. The best chance you actually have is to do as much development as possible inside a sealed environment like ... a container where at least you have some way to limit visibility of partially trusted code of your file system.

                • By regularfry 2025-09-06 11:26

                  And this is regarded as an existential problem which cannot be permitted to persist by some in the security space.

            • By 0cf8612b2e1e 2025-09-05 16:05 (2 replies)

              I have personally given up trying to get a $25 product purchased through official channels. The process can make everything painful.

              • By johannes1234321 2025-09-05 17:59 (1 reply)

                Congrats, the process fulfilled its purpose. Another small cost saved :)

                • By 0cf8612b2e1e 2025-09-05 18:22

                  Trust me, the thought crossed my mind. They definitely beat me.

              • By regularfry 2025-09-06 16:42

                It can be easier to spend £100K than £100.

            • By reaperducer 2025-09-05 15:35

              It becomes a pain point when the IT team never heard of docker

              Or when your IT department is prohibited from purchasing anything that doesn't come from Microsoft or CDW.

            • By axlee 2025-09-05 15:49 (4 replies)

              >It becomes a pain point when the IT team never heard of docker

              Where do you work? Is that even possible in 2025?

              • By cyberpunk 2025-09-05 19:49

                'corp IT' in a huge org is typically all outsourced MCSEs who are seemingly ignorant of every piece of technology outside of Azure.

                Or so it seems to me whenever I have to deal with them. We ended up with Microsoft defender on our corp Macs even.. :|

              • By anakaine 2025-09-05 20:08

                It's absolutely possible. We've also had them unaware of GitHub, and had them label Amazon S3 as a risk since it specifically wasn't Microsoft.

                There is no bottom to the barrel, and incompetence and insensitivity can rise quite high in some cases.

              • By dec0dedab0de 2025-09-05 20:34

                I work at a cool place now that is well aware of it, but in 2023 I worked at a very large insurance company with over a thousand people in IT. Some of the gatekeepers were not aware of docker. Luckily another team had set up Openshift, but then the approval process for using it was a nightmare.

              • By tracker1 2025-09-05 16:58

                Apparently they work in the past...

          • By Dennip 2025-09-05 14:15 (1 reply)

            Not sure on Docker Desktop's specifics, but usually large companies have enterprise/business licencing available and specifically do not deal with this, and do not want to manually deal with this, because they can use SSO & dynamically assign licenses to user groups etc.

            • By nullify88 2025-09-06 5:38

              Or use Microsoft MyAccess to have the users allocate a license themselves.

          • By stronglikedan 2025-09-05 13:59 (1 reply)

            Or just use Podman and don't worry about licenses, since it's just as good but sooo much easier.

            • By reaperducer 2025-09-05 15:37 (1 reply)

              Some day I hope to work for a company small enough that I can "just" use any software I feel like for whatever reasons I want.

              But I have to feed my family.

              • By worik 2025-09-05 20:20 (1 reply)

                > I can "just" use any software I feel like for whatever reasons I want.

                What could possibly go wrong?

                • By nullify88 2025-09-06 5:43

                  For my day job, installing software / admin access is reserved to those who work in IT / software development. Rest of the business need to go through a vetted software library.

          • By eptcyka 2025-09-06 8:24

            A large company has to deal with many different things, some of the things are intrinsic to the business, some are not. When push comes to shove, business will try to relieve itself of the latter so it can focus on the former.

          • By unethical_ban 2025-09-05 16:06

            >An IT department for a company of that size should have ironed out workflows

            I'm in IT consulting. If most companies could even get the basic best practices of the field implemented, I wouldn't have a job.

          • By itsdrewmiller 2025-09-05 14:14

            You're arguing against a straw man here - no one but you used the term "dealbreaker" or "major" conflict point. It can be true that it is not a dealbreaker but still a downside.

          • By zbrozek 2025-09-05 15:23

            Yeah all of that is a huge pain and fantastic to avoid.

          • By reaperducer 2025-09-05 15:34

            An IT department for a company of that size should have ironed out workflows

            The business world is full of things that "should" be a certain way, but aren't.

            For the technology world, double the number.

            We'd all like to live in some magical imaginary HN "should" world, but none of us do. We all work in companies that are flawed, and sometimes those flaws get in the way of our work.

            If you've never run into this, buy a lottery ticket.

          • By worik 2025-09-05 20:15

            Not just large companies

            OT because not docker

            In the realm of artistic software (thinking Ableton Live and the Adobe suites) licensing hell is a real thing. In my recent experience it sorts the amateurs from the pros, in favour of amateurs

            The time spent learning the closed system includes hours and dollars wrestling licenses. Pain++. Not just the unaffordable price, but time that could be spent creating

            But for an aspiring professional it is the cost of entry. These tools must be mastered (if not paid for, ripping is common) as they have become a key part of the mandated tool chains, to the point of enshittification

            The amateur is able to just get on with it, and produce what they want when they want with a dizzying array of possible tools

        • By weberc2 2025-09-05 18:37 (3 replies)

          I'm of the opinion that large companies should be paying for the software they use regardless of whether it's open source or not, because software isn't free to develop. So assuming you're paying for the software you use, you still have the problem that you are subject to your internal procurement processes. If your internal procurement processes make it really painful to add a new seat, then maybe the processes need to be reformed. Open source only "fixes" the problem insofar as there's no enforcement mechanism, so it makes it really easy for companies to stiff the open source contributors.

          • By akerl_ 2025-09-0519:18

            So, I'm of two thoughts here:

            1. As parallel commenters have pointed out, no. Plenty of open source developers exist who aren't interested in getting paid for their open source projects. You can tell this because some open source projects sell support or have donation links or outright sell their open source software and some do not. This line of thinking seems to come out of some utopian theoretical world where open source developers shouldn't sell their software because that makes them sell-outs but users are expected to pay them anyways.

            2. I do love the idea of large companies paying for open source software they use because it tends to set up all kinds of good incentives for the long term health of software projects. That said, paying open source projects tends to be comically difficult. Large companies are optimized for negotiating enterprise software agreements with a counterparty that is primed to engage in that process. They often don't have a smooth way to like, just feed money into a Donate form, or make a really big Github or Patreon Sponsorship, etc. So even people in large companies that really want to give money to open source devs struggle to do so.

          • By bityard 2025-09-0518:561 reply

            "stiff the open source contributors"

            I'm not sure you realize that "open source" means anyone anywhere is free to use, modify, and redistribute the software in any way they see fit? Maybe you're thinking of freeware or shareware which often _do_ come with exceptions for commercial use?

            But anyway, as an open source contributor, I have never felt I was being "stiffed" just because a company uses some software that I helped write or improve. I contribute back to projects because I find them useful and want to fix the problems that I run into so I don't have to maintain my own local patches, help others avoid the same problems, and because making the software better is how I give back to the open source community.

            • By pferde 2025-09-0612:54

              Several hundred Silicon Valley "techbros" just threw up in their mouths a little. "Doing things without monetizing them? Eww, how pedestrian!"

          • By rlpb 2025-09-0518:44

            > so it makes it really easy for companies to stiff the open source contributors

            I don't think there's any stiffing going on, since the open source contributors knowingly contributed with a license that specifically says that payment isn't required. It is not reasonable for them to take the benefits of doing that but then expect payment anyway.

        • By devjab 2025-09-0513:353 reply

          > You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.

          I don't quite get this argument. How is that different from any piece of software that an employee will want in any sort of enterprise setting? From an IT operations perspective it is true that Docker Desktop on Windows is a little more annoying than something like an Adobe product, because Docker Desktop users need their local user to be part of their local docker security group on their specific machine. Aside from that I would argue that Docker Desktop is by far one of the easiest developer tools (and do note that I said developer tools) to track licenses for.

          In non-enterprise setups I can see why it would be annoying but I suspect that's why it's free for companies with fewer than 250 people and 10 million in revenue.

          • By akerl_ 2025-09-0514:04

            I touched on this in my parallel reply, but to expand on it:

            The usual way that procurement is handled, for the sake of everybody's sanity, is to sign a flat-rate / tiered contract, often with some kind of true-up window. That way the team that's trying to buy software licenses doesn't have their invoices swinging up/down every time headcount or usage patterns shifts, and they don't have to go back to the well every time they need more seats.

            This is a reasonably well-oiled machine, but it does take fuel: setting up a new enterprise agreement like that takes humans and time, both of which are not free. So companies are incentivized to be selective in when they do it. If there's an option that requires negotiating a license deal, and an option that does not, there's decent inertia towards the latter.

            All of which is a long way to say: many large enterprises are "good" at knowing how many of their endpoints are running what software, either by making getting software a paperwork process or by tracking with some kind of endpoint management (though it's noteworthy that there are also large enterprises that suck at endpoint management and have no clue what's running in their fleet). The "hard" part (where "hard" means "requires the business to expend energy they'd rather not") is getting a deal that doesn't involve the license seat counter / invoice details having to flex for each individual.

          • By Aurornis 2025-09-0515:31

            You're right that it's no different than other software, but when you reach the point where the average employee has 20-30 different licenses for all the different things they might use, managing it all becomes a job for multiple people.

            Costs and management grow in an O(n*m) manner, where n is the number of employees and m is the number of licenses per employee. It seems like nothing when you're small and people only need a couple of licenses, but a few years in the aggregate bills are eye-popping, and you realize most people don't use the majority of the licenses they've requested (it really happens).

            Contrast this with what it takes for an engineer to use a common, free tool: They can just use it. No approval process. No extra management steps for anyone. Nothing to argue that you need to use it every year at license audit time. Just run with it.

          • By maigret 2025-09-0513:561 reply

            > How is that different from any piece of software that an employee will want in any sort of enterprise setting?

            Open source is different in exactly that respect: no procurement.

            Finance makes procurement annoying so people are not motivated to go through it.

            • By mgkimsal 2025-09-0517:241 reply

              That assumes that you can, in fact, install that software in the first place. "Developers" sometimes get a bit of a pass, but I've been inside more than a few companies where... no one could install anything at all, regardless of whether there was a cost. Requesting some software would usually get someone with too much time on their hands (who would also complain about being overworked) asking what you need, why you need it, why you didn't try something else, do you really need it, etc. In some scenarios the 'free' works against you, because "there's no support". I was seeing this as late as 2019 at a company - it felt like being back in 1997.

              • By nightpool 2025-09-0517:35

                Cool. Then keep using Docker Desktop if you want to. That's not the situation most of the people in this thread are talking about though.

        • By thinkingtoilet 2025-09-0513:36

          Are you complaining about buying 5 licenses? It seems extremely easy to handle. It feels like sometimes people just want to complain.

        • By almosthere 2025-09-0513:453 reply

            Everything is hard in a large company, and they have hired teams to manage procurement, so this is just you overthinking.

          • By malnourish 2025-09-0514:13

            How often have you dealt with large org procurement processes? I've spent weeks waiting on the one person needed to approve something that cost less than something I could readily buy on my T&E card.

          • By akerl_ 2025-09-0514:05

            What a strangely hostile reply.

          • By dboreham 2025-09-0514:151 reply

            Typically the team they hired is focused on you not procuring things.

            • By akerl_ 2025-09-0514:33

              I think a lot of this boils down to Procurement's good outcome generally being quite different than the good outcome for each team that wants a purchase.

              To draw a parallel: imagine a large open source project with a large userbase. The users interact with the project and a bunch of them have ideas for how to make it better! So they each cut feature requests against the project. The maintainers look at them. Some of the feature requests they'll work on, some of them they'll take well-formed pull requests. But some they'll say "look, we get that this is helpful for you, but we don't think this aligns with the direction we want the project to go".

              A good procurement team realizes that every time the business inks a purchase agreement with a vendor, the company's portfolio has become incrementally more costly. For massive deals, most of that cost is paid in dollars. For cheaper software, the sticker price is low but there's still the cost of having one more plate to juggle for renewals / negotiations / tracking / etc.

              So they're incentivized to be polite but firm and push back on whether there's a way to get the outcome in another way.

              (this isn't to suggest that all or even most procurement teams are good, but there is a kernel of sanity in the concept even though it's often painful for the person who wants to buy something)

      • By ejoso 2025-09-0514:07

        This math sounds really simple until you work for a company that is "profitable" yet constantly turning over every sofa cushion for spare change. Which describes most publicly traded companies.

        It can be quite difficult to get this kind of money for such a nominal tool that has a lot of free competition. Docker was very critical a few years ago, but “why not use podman or containerd or…” makes it harder to stand up for.

      • By wiether 2025-09-0520:211 reply

        > If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license.

        It doesn't quite change your argument, but where have you seen $9/year/dev?

        The only way I see a $9 figure is the $9/month for Docker Pro with a yearly sub, so it's 12*$9=$108/year/dev or $1080/year for your 10 devs team.

        Also it should be noted that Docker Pro is intended for individual professionals, so you don't have collaboration features on private repos and you have to manage each licence individually, which, even for only 10 licences, implies a big overhead.

        If you want to work as a team you need to take the Docker Team licence, at $15/month/dev on a yearly sub, so now you are at $1800/year for your 10 devs team.

        Twenty times more than your initial figure of $90/year. Still, $1800 is not that much in the grand scheme of things, but then you still have to add the usual Atlassian sub, an Office365/GWorkspace sub, an AI sub... You can end up paying $200+/month/dev just in software licences, without counting the overhead of managing them.

        • By nickjj 2025-09-0610:02

          I can't speak for all companies but a few I've dealt with bought licenses exclusively for Docker Desktop access. They're not using private repos since they were invested in private registries through their cloud provider.

      • By dice 2025-09-0513:481 reply

        > Was this a deal breaker for any company?

        It is at the company I currently work for. We moved to Rancher Desktop or Podman (individual choice, both are Apache licensed) and blocked Docker Desktop on IT's device management software. Much easier than going through finance and trying to keep up with licenses.

        • By regularfry 2025-09-0516:381 reply

          Deal breaker for us too, now in my second org where that's been true.

          It's not just that you need a licence now, it's that even if we took it to procurement, until it actually got done we'd be at risk of them turning up with a list of IP addresses and saying "are you going to pay for all of these installs, then?". It's just a stupid position to get into. The Docker of today might not have a record of doing that, but I wouldn't rule out them getting bought by someone like Oracle who absolutely, definitely would.

          • By SushiMon 2025-09-0517:431 reply

            Were there any missing/worse functional capabilities that drove you over to Podman/alternatives? Or just the licensing / pricing?

            • By regularfry 2025-09-0611:25

              No, it was entirely a business decision in both cases.

      • By orochimaaru 2025-09-0514:54

        If you're at an enterprise with a large engineering team that isn't a software company, you are a cost center. So anything related to developer tools is rarely funded. It will mostly be: use the free stuff and suck it up.

        Either that or you have a massive process to acquire said licenses, with multiple reporting requirements. So your manager doesn't need the headache and says just use the free stuff and move on.

        I used to use docker. I use podman now. Are there teams in my enterprise who have docker licenses - maybe. But tracking them down and dealing with the process of adding myself to that “list” isn’t worth the trouble.

      • By troyvit 2025-09-0515:36

        > I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.

        It is for now, but I can't think of a player as large as Docker that hasn't pulled the rug out from under deals like this. And for good reason, that deal is probably a loss leader and if they want to continue they need to convert those free customers into paying.

      • By codesmash 2025-09-0514:59

        The problem is not the cost. It's complexity. From a buyer perspective literally fighting with the procurement team is a nightmare.

        And usually the need is coming from someone below C-level. So you have to: convince your manager and their manager; convince the procurement team it has to be in the budget (and it's usually much easier to convince them to pay for a dinner); then deal with the procurement team itself; then go through the vendor review process (or at least chase its execution).

        This is reality in all big companies that this rule applies to. It's at least a quarter project.

        Once I tried to buy a $5k/yr software license. The Sidekiq founder told me (after two months of back and forth) that he was done and I had to pay by CC (which I didn't have as a miserable team lead).

      • By taormina 2025-09-0517:161 reply

        Yep! What startup has the goal of making less than $10 million in annual revenue? That sentence was absolutely a deal breaker for the CEO and CTO of our last company.

        And since when has Docker Desktop "just worked"?

        • By nickjj 2025-09-0610:13

          I've been using Docker since before Docker Desktop.

          Never really had any major problems with Docker Desktop on Windows. I run it and it allows me to run containers through WSL 2. Volume performance is near native Linux speeds and the software itself doesn't crash, even on my 10 year old machine.

          I also use it on macOS on a work laptop for a lot of different projects and it works. There are more issues around volume mount performance here, but it's not unusably slow. Also, given the volume performance is mostly due to OS-level file system things, I'm skeptical Podman would resolve that. I remember trying Colima for something and it made no difference there.

      • By DerArzt 2025-09-0517:14

        I work at a Fortune 250 and the cost of the licence was the given reason for moving to podman for the whole org.

      • By firesteelrain 2025-09-0512:53

        We only run Podman Desktop, if anything, because for large companies Docker Desktop is cost prohibitive. We also found that most people don't need *Desktop at all. The command line works fine.

      • By jandrese 2025-09-0517:36

        > Was this a deal breaker for any company?

        It's not the money, it's the bureaucracy. You can't just buy software, you need a justification, a review board meeting, marketplace survey with explanations of why this particular vendor was chosen over others with similar products, sign off from the management chain, yearly re-reviews for the support contract, etc...

        And then you need to work with the vendor to do whatever licensing hoops they need to do to make the software work in an offline environment that will never see the Internet, something that more often than not blows the minds of smaller vendors these days. Half the time they only think in the cloud and situations like this seem like they come from Mars.

        The actual cost of the product is almost nothing compared to the cost of justifying its purchase. It can be cheaper to hire a full time engineer to maintain the open source solutions just to avoid these headaches. But then of course you get pushback from someone in management that goes "we want a support contract and a paid vendor because that's best practices". You just can't win sometimes.

      • By t43562 2025-09-0513:16

        I don't particularly care if it's worth it or not. I don't need to do it. Getting money for things is not easy in all companies.

      • By k4rli 2025-09-0513:565 reply

        Docker Desktop is also (imo) useless and helps keep you ignorant.

        Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.

        All the same stuff can easily be done from cli.

        • By com2kid 2025-09-0517:05

          > Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.

          Because they just want their software package to run and they have been given some magic docker incantation that, if they are lucky, actually launches everything correctly.

          The first time I used Docker I had so many damn issues getting anything to work I was put off of it for a long time. Heck even now I am having issues getting GPU pass through working, but only for certain containers, other containers it is working fine for. No idea what I am even supposed to do about that particular bit of joy in my life.

          > All the same stuff can easily be done from cli.

          If a piece of technology is being forced down users' throats, they just want it to work and to stay out of their way so they can get back to doing their actual job.

        • By johnmaguire 2025-09-0515:321 reply

          I don't believe it's possible to run Docker on macOS without Docker Desktop (at least not without something like lima.) AFAIUI, Docker Desktop contains not just the GUI, but also the hypervisor layer. Is my understanding mistaken?

          • By cduzz 2025-09-0516:542 reply

            It's pretty easy to run docker on macos -- colima[1] is just a brew command away...

            It runs qemu under the hood if you want to run x86 (or sparc or mips!) instead of arm on a newer mac.

            [1]https://formulae.brew.sh/formula/colima
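
            For instance (a minimal sketch; the profile name is illustrative, flags as per colima's docs):

            brew install colima docker                  # docker here is just the CLI client
            colima start                                # boots a lightweight Linux VM, wires up the docker socket
            docker run --rm hello-world                 # runs against the dockerd inside the VM
            colima start --profile x86 --arch x86_64    # separate profile emulating x86_64 via qemu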

            • By mdaniel 2025-09-0617:32

              As hair splitting, one can choose to use qemu or Virtualization.framework https://lima-vm.io/docs/config/vmtype/vz/ (I'm aware that's a link to Lima docs but ... <https://github.com/abiosoft/colima/blob/v0.8.4/config/config...>)

            • By lmm 2025-09-061:591 reply

              > colima[1] is just a brew command away...

              Which would be great if it worked reliably, or had any documentation at all for when it breaks. But it doesn't and it doesn't.

              • By cduzz 2025-09-0616:23

                First, I guess I'll just invoke Sturgeon's law[1] -- almost all software, especially if you don't really understand it, is crap, and probably the software you understand is also crap, you're just used to it. Good software is pretty tricky to make.

                But second -- I use colima lots, on my home macs and my work macs, and it mostly just works. The profiles stuff is kinda annoying and I find myself accidentally running arm when I want x86, or other tedious config issues crop up. But it actually has been easier to live with than docker desktop where I'd run out of space and things would fall apart.

                Docker on MacOS is broadly going to work poorly relative to Docker on Linux, just from having to run the docker stuff in a Linux VM that's hiding somewhere behind the scenes.

                If you find too much friction with any of these, probably it's easier to just run a linux vm on the mac and interact with docker in the 'native' environment. I've found UTM to be quite a bit easier to live with than virtualbox.

                [1] https://en.wikipedia.org/wiki/Sturgeon%27s_law

        • By dakiol 2025-09-0516:49

          I cannot run docker in macos without docker desktop. I use the cli to manage images, containers, and everything else.

        • By j45 2025-09-0513:58

          Not everyone uses software the same way.

          Not everyone starts out with a piece of software the same way, or in the one way we happen to see.

      • By racecar789 2025-09-0618:26

        > you'd end up paying $9 a year per developer for the license

        Correction: Docker Desktop is $9/month (not $9/year).

      • By lucyjojo 2025-09-0514:201 reply

        for reference, a dev in Japan will be paid around $50,000. most of the world will probably be in the 10k-50k range except a few places (switzerland, luxembourg, usa?).

        atlassian and google and okta and ghe and this and that (claude code?). that eventually starts to stack up.

        • By throwaway0236 2025-09-0520:54

          I think you are underestimating the salaries in other "developed" countries, but you are right that US salaries are much higher than any other country (especially in Silicon Valley)

          You have a valid point in that many HN commentators seem to live in a bubble where spending thousands of dollars on a developer for "convenience" is seen as a no-brainer. They often work in companies that don't make a profit, but are funded by huge VC investments. I don't blame them, as it is a valid choice given the circumstances. If you have the money, why not? But they may start thinking differently if the flow of VC money slows down.

          It's similar to how some wealthy people buy a private jet. Their time is valuable, and the cost seems justified (at least if you don’t care about the environmental impact).

          I believe that frugality is actually the default mode of business, but many companies in SV are protected from the consequences by the VCs.

      • By maxprimer 2025-09-0518:49

        Even large companies with thousands of developers have budgets to manage, and oftentimes when the CTO/CIO sees free as an option, that's all that matters.

      • By arunc 2025-09-0515:50

        $90 is also like 1.5 hours of work that I would've spent debugging podman anyway. And I've spent more than a few hours every time podman breaks, to be honest.

      • By tecleandor 2025-09-0515:231 reply

        I've seen a weird thing on their service agreement:

        Use Restrictions. Customer and its Users may not and may not allow any third party to: [...] 10. Access the Service for the purpose of developing or operating products or services intended to be offered to third parties in competition with the Services[...]

        Emphasis mine on 'operating'.

        So I cannot use Docker Desktop to operate, for example: ECR, GCR or Harbor?

        • By chuckadams 2025-09-0518:16

          I think the Service in question is services like Docker Hub that they don't let you use as the infrastructure for your competing site.

      • By fkyoureadthedoc 2025-09-0513:40

        At my job going through procurement for something like Docker Desktop when there are free alternatives is not worth it.

        It takes forever, so long that I'll forget that I asked for something. Then later when they do get around to it, they'll take up more of my time than it's worth on documentation, meetings, and other bullshit (well to me it's bullshit, I'm sure they have their reasons). Then when they are finally convinced that yes a Webstorm license is acceptable, they'll spend another inordinate amount of time trying to negotiate some deal with Jetbrains. Meanwhile I gave up 6 months ago and have been paying the $5 a month myself.

      • By bongodongobob 2025-09-0514:30

        I work for a $2 billion/yr company and we need three levels of approval for a Visio license. I've never been at a large corp where you could just order shit like that. You'll have to fill out forms, have a few meetings about it, produce business justification spreadsheets, etc., then get told it's not in the budget.

      • By smileysteve 2025-09-0514:37

        To bring up AI, and the eventual un-subsidizing of costs: if $9 a year is too much for Docker, then even the $20/mo (June) price tag is too high for AI, much less $200 (August), or $2000? (post-subsidizing)

      • By papageek 2025-09-0520:07

        You need a compliance department and attorneys to look over licenses and agreements. It's a real hassle and not really related to cost of the license itself.

      • By tclancy 2025-09-0520:01

        Yes. I worked for a company with a few thousand developers and we swapped away from Docker one week with almost no warning. It was a memorable experience.

      • By pmontra 2025-09-0514:073 reply

        I think I never saw somebody using Docker Desktop. I saw containers being run from the command line everywhere, but maybe I did not notice. No licenses for the command line tools, right?

        • By akerl_ 2025-09-0514:10

          On a Mac or Windows machine, you generally need something to get you a Linux environment on which to run the containers.

          You can run your own VM via any number of tools, or you can use WSL now on Windows, etc etc. But Docker Desktop was one of the first push-button ways to say "I have a Mac and I want to run Docker containers and I don't want to have to moonlight as a VM gardener to do it."

        • By chuckadams 2025-09-0518:27

          The command-line tools on a Mac usually come from Docker Desktop. The homebrew version of docker is bare-bones and requires the virtualbox-based docker-machine package, whereas Desktop is using Apple's Virtualization Framework. Nobody runs the homebrew version as far as I can tell.

          On Windows, you can use the docker that's built into the default WSL2 image (ubuntu), and Docker Desktop will use it if available; otherwise it uses its own backend (probably also Hyper-V based).

          I use Orbstack myself, but that's also a paid product.

        • By throwaway0236 2025-09-0521:11

          I sometimes use Docker Desktop on my Mac to view logs. It's more convenient.

      • By xyzzy_plugh 2025-09-0512:54

        It's a deal breaker because it was previously free to use, and frankly it's not worth $1 a month given there are better paid alternatives, let alone better free alternatives.

      • By m463 2025-09-0519:44

        I hated the docker desktop telemetry. I remember it happened in the macos installer even before you got any dialog box

      • By smokel 2025-09-0514:002 reply

        Reading through the comments here, it looks like there is an opportunity for a startup to streamline software licensing. Just a free tip.

        • By eehoo 2025-09-0515:22

          There are already software licensing providers such as 10Duke that do exactly that. Pretty much all of the licensing related problems mentioned here would either disappear or at the very least get dramatically simpler if more companies used 10Duke Enterprise as their licensing solution to issue and manage licenses. There is a better way, but sadly most businesses overlook licensing.

          (the company I work for uses them, our licensing used to be a mess similar to what's described here)

        • By adolph 2025-09-0515:36

          Yeah, at a big enterprise the larger challenge, ahead of even payment, is the legal arrangements. They typically sign some "master license" agreement with an aggregator like CDW. Those places don't seem well set up for software redistribution though. Setting up a Steam or AppStore clone for various utility-ware would go a long way toward enabling people to access the software an enterprise doesn't mind paying for, if the legal and financial stuff weren't applying friction.

      • By zer00eyz 2025-09-0518:131 reply

        > you'd end up paying $9 a year per developer for the license

        It's only 9 bucks a year, it's only 5 bucks a month, it's less than a dollar a day.

        Docker, IDE, ticketing system, GitHub, Jira, Salesforce, email, office suite, Figma.... all of a sudden you're spending 1000 bucks a month per staff member for a small 10 person office.

        Meanwhile AWS is charging you .01xxxx for bandwidth, disk space, CPU time, S3 buckets, databases. All so Tencent-based AI clients from China can hammer your hardware and run up your bill....

        The rent seeking has gotten out of hand.

        • By j45 2025-09-0519:14

          The loaded cost is truly something else, and best understood by people who have had to find a way to pay for it all, or who paid for it all for others.

          The majority of businesses in the world, (and the majority of jobs) are created and delivered by small business, not big.

          And then there are the issues when a service goes down and takes everyone else down with it.

      • By patmcc 2025-09-0517:34

        It's not the cost, it's the headache. Do I need to worry about setting up SSO, do I need to work with procurement, do I need to do something in our SOC2 audit, do I need to get it approved as an allowed tool, etc.

        Whether it's $100/year or $10k/year it's all the same headache. Yes, this is dumb, but it's how the process works at a lot of companies.

        Whereas if it's a free tool, all of that just magically goes away. Yes, this is also dumb.

      • By bastardoperator 2025-09-0517:34

        Docker has persuaded several big shops to purchase site licenses.

      • By phaedrix 2025-09-0523:291 reply

        You are off by a factor of 12.

        It's $9 per month not year.

        • By nickjj 2025-09-0610:17

          Thanks, I can't believe I missed that!

          $90 vs $1,080 would be the difference annually.

      • By debarshri 2025-09-0513:511 reply

        You can always negotiate the price

        • By johannes1234321 2025-09-0522:51

          In other words: you can always make the buying process more complex and expensive.

          For some products that might be worth it. For others, not.

          But whatever the outcome: you still got to track license compliance afterwards and renew licenses. (Which also works better when tracking internal usage as you know your need)

      • By secondcoming 2025-09-0520:21

        Yes. Our company no longer allows use of Docker Desktop

      • By flerchin 2025-09-0512:59

        "officially supported" is not a value.

        It's not the price, it's that there is one. One penny would be too much, because it prevents composability of dev workstations.

    • By Izmaki 2025-09-0512:374 reply

      None of your companies need to worry about licenses. Docker ENGINE is free and open source. Docker DESKTOP is a software suite that requires you to purchase a license to use in a company.

      But Docker Engine, the core component which works on Linux, Mac and Windows through WSL2, that is completely and 1000% free to use.

      • By xhrpost 2025-09-0512:552 reply

        From the official docs:

        >This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available for Windows, macOS, and Linux, through Docker Desktop.

        https://docs.docker.com/engine/install/

        I'm not an expert but everything I read online says that Docker runs on Linux so with Mac you need a virtual environment like Docker Desktop, Colima, or Podman to run it.

        • By LelouBil 2025-09-0512:582 reply

          Docker Desktop will run a virtual machine for you. But you can simply install Docker Engine in WSL, or in a VM on a Mac, exactly like you would on Linux (you maybe give up automatic port forwarding from the VM to your host).
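
          For instance, inside an Ubuntu WSL2 distro (a minimal sketch using Docker's convenience script; the apt-repository route from the official docs works just as well):

          curl -fsSL https://get.docker.com -o get-docker.sh
          sudo sh get-docker.sh            # installs Docker Engine
          sudo usermod -aG docker $USER    # optional: run docker without sudo (re-login required)
          sudo service docker start        # or let systemd handle it if your WSL distro enables it
          docker run --rm hello-world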

          • By rovr138 2025-09-0515:161 reply

            > But you can simply install docker engine in wsl or in a VM on mac exactly like you would on linux (you give up maybe automatic port forwarding from the VM to your host)

            and sharing files from the host, ide integration, etc.

            Not that it can't be done. But doing it is not just 'run it'. Now you manage a VM, change your workflow, etc.

            • By mpyne 2025-09-0523:14

              Of course, but that's the value-add of Docker Desktop. But you don't have to tie yourself to it, or even if you do use it for a bit to get going faster, you have a migration path open to doing it yourself should you need it.

          • By linuxftw 2025-09-0513:141 reply

            This. I run docker in WSL. I also do 100% of my development in WSL (for work, anyway). Windows is basically just my web browser.

            • By CuriouslyC 2025-09-0513:343 reply

              Ironic username. As a die-hard, WSL ain't bad though. I just can't deal with an OS that automatically quarantines bittorrent clients, decides to override local administrator policies via Windows updates, and pops up ad notifications.

              • By mmcnl 2025-09-0519:18

                I personally use Windows + WSL2 and for work use macOS. I prefer Windows + WSL2 by a long shot. It just "works". macOS never "just works" for me. Colima is fine, but it requires a static memory allocation for the VM; it doesn't have the level of polish that WSL2 has. Brew is awful compared to apt (which you get with WSL2, because it's just Linux).

                And then there's the windowing system of macOS, which feels like it's straight from the 90s. "System tray" icons that accumulate over time and are distracting, awful window management with clunky animations, the near useless dock (clicking on VS Code shows all my 6 IDEs, why?). Windows and Linux are much more modern in that regard.

                The Mac hardware is amazing, well worth its price, but the OS feels like it's from a decade ago.

              • By croon 2025-09-0513:56

                +1

                I use WSL for work because we have no linux client options. It's generally fine, but both forced windows update reboots as well as seemingly random wsl reboots (assuming because of some component update?) can really bite you if you're in the middle of something.

              • By linuxftw 2025-09-0513:511 reply

                All my personal machines run linux. At work my choices are Mac or Windows. If Macs were still x86_64 I might choose that and run a VM, but I have no interest in learning the pitfalls of cross arch emulation or dealing with arm64 linux distro for a development machine.

                • By chuckadams 2025-09-0518:35

                  I never notice the difference between arm64 and x86 environments, since I'm flipping between them all the time just because the arm boxes are so much cheaper. The only time it matters to me is building containers, and then it's just a matter of passing `--platform=linux/amd64,linux/arm64` to `docker buildx`.

                  If you're building really arch-specific stuff, then I could see not wanting to go there, but Rosetta support is pretty much seamless. It's just slower.
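
                  For the record, a multi-arch build along those lines might look like this (a sketch; the builder name and registry/image names are placeholders):

                  docker buildx create --name multi --use    # one-time: a builder that can target several platforms
                  docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/myapp:latest --push .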

        • By iainmerrick 2025-09-0513:371 reply

          If you're already paying for Macs, is paying for Docker Desktop really a big problem?

          • By chrisweekly 2025-09-0515:462 reply

            I think the point is that Docker Desktop for macOS is bad.

            • By chuckadams 2025-09-0518:40

              It's not all that bad these days ever since they added virtio support. Orbstack is well worth paying for as an alternative, but that won't solve anyone's procurement headaches either.

            • By iainmerrick 2025-09-067:27

              Oh! I wasn’t trying to make a big point except that paying for software isn’t necessarily a bad thing, and if you’re already invested in Macs you’re presumably OK with paying good money for good products.

              Having used Docker Desktop on a Mac myself, it seems... fine? It does the job well enough, and it’s part of the development rather than production flow so it doesn’t need to be perfect, just unobtrusive.

      • By matsemann 2025-09-0512:414 reply

        If you've installed Docker on Windows you've most likely done that by using Docker Desktop, though.

      • By t43562 2025-09-0512:382 reply

        Those companies use docker desktop on their dev's machines.

        • By connicpu 2025-09-0512:421 reply

          There's no need if all your devs use desktop Linux as their primary devices like we do where I work :)

          • By t43562 2025-09-0512:444 reply

            On Mac we just switched to podman and didn't have anything to worry about.

            • By krferriter 2025-09-0516:59

              I am using MacOS and like a year ago I uninstalled docker and docker desktop, installed podman and podman-compose, and have changed literally nothing else about how I use containers and docker image building/running locally. It was a drop-in replacement for me.

            • By nickthegreek 2025-09-0513:225 reply

              Anyone have opinions on OrbStack for mac over these other alternatives?

              • By elliottr1234 2025-09-0514:191 reply

                It's well worth it; it's much more than a GUI. It supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain name support with SSL (mycontainer.orb), a debug shell that gives you the ability to install packages not available in the image by default, much better and automated volume mounting, viewing every container in Finder, the ability to query logs, and an amazing UI. Plus it is much, much faster and more resource efficient.

                The above features really do make it worth it, especially when using existing services that have complicated failure logs or are resource intensive (redis, postgres, livekit, etc.), or when you have a lot of ports in use and want to call your service without having to remember port numbers or wrestle with complicated docker network configuration.

                Check it out https://docs.orbstack.dev/

                • By chuckadams 2025-09-0518:45

                  Docker Desktop also supports a local kubernetes stack, but it takes several minutes to start up, and I think in the end it's just minikube? Haven't tried Orbstack's k8s stack myself since I'm good with k3d. I did have cause though to spin up a VM a while back, and that was buttah.

              • By fernandotakai 2025-09-0516:55

                orbstack is absolutely amazing. not only the docker side works much better than docker desktop but their lightweight linux vms are just beyond great.

                i've been using an archlinux vm for everything development over the past year and a half and i couldn't be happier.

              • By johncoltrane 2025-09-0513:43

                I tried all the DD alternatives (on macOS) and I think OrbStack is the easiest to use and least invasive of them all.

                But it is not cross-platform, so we settled on Podman instead, which came (distant) second in my tests. The UI is horrible, IMO but hey… compromises.

                I use OrbStack for my personal stuff, though.

              • By veidr 2025-09-0514:28

                Yes, Orbstack is significantly better than Docker Desktop, and probably also better than any other Docker replacement out there right now (for macOS), if you aren't bothered by the (reasonable) pricing.

                It costs about $100/year per seat for commercial use, IIRC. But it is significantly faster than Docker Desktop at literally everything, has a way better UI, and a bunch of QoL features that are nice. Plus Linux virtualization that is both better and (repeating on this theme) significantly more performant than Parallels or VMWare Fusion or UTM.

              • By karlshea 2025-09-0513:46

                Been using it for a year or so now and it’s amazing. Noticeably faster than DD and the UI isn’t Electron or whatever’s going on there.

            • By lmm 2025-09-062:00

              Really? We switched 6+ months ago and I'm still dealing with all the little broken corners that keep cropping up.

            • By allovertheworld 2025-09-0516:061 reply

              Can't imagine being forced to use a Linux PC for work lmao

              • By connicpu 2025-09-0518:22

                I happily embraced it, to each their own I guess. There are folks who mainly work on their mac/windows laptops and just ssh into their workstation, but IT gives us way more freedom (full sudo access) on Linux so I can customize a lot more which makes me a lot happier.

        • By Almondsetat 2025-09-0512:40

          That's their completely optional prerogative

      • By firesteelrain 2025-09-0512:531 reply

        Podman is inside the Ubuntu WSL image. No need for docker at all

        • By kordlessagain 2025-09-0513:361 reply

          This is not correct, at least when looking at my screen:

          (base) kord@DESKTOP-QPLEI6S:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/37c7f28..blah..blah$ podman

          Command 'podman' not found, but can be installed with:

          sudo apt install podman

          • By firesteelrain 2025-09-0513:40

            Hmm maybe it’s what our admins provided to us then. I actually have never run it at home only airgapped

    • By goldman7911 2025-09-0514:183 reply

      You only have to worry about licences if you use Docker DESKTOP. Why not use RANCHER Desktop?

      I have been using it for years. Tested it on Win11 and Linux Mint. I can even have a local Kubernetes.

      • By lmm 2025-09-062:02

        Low-quality UX (e.g. you have to switch tabs and switch back if you ever want to see the current state of your containers, because it loads it once when you open the tab and never updates, and doesn't even give you a button to refresh it), lack of documentation, behavioural changes that happen silently (e.g. it autoupdates which changes the VM hostname, so the thing that was working yesterday doesn't work today and you have no idea why) and general flakiness.

      • By mpawelski 2025-09-069:18

        I concur. My company is using Rancher Desktop on Windows machines. No problems, as long as you don't care about the GUI and just use the CLI commands ("docker", "docker compose").

      • By seabrookmx 2025-09-061:15

        Why not use Docker Engine/CE on Linux so you don't have to run a VM?

    • By xedrac 2025-09-0519:28

      I vastly prefer Podman over Docker. No user/group fuss, no security concerns over a root process. No having to send data to a daemon.
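
      A quick way to see the rootless model for yourself (a sketch; the --format field follows podman info's Go-template output):

      podman info --format '{{.Host.Security.Rootless}}'   # prints true when running rootless
      podman unshare id -u                                 # 0 inside the user namespace, while on the host you keep your normal uid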

    • By anakaine 2025-09-0519:59

      On a few machines now I've had Podman's Windows uninstaller fail to remove all its components and cause errors on startup due to podman not being found. Even manually removing leftover services and startup items didn't fix the issue. It's a constant source of annoyance.

    • By ac130kz 2025-09-0518:161 reply

      It works great until you need that one option from Docker Compose that is missing in Podman Compose (which is written in Python, for whatever reason, yeah...).
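
      One common workaround (a sketch, assuming a systemd user session and the docker CLI with its compose plugin installed) is to skip podman-compose entirely and point the real compose at Podman's Docker-compatible API socket:

      systemctl --user enable --now podman.socket                      # expose the Docker-compatible API
      export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
      docker compose up -d                                             # genuine compose, running against Podman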

  • By xrd 2025-09-0512:255 reply

    I love podman, and, like others have said here, it does not always work with every container.

    I often try to run something using podman, then find strange errors, then switch back to docker. Typically this is with some large container, like gitlab, which probably relies on the entirety of the history of docker and its quirks. When I build something myself, most of the time I can get it working under podman.

    This situation where any random container does not work has forced me to spin up a VM under incus and run certain troublesome containers inside that. This isn't optimal, but keeps my sanity. I know incus now permits running docker containers and I wonder if you can swap in podman as a replacement. If I could run both at the same time, that would be magical and solve a lot of problems.

    There definitely is no consistency regarding GPU access in the podman and docker commands and that is frustrating.

    But, all in all, I would say I do prefer podman over docker and this article is worth reading. Rootless is a big deal.

    • By nunez 2025-09-0514:472 reply

      I presume that the bulk of your issues are with container images that start their PID 1s as root. Podman is rootless by default, so this causes problems.

      What you can do if you don't want to use Docker and don't want to maintain these images yourself is have two Podman machines running: one in rootful mode and another in rootless mode. You can then use the `--connection` global flag to specify the machine you want your container to run in. Podman can also create those VMs for you if you want it to (I use lima and spin them up myself). I recommend using --cap-drop/--cap-add to limit these containers' capabilities out of caution.

      Podman Desktop also installs a Docker compatibility layer to smooth over these incompatibilities.
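
      Roughly like this (a sketch; the machine names are illustrative, and it assumes a Podman version that can run two machines at once):

      podman machine init --rootful rootful-vm     # VM whose default connection runs as root
      podman machine init rootless-vm              # separate rootless VM
      podman machine start rootful-vm
      podman machine start rootless-vm
      podman system connection list                # shows the generated connection names
      podman --connection rootful-vm run -d --name web nginx
      podman --connection rootless-vm run --rm alpine id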

      • By xrd 2025-09-0515:14

        This is terrific advice and I would happily upvote a blog post on this! I'll look into exactly this.

      • By bsder 2025-09-0521:02

        Is there a blog post on this somewhere? I'd really love to read more about it beyond just the official documentation.

    • By gorjusborg 2025-09-0512:54

      > I love podman, and, like others have said here, it does not always work with every container.

      Which is probably one of the motivations for the blog post. Compatibility will only be there once a large enough share of users use podman that it becomes something that is checked before publishing.

    • By firesteelrain 2025-09-0512:541 reply

      Weird, we run GitLab server and runners all on podman. Honestly I wish we would switch to putting the runners in k8s. But it works well. We use Traefik.

      • By xrd 2025-09-0515:081 reply

        Yeah, I had it running using podman, but then had some weird container restarts. I switched back to docker and those all went away. I am sure the solution is me learning more and troubleshooting podman, but I just didn't spend the time, and things are running well in an isolated VM under docker.

        That's good to know it works well for you, because I would prefer not to use docker.

        • By dathinab 2025-09-0516:19

          in my experience (at least rootless) podman does enforce resource limits much more strictly

          we had some similar issues, and they were due to containers running out of resources (mainly RAM/memory, by a lot, but only for a short time). It happens that rootless podman correctly detected and enforced this, while non-rootless docker (in that case on a Mac dev laptop) didn't detect these resource spikes, and hence things "happened to work" even though they shouldn't have.
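
          You can check the enforcement yourself (a sketch, assuming cgroups v2; --memory is a standard podman/docker flag):

          podman run --rm --memory=256m alpine cat /sys/fs/cgroup/memory.max   # prints 268435456 (256 MiB)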

    • By k_roy 2025-09-0514:49

      I use a lot of `buildx` stuff. It ostensibly works in podman, but in practice, I haven't had much luck
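
      If the goal is just multi-arch images, podman can also do that natively with a manifest list instead of buildx (a sketch; the registry/image name is a placeholder):

      podman build --platform linux/amd64,linux/arm64 --manifest registry.example.com/myapp:latest .
      podman manifest push --all registry.example.com/myapp:latest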
