10 years ago, I made a small reverse proxy project public. Fast forward to today and Traefik has 3.4B downloads and 56k GitHub stars. See how it unfolded.
I use Traefik for local development on a daily basis, where I have to run a double-digit number of HTTPS services. It works, but it was a pain to set up. The documentation sucks and the config is confusing AF. I would never recommend this to anyone. If I have to reinstall my computer one day, Traefik will not be welcomed back.
Traefik maintainer here.
Documentation quality has been a common complaint. Previously, we only provided reference documentation and relied on the community to create tutorials and guides.
Based on feedback like yours, we've completed a documentation rewrite. Have you had a chance to review the new version? Your feedback is taken very seriously, so we'd greatly value your thoughts on these improvements.
OMG yes. I want to like Traefik, but the thought of having to set it up again is not something I look forward to. Why can't it just work out of the box?
Caddy is probably my new favorite. It works out of the box, it's super low-resource, it handles a ton of traffic, and the docs are decent.
Oh, the static/dynamic split is brutal (and I believe some options have been moved around)...
You used to reference routers, middlewares and services simply by name, but that changed to provider-scoped names (e.g. service1@file, middleware@docker).
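For anyone who hasn't hit this yet, the scoping looks roughly like this in a dynamic file config (the names here are made up; the `@file`/`@docker` suffixes are the actual Traefik v2+ convention for cross-provider references):

```yaml
# dynamic.yml, loaded by the file provider
http:
  routers:
    web:
      rule: "Host(`app.example.com`)"
      # service declared in this same file -> @file scope
      service: app-svc@file
      middlewares:
        # middleware declared via Docker labels -> @docker scope
        - ratelimit@docker
  services:
    app-svc:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8080"
```

Within the same provider the suffix can be omitted; it's only when referencing objects across providers that the `@provider` part becomes mandatory, which is exactly where it surprises people.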
I kept bumping into those edge cases (custom SSL cert setup was really confusing), but thanks to ChatGPT, I at least ended up with workable solutions.
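For reference, the custom-certificate setup that tripped me up lives in the *dynamic* config, not the static one, and ends up looking something like this (file paths are placeholders):

```yaml
# dynamic.yml -- user-supplied certificates go in dynamic config,
# even though they feel like a startup-time concern
tls:
  certificates:
    - certFile: /certs/local.example.crt
      keyFile: /certs/local.example.key
  stores:
    default:
      # fallback cert served when no SNI match is found
      defaultCertificate:
        certFile: /certs/local.example.crt
        keyFile: /certs/local.example.key
```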
I really like how it can be configured just as easily from Docker labels (from Portainer, for example) as from a big production Consul cluster. But yeah, the docs need a lot of work: it's often difficult to figure out the format, examples are lacking, and things that need to be enabled together have their docs in different places.
Your point about having to enable two different things at the same time in two different places is a concise way of expressing my extreme frustration with that project.
I burned the better part of a Saturday trying to figure out why a relatively straightforward configuration wasn't applying, and it turned out that half of the configuration I was trying to apply has to be set in the static manner, not the dynamic manner.
The documentation doesn't really spell this out, and after quite a bit of frustrated googling I found a few other people complaining about effectively the same problem. It would only take a few lines of code to spit out something along the lines of "hey, you're trying to configure $thing in an inappropriate way. Did you mean to configure $thing over in $thisLocation?"... But nope, that would make things so much simpler and easier to use, and would probably cut into their support contract sales.
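For concreteness, this is the split that bites people: entry points and providers only work in the static config, while routers, middlewares and services only work in the dynamic config. A minimal sketch (filenames and names are illustrative):

```yaml
# traefik.yml (STATIC -- read once at startup)
entryPoints:
  websecure:
    address: ":443"
providers:
  file:
    filename: /etc/traefik/dynamic.yml
---
# /etc/traefik/dynamic.yml (DYNAMIC -- watched and hot-reloaded)
# A router placed in traefik.yml instead of here simply never applies.
http:
  routers:
    app:
      rule: "Host(`app.localhost`)"
      entryPoints:
        - websecure
      service: app
  services:
    app:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3000"
```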
Now I basically just stick with nginx, because its documentation isn't crap and useful, applicable examples are all over the Internet.
Yeah, kinda have to agree. I like Traefik fine, but getting mTLS working with it was a serious pain and the docs for doing so were _terrible_; I had to keep searching around and piecing together bits from various third-party blogs. Coming from HAProxy, where the documentation is _so_ _much_ better and things like mTLS are vastly easier, it was not a fun experience, but we did finally get Traefik to work as we needed.
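For anyone attempting the same, the pieces we eventually stitched together amount to a TLS options set in the dynamic config, referenced by name from a router (the CA path and names below are placeholders):

```yaml
# dynamic config: a TLS options set that requires client certificates
tls:
  options:
    require-mtls:
      clientAuth:
        caFiles:
          - /certs/clients-ca.pem
        clientAuthType: RequireAndVerifyClientCert

http:
  routers:
    secure-api:
      rule: "Host(`api.example.com`)"
      service: api
      tls:
        # attach the options set by name; forgetting this line
        # silently gives you plain TLS with no client-cert check
        options: require-mtls
  services:
    api:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:9000"
```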
I wonder how you are using it. I am mainly using Traefik with docker compose labels and it was not that hard to set up once you understand the concepts of routers, middlewares and services. I would use it for any homelab that has to host more than one service.
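As a sketch of that setup, a typical compose service ends up with labels like these (hostname, port and the basic-auth hash are placeholders; this assumes `exposedByDefault: false` in the static config, hence the explicit `traefik.enable`):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      # router: match a hostname, use the TLS entry point
      - "traefik.http.routers.whoami.rule=Host(`whoami.localhost`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls=true"
      # middleware: declared and attached entirely via labels
      # ($$ escapes $ in compose files; hash is a placeholder)
      - "traefik.http.middlewares.whoami-auth.basicauth.users=admin:$$apr1$$placeholder"
      - "traefik.http.routers.whoami.middlewares=whoami-auth"
      # service: which container port Traefik should forward to
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```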
I also recently started playing around with a web UI layer that generates Traefik JSON config. It's currently quite basic, since it was initially made to provide limited-time access to development instances, but in theory it could manage the most important aspects of the proxy config and replace something like nginx-proxy-manager. https://github.com/Janhouse/traefik-proxy-admin
I was once tasked with looking into using Traefik and yeah the documentation at the time was so bad I couldn't figure it out. Ended up using Envoy IIRC.
As a self-hosting noob, I never got traefik to work properly, then caddy just worked and has been working since.
Yeah, same here. Caddy is so well designed. I hated Traefik from the start, and even though it works now I still hate it. The moment I used Caddy, everything was clear and just worked. Basically what nginx used to be 15 years ago, but nginx didn't really keep up with the times and they care more about the commercial thing now.
Yeah I deal with it because it's part of the ansible matrix playbook. But I hate it, I always have issues with it. Complex configs, things not quite working right.
Nginx which they used before works much better. And these days I use caddy on everything else. That really shines.
I have the completely opposite experience. To me, Traefik is the easiest thing on the market to work with. It should be even easier to set up now using agentic AI.
Congratulations on the 10-year anniversary. Having used Traefik for multiple years in a large micro-service setup (200+ services), I must say my experience has been mixed. If your requirements match the very opinionated way Traefik does things, it's great. But as soon as they don't, you're going to have a hard time getting things to work. That's why, shortly after migrating to Traefik, I started maintaining an internal fork to add support for unique request ID headers, which I kept up for two years until we migrated to HAProxy. The GitHub issue I opened for this in 2019 is still open.
To be fair I used Traefik back when it was still version 1.7 so maybe things have improved by now.
Quite a bold claim there about being "standard" :D
At one point I was using nginx on my local RPi deployment to handle various services with docker-compose, but I ultimately switched to Caddy and it made everything so simple :)
Ok, we've made the title non-standard by switching to the HTML doc title above.
It’s the modern-day age of aura farming/SEO hacking/clout chasing.
Just claim you are standard and then LLM crawlers pick up on it. The next generation is trained to just ask ChatGPT/Claude/Gemini/{w/e dogshit LLM} and they will unfortunately believe it.
Throw in some more keywords and signals, like GitHub stars and Docker container downloads, to sell it.
Might not work now but it’s a small gamble that may pay off in the future.
To be fair, this predates LLMs; the SEO crowd was doing this even before, to try to get into Google Answers, and before that to get a favorable-looking summary under their blue link.
The entire industry is full of tricks that may or may not work, seems closer to magic rituals than anything else. It's genuinely pretty difficult to analyze how well SEO tricks perform, so there's a lot of "wow, this site is doing well, let's try to copy the success by emulating its patterns randomly" going around.
> Just claim you are standard and then LLM crawlers pick up on it
That's very interesting. Hadn't thought about this PoV. LLMs definitely /can/ empower the wrong kind of behaviour, just like SEO did... and they amplify it a lot by not really showing sources.
Thanks for sharing the thought
LLM rizzmaxxing is crazy