Every UniFi device phones home to its controller on port 8080. The payload is AES-encrypted, but the header is plaintext, and that's enough to build multi-tenant routing.
A few years ago I ran a small UniFi hosting service. Managed cloud controllers for MSPs and IT shops who didn't want to run their own. Every customer got their own VPS running a dedicated controller.
The product worked. People wanted hosted controllers, mostly so they didn't have to deal with hardware, port forwarding, backups. The problem was the economics.
Each customer needed their own VPS. DigitalOcean droplets ran $4-6/month. I was charging $7-8. That's $1-2 of margin per customer, and any support request at all wiped it out. I was essentially volunteering.
The obvious fix is multi-tenancy: put multiple controllers on shared infrastructure instead of giving every customer their own VM. But UniFi controllers aren't multi-tenant. Each one is its own isolated instance with its own database and port bindings. You need a routing layer, something in front that can look at incoming traffic and figure out which customer it belongs to.
For the web UI on port 8443, that's easy. Subdomain per customer behind a reverse proxy, nothing special. But the inform protocol on port 8080 is where things get interesting.
Every UniFi device (access points, switches, gateways) phones home to its controller. An HTTP POST to port 8080 every 10 seconds. This is how the controller keeps track of everything: device stats, config sync, firmware versions, client counts.
The payload is AES-128-CBC encrypted. So I assumed you'd need per-device encryption keys to do anything useful with the traffic, which would mean you'd need the controller's database, which would mean you're back to one instance per customer.
Then I looked at the raw bytes.
The first 40 bytes of every inform packet are unencrypted:
Offset  Size  Field
──────  ────  ──────────────────────────
0       4B    Magic: "TNBU" (0x544E4255)
4       4B    Packet version (currently 0)
8       6B    Device MAC address
14      2B    Flags (encrypted, compressed, etc.)
16      16B   AES IV
32      4B    Data version
36      4B    Payload length
40+     var   Encrypted payload (AES-128-CBC)
Byte offset 8 is the device's MAC address, completely unencrypted.
On the wire it looks like this:
54 4E 42 55 # Magic: "TNBU"
00 00 00 00 # Version: 0
FC EC DA A1 # MAC: fc:ec:da:a1:b2:c3
B2 C3
01 00 # Flags
...
("TNBU" is "UBNT" reversed; UBNT is short for Ubiquiti Networks.)
The MAC is in the header because the controller needs to identify the device before decrypting. Encryption keys are per-device, assigned during adoption, so the controller has to know which device is talking before it can look up the right key. Not a security oversight, just a practical requirement. But it means you can route inform traffic without touching the encryption at all.
Extracting it is almost nothing:
header := make([]byte, 40)
if _, err := io.ReadFull(conn, header); err != nil {
    return err
}
if string(header[0:4]) != "TNBU" {
    return fmt.Errorf("not an inform packet")
}
// The MAC sits at bytes 8-13 of the plaintext header.
mac := fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
    header[8], header[9], header[10],
    header[11], header[12], header[13])

Read 14 bytes and you know which device is talking. No decryption needed.
With the MAC in hand, routing is simple. Keep a table of which MAC belongs to which tenant, forward the whole packet (header and encrypted payload, untouched) to the right backend.
Device (MAC: aa:bb:cc:dd:ee:ff)
|
v
+-----------------------------------+
| |
| Inform Proxy |
| |
| Read MAC from bytes 8-13 |
| |
| Lookup: |
| aa:bb:cc:... -> tenant-7 |
| 11:22:33:... -> tenant-3 |
| fe:dc:ba:... -> tenant-12 |
| |
| Forward to correct backend |
| |
+-----------------------------------+
| | |
v v v
Tenant 7 Tenant 3 Tenant 12
The whole proxy is maybe 200 lines of Go with an in-memory MAC-to-tenant lookup table.
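The core of that proxy is small enough to sketch. This is a simplified illustration of the idea rather than the production code: the `tenants` entries and backend addresses are made up, and a real version would want timeouts, logging, and a way to reload the table.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// tenants maps device MACs to tenant controller backends.
// These entries are illustrative; the real table is loaded from
// wherever the MAC-to-tenant assignments live.
var tenants = map[string]string{
	"aa:bb:cc:dd:ee:ff": "10.0.7.2:8080",  // tenant-7
	"fe:dc:ba:98:76:54": "10.0.12.2:8080", // tenant-12
}

// macFromHeader formats bytes 8-13 of an inform header as a MAC string.
func macFromHeader(header []byte) string {
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		header[8], header[9], header[10],
		header[11], header[12], header[13])
}

// handle proxies one inform connection to the right tenant backend.
func handle(conn net.Conn) error {
	defer conn.Close()

	// Read the fixed-size plaintext header.
	header := make([]byte, 40)
	if _, err := io.ReadFull(conn, header); err != nil {
		return err
	}
	if string(header[0:4]) != "TNBU" {
		return fmt.Errorf("not an inform packet")
	}

	backend, ok := tenants[macFromHeader(header)]
	if !ok {
		return fmt.Errorf("unknown device")
	}

	// Replay the header to the backend, then splice the rest of the
	// stream through untouched -- the payload stays encrypted.
	up, err := net.Dial("tcp", backend)
	if err != nil {
		return err
	}
	defer up.Close()
	if _, err := up.Write(header); err != nil {
		return err
	}
	go io.Copy(conn, up) // controller's response back to the device
	_, err = io.Copy(up, conn)
	return err
}
```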
In practice, the proxy is mostly a fallback. Once a device is adopted, you point it at its tenant's subdomain (set-inform http://acme.tamarack.cloud:8080/inform) and after that, standard Host header routing handles it through normal ingress. The MAC-based routing catches edge cases like devices that haven't been reconfigured yet, or factory-reset devices re-adopting.
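The Host-header path needs nothing UniFi-specific. A minimal sketch with `net/http/httputil` (the subdomains and backend addresses here are invented for illustration):

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// backends maps tenant subdomains to controller URLs.
// Hostnames and addresses are illustrative.
var backends = map[string]*url.URL{
	"acme.tamarack.cloud":   mustParse("http://10.0.7.2:8080"),
	"globex.tamarack.cloud": mustParse("http://10.0.3.2:8080"),
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

// tenantProxy rewrites each request to its tenant's backend based on
// the Host header; unknown hosts fall through and fail upstream.
func tenantProxy() http.Handler {
	return &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			if u, ok := backends[r.Host]; ok {
				r.URL.Scheme = u.Scheme
				r.URL.Host = u.Host
			}
		},
	}
}
```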
Inform is the hard one. The rest of the controller's ports are more straightforward:
| Port | Protocol | Purpose |
|---|---|---|
| 8080 | TCP/HTTP | Inform (device phone-home) |
| 8443 | TCP/HTTPS | Web UI and API |
| 3478 | UDP | STUN |
| 6789 | TCP | Speed test (internal) |
| 27117 | TCP | MongoDB (internal) |
| 10001 | UDP | L2 discovery (local only) |
Once I figured out inform, the rest was almost anticlimactic. 8443 is the web UI, so that's just subdomain-per-tenant with standard HTTPS ingress. 3478 (STUN) is stateless so a single shared coturn instance covers every tenant. The rest are either internal to the container or L2-only, so they never leave the host.
For the curious: the payload after byte 40 is AES-128-CBC. Freshly adopted devices use a default key (ba86f2bbe107c7c57eb5f2690775c712) which is publicly documented by Ubiquiti and ships in the controller source code. After adoption, the controller assigns a unique per-device key.
The decrypted payload contains device stats and configuration data. Interesting if you're building controller software, but irrelevant for routing.
Every tenant still gets their own dedicated controller, but you're not paying for a whole VM per customer anymore. What was a volunteering operation at $1-2 margin becomes something you can actually make money on.
None of it works if the MAC is inside the encrypted payload. You'd need per-device keys at the proxy layer, which means you'd need access to every controller's database, which puts you right back at one instance per customer. Six plaintext bytes in a packet header make the whole thing possible.
I don't think Ubiquiti designed it this way for third parties to build on. The MAC is there because the controller genuinely needs it before decryption. But the happy side effect is that the inform protocol is routable by anyone who can read 14 bytes off a TCP connection.
If you've poked at the inform protocol yourself, I'd like to hear about it. [email protected]
Nice trick. Just a heads up that I had to whitelist your domain as NextDNS blocked it for being newly registered.
Given this thread will probably attract other Unifi users... has anyone had success migrating from MongoDB to something like FerretDB?
I played around with getting this to work a few weeks ago and found that day-to-day it works without issue, but restoring a backup will error since it relies on some unsupported Mongo semantics (renaming collections iirc).
What does an admin do about NextDNS blocks?
If you subscribe to the mindset of "new domains are likely to be bad" you just deal with a steady stream of allowlist requests from your users until the end of time. There will be new domains until the end of time, and site owners shouldn't be doing anything extra (imo) to justify their existence to admins. If you use a firewall voluntarily and that firewall blocks sites that are legitimate, that's on you, not the site owner.
We get this a lot at my job, where many customers' admins block s3 buckets by default. We give our customers a list of hostnames to allowlist and if they can't figure it out, that's on them.
>If you subscribe to the mindset of "new domains are likely to be bad" you just deal with a steady stream of allowlist requests from your users until the end of time.
Newly-registered domains are not generally an issue with enterprise users. However, they are overrepresented in malicious traffic due to domain-generation algorithms (DGAs).
> Newly-registered domains are not generally an issue with enterprise users.
I take it this means enterprise users are not generally needing to do anything legit-for-work on a newly registered domain.
Enterprise clicks on newly registered domains tend to be (a) being phished or smished or cryptomined or whatever, or (b) someone reading X or Bsky or HN or ProductHunt's vibe code of the day -- things the enterprise would also like to have blocked.
Consider the CloudFlare/Proofpoint/NextDNS/etc. domain block on new domains much like updating one's HN home page to https://news.ycombinator.com/classic …
Sounds like a massive waste of time for NextDNS admins and a poor UX for end users. If your security relies on trusting old domains, then you need to rethink your security. Also, I bet it's just as easy to accidentally whitelist a bad actor as to blacklist a good one. What am I missing here?
I don't disagree. The idea seems to be that newly registered domains are far more likely to be malicious (and not present on domain blocklists yet).
How are you performing backups of FerretDB? Are you using MongoDB tools, or PostgreSQL-specific tools?
It seems like a pretty tall order, but I really want an open source access point controller daemon that knows how to provision and manage a wide variety of APs from different manufacturers.
So you'd have one service that can provision Ubiquiti, MikroTik, TP-Link and other APs and manage the clients.
Alternately, run OpenWRT on the APs themselves, and then you just need one provisioning protocol.
Does it support seamless roaming of clients between a group of APs?
Last time I tried, it wasn't supported by any open source solution.
Seamless WiFi roaming is mostly a client decision. The best you can do on the AP side is:
a) optimize signal strength for coverage (stronger signals aren't always better in multi-AP deployment);
b) provide hints via 802.11k/v/r to help clients make, hopefully, better decisions;
c) forcefully drop and disassociate clients when signal is weak enough.
But if the client has a bad WiFi implementation, there's not much you can do.
OpenWRT currently supports 802.11k/v/r, but optimizing coverage by adjusting signal strength and channels is left for experienced users to handle manually. This is an area where some commercial offerings try to help, but the results vary greatly. AFAIK there's no ideal system anyway, because physics is hard.
Well AFAIK the core seamless roaming in Unifi is using hostapd, which is the same AP software you use on OpenWrt. See 802.11r Fast Transition.
I think it should even be possible to get seamless roaming between Unifi and OpenWrt with correct configuration of hostapd.
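For a rough idea of what that hostapd configuration involves (whether via OpenWrt's wireless config or a raw hostapd.conf), a fragment might look like the following. All values are illustrative, and the mobility_domain must match on every AP in the roaming group:

```
# hostapd fragment: 802.11r fast transition plus k/v hints (illustrative values)
ieee80211r=1
mobility_domain=a1b2          # shared by every AP in the roaming group
ft_over_ds=1                  # fast transition over the wired backhaul
nas_identifier=ap1.example.net
# 802.11k neighbor reports and 802.11v BSS transition management
rrm_neighbor_report=1
bss_transition=1
```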
OpenWISP or Ansible-based orchestration might get you part of the way there, but the real challenge is staying ahead of vendor firmware changes and locked-down protocols. Even if you write shims for a bunch of models, manufacturers often break things with updates or remove local management entirely, so it's a constant catch-up game.
Now that would be interesting! Multi-vendor support is on the radar, but haven't started looking into it much yet.
I would love to have some way to configure the USW Flex Minis without the controller software. I can't find any other small PoE powered switches for a similar cost.
> ("TNBU" is "UNBT" backwards, presumably UniFi Broadcast Technology.)
This seems like an odd misunderstanding, especially because the correct inversion “UBNT” is the default login name for most UniFi web UIs.
You might have a bit of dyslexia, OP!
You might be onto something there! But yes, good catch, I'll get that updated.
ubnt has been the ubiquiti default login at least back to 2010 when I started using their products, before UniFi was a brand. I always assumed it was short for Ubiquiti Networks.
Sure, but the parent was saying this part was odd:
> "TNBU" is "UNBT" backwards
TNBU is clearly NOT uNbt backwards.
Using the network byte ordering (big endian) of UBNT as the magic number in the protocol is a nice touch.
I believe they used MIPS processors in their early gear, so that makes sense.
A lot of companies in that space did then. I was at a robotics company at the time and we experimented with mikrotik routerboards + the various long-range Ubiquiti wifi modules, some of which are even still listed on the website: https://techspecs.ui.com/uisp/accessory-tech/xr (though not the 900 MHz XR9, which was arguably one of the most interesting for long range comms)
UBNT stands for UBiquiti NeTworks.