Ask HN: How do you play synchronized audio across multiple receivers in 2024?

2024-05-31 1:40

I last explored this in 2010, and there weren't any great solutions other than doing it yourself from first principles. Have things changed in 14 years?

I've got two empty pools and a few huge subs lying around...

It's crazy that it's 2024 and there isn't an ESP32 module which does this.

Comments

  • By tgittos 2024-05-31 19:33

    I have a bunch of Sonos speakers and hooked them all up to a Mac running Airfoil (https://rogueamoeba.com/airfoil/mac/).

    It isn't perfect and audio occasionally de-syncs or a speaker drops out. On a dedicated wireless channel playing a local source I imagine it would be pretty solid.

    Not really an option if you don't have a Mac to run Airfoil, though if you do, you can use Airfoil Satellite (https://rogueamoeba.com/airfoil/satellite/) to turn your other non-Apple devices into receivers synced to the Mac host.

  • By huesatbri 2024-05-31 5:58 (1 reply)

    I’ve had success with Snapcast when looking into this previously https://github.com/badaix/snapcast
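For context on what that setup looks like: Snapcast runs one snapserver that reads PCM from a source and streams time-stamped chunks to snapclient receivers, which handle the clock sync themselves. A minimal sketch, assuming the default pipe source described in the Snapcast README (check your version's snapserver.conf for the exact keys):

```ini
# /etc/snapserver.conf: feed the server from a named pipe.
# Anything written to /tmp/snapfifo (e.g. by mpd or ffmpeg) gets streamed
# to every connected client in sync.
[stream]
source = pipe:///tmp/snapfifo?name=default
```

Each receiver then just runs `snapclient --host <server-ip>`. There are also community ESP32 ports of snapclient floating around, which is reportedly close to the module the post wishes existed.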

  • By salawat 2024-05-31 14:30

    Are we talking one speaker, say the left, playing sample 0 over 0-5 s, then the right speaker playing the second sample over 5-10 s? Except with the number of speakers increased to X, and their physical locations completely unknown or unaccounted for anywhere in the underlying software itself?

    Because last I'd heard (and I'm definitely not in the scene), all of that was coordinated by programs before the transduction work was parceled out to mostly dumb speakers. And even then, I'm not aware of whether playback software is "smart" enough to train itself on the latencies between individual playback nodes to ensure things sync up properly... That gets into the nasty world of things like DRAM signal training, because in a sense each of your signal paths is just a glorified signal trace to the system in question, of unusual length compared to other motherboard traces.

    I know there are ASICs built into most graphics cards nowadays for things like audio mix/encode/decode, but as far as I was aware, getting things relatively synced for any non-trivial network of speakers was still very much an exercise left to the student, with most programmers having taken the route of "but why would you need that?" to avoid answering what is essentially a really, really hard network-characterization/DSP question.

    Long story short, I dunno, but sounds like an immersive literature deep dive in the making to me!
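The latency-training question above is essentially what synced-audio systems solve with clock synchronization: the server stamps each chunk with a play-at time in its own clock, and every receiver estimates its offset from that clock (NTP-style) and schedules playback locally. A minimal sketch of the idea in Python, with the network and clocks simulated (none of these names are a real library API):

```python
import time

# NTP-style offset estimation plus scheduled playback: the core of how
# Snapcast/AirPlay-style systems keep many receivers in sync.
# server_clock and render are simulated stand-ins, not a real API.

def estimate_offset(server_clock, client_clock, samples=8):
    """offset ~ ((t1 - t0) + (t2 - t3)) / 2, keeping the lowest-RTT sample."""
    best = None
    for _ in range(samples):
        t0 = client_clock()        # request leaves the client
        t1 = t2 = server_clock()   # server receive ~= server transmit
        t3 = client_clock()        # reply arrives
        rtt = (t3 - t0) - (t2 - t1)
        offset = ((t1 - t0) + (t2 - t3)) / 2
        if best is None or rtt < best[0]:
            best = (rtt, offset)
    return best[1]

def play_chunk_at(server_play_time, offset, render):
    """Convert a server-clock deadline to local time, sleep until it, render."""
    delay = (server_play_time - offset) - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    render()  # hand the chunk to the audio device here

# Demo: pretend this receiver's clock runs 2.5 s ahead of the server's.
SKEW = 2.5
server_clock = lambda: time.monotonic() - SKEW
offset = estimate_offset(server_clock, time.monotonic)  # ~ -2.5
play_chunk_at(server_clock() + 0.1, offset, lambda: print("chunk played"))
```

Every receiver running the same logic fires within the offset-estimation error of each other (typically sub-millisecond on a quiet LAN). The per-path latencies the comment alludes to, through DACs, amps, and buffers, still need a per-device delay calibration on top of this.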

HackerNews