Multiplayer has been the single most requested feature for Teardown ever since before its initial release. Synchronizing physics over the network is already known to be hard, and on top of that we have a completely dynamic, destructible world with full modding support. For a long time, we considered the whole idea unrealistic.
Despite the scepticism, we did an internal experiment back in 2021, using a naive approach to synchronize moving objects and send altered voxel data as objects were destroyed. It used enormous amounts of bandwidth and completely choked the connection when large objects were destroyed. It was purely a learning project and never reached a usable state, but it taught us where the bottlenecks were.
Around the same time, a community project called TDMP added rudimentary multiplayer support through reverse engineering and DLL injection. Despite being a bit janky, it completely blew my mind. It was an incredible technical achievement by the people involved. The mod mostly synchronized player position and player input, and since the engine isn’t deterministic, it could easily get out of sync, especially with destruction.
A semi-deterministic approach

As we started bringing more people on board, we did a more serious investigation into a proper multiplayer implementation in late 2022. We knew we wanted perfect world sync. Anything else would quickly make simulations diverge in the chaotic world of Teardown. Sending large amounts of voxel data wasn’t an option because of bandwidth, so we had to rely on determinism. Early on, I dismissed the idea of full determinism for the entire engine (a view I have since reevaluated), so it needed to be a hybrid approach: destruction done deterministically, while most other things use state synchronization.
For the longest time (and for good reasons), floating point operations were considered unsafe for deterministic purposes. That is still true to some extent, but the picture is more nuanced than that. I have since learned a lot about floating point determinism, and these days I know it is mostly safe if you know how to navigate around the pitfalls. I won’t cover them all here, but I hope to do that in another post, because there’s a lot of confusion around this topic.
At the time, I decided to rewrite the destruction logic in fixed-point integer math, which is fairly straightforward given that we’re dealing with discrete voxel volumes. But there’s much more to destruction logic than cutting out voxels on a regular grid. Object hierarchies may separate, new objects can be created and joints can be affected or reattached. A lot of this still involves floating point math, so each breakage event is split into a stream of deterministic commands that are replicated on all clients: “cut hole in this shape at voxel coord x,y,z”, “change ownership of that shape”, “reconnect joint to this shape”, etc.
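As an illustration of the idea (the command set and names below are invented for this sketch, not the actual Teardown protocol), each breakage event becomes a stream of small integer-only commands, and every machine applies the same stream in the same order:

```python
# Hypothetical sketch of a deterministic destruction command stream.
# Commands carry only integer voxel data, so applying the same stream
# in the same order yields identical results on every machine.
from dataclasses import dataclass

@dataclass(frozen=True)
class CutHole:
    shape_id: int
    x: int; y: int; z: int   # integer voxel coordinates
    radius: int

@dataclass(frozen=True)
class ReconnectJoint:
    joint_id: int
    shape_id: int

def apply_command(world, cmd):
    """Apply one deterministic command to the local world state."""
    if isinstance(cmd, CutHole):
        world.setdefault(cmd.shape_id, []).append(("cut", cmd.x, cmd.y, cmd.z, cmd.radius))
    elif isinstance(cmd, ReconnectJoint):
        world.setdefault(cmd.joint_id, []).append(("joint", cmd.shape_id))

def apply_stream(world, commands):
    # Order matters: the reliable stream guarantees every client sees
    # the same commands in the same order.
    for cmd in commands:
        apply_command(world, cmd)
    return world
```

Because the commands are the same size regardless of how large the destroyed object is, the bandwidth cost stays small, and two machines starting from the same scene end up identical.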
Our implementation does not use dedicated servers. The player hosting a game also acts as the server for that session, so every mention of the server below really refers to the player hosting the session.
Reliable and unreliable
As long as the deterministic commands are applied to the world in exactly the same way, in exactly the same order, the resulting changes will be identical across all machines. The bandwidth requirements are small because commands are the same size regardless of object size. Anything that modifies the scene content, such as spawning new objects or recoloring objects, is implemented using the same approach. We put all these commands on a reliable network stream, where everything is guaranteed to arrive in order and nothing is missed, just like a traditional data stream.
For anything that doesn’t affect the structure or contents of the scene, such as object transforms, velocities, and player positions, we use state synchronization with eventual consistency. For every update, the server selects a number of objects that should be synchronized and sends their state to the clients. The server keeps a priority queue to ensure everything is eventually sent, prioritizing objects visible to the player while staying within the allowed data budget (in Teardown, around one Mbit per client). Because nearby objects differ per client depending on player position, the server has to maintain this queue and make these decisions per client.
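A minimal sketch of such a per-client selection step (the priorities, sizes, and budget here are illustrative; the real implementation tracks far more state and maintains the queue across updates):

```python
import heapq

def select_objects(priorities, sizes, budget_bytes):
    """Pick the highest-priority objects that fit in this update's budget.

    priorities: {object_id: priority}, higher = more urgent (e.g. visible
    to this client, or not synced for a long time).
    sizes: {object_id: serialized state size in bytes}.
    """
    # heapq is a min-heap, so negate priorities to pop highest first.
    heap = [(-p, obj) for obj, p in priorities.items()]
    heapq.heapify(heap)
    selected, used = [], 0
    while heap:
        _, obj = heapq.heappop(heap)
        if used + sizes[obj] > budget_bytes:
            continue  # doesn't fit this update; retried in a later one
        selected.append(obj)
        used += sizes[obj]
    return selected
```

Objects that don't fit in one update keep accumulating priority and are picked up in later updates, which is what makes the synchronization eventually consistent rather than immediate.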
These packets are sent unreliably, meaning they are not guaranteed to arrive, and those that do are not guaranteed to arrive in order. This is the messy reality of internet packets. Protocols like TCP are layers on top, responsible for maintaining ordering and resending data that never arrived. With unreliable state synchronization, you have to handle some of that yourself, but in many situations you don’t need strict ordering. If a packet gets lost, a newer packet with more recent state will arrive soon anyway. Many other games use a similar approach, so this aspect of our implementation is fairly traditional.
Each client runs a local simulation, as it normally would, but once new state arrives from the server the affected objects are corrected to keep everything in sync. In many cases, locally simulated objects are nearly identical to what comes from the server and the correction is invisible. But in complex situations with many simulated objects, the priority queue has to work harder, objects get corrected at a lower frequency, and the corrections can cause visible snapping.
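A common way to handle such an unreliable state stream (a generic sketch, not Teardown's actual code) is to stamp each update with a per-object sequence number and simply drop anything older than what has already been applied:

```python
class StateReceiver:
    """Tracks the newest sequence number seen per object and drops
    out-of-order unreliable packets instead of trying to re-order them:
    a newer state will arrive soon anyway."""
    def __init__(self):
        self.latest = {}   # object_id -> newest sequence number applied
        self.state = {}    # object_id -> last applied state

    def receive(self, object_id, seq, state):
        if seq <= self.latest.get(object_id, -1):
            return False   # stale or duplicate packet, ignore it
        self.latest[object_id] = seq
        self.state[object_id] = state
        return True        # applied, local object gets corrected
```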
Scripting
We knew from the start that the multiplayer version had to support scripting and modding, but scripts now had to be aware of the new architecture, where scene changes happen on the server and are automatically distributed to clients. Some script parts still have to run on clients, especially UI and overlay graphics. We ran experiments to automate this by running the exact same script on both server and clients, ignoring certain API calls where they weren’t relevant. That turned out to be rather clunky and a bad fit for certain use cases, so it was eventually dropped.
We also didn’t want to split everything into separate server and client scripts, so we landed somewhere in the middle: client and server parts exist in the same script, with some machinery in place (shared state table and remote calls) to simplify communication between them. It’s a pretty unusual multiplayer scripting approach, but it has served us well and the modding community seems to get the concept. There’s a tutorial available here with more details.
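Conceptually, the pattern looks something like this Python sketch (the real API is Lua-based and all names here are made up): both roles live in one script, only the host runs the server part, and a shared table replicates state from server to clients:

```python
class Script:
    """One script, two roles: the host executes server_tick, every
    machine executes client_tick, and `shared` stands in for a table
    that the engine replicates from server to all clients."""
    def __init__(self, is_server):
        self.is_server = is_server
        self.shared = {}

    def server_tick(self):
        # Only the host mutates shared state; changes replicate out.
        self.shared["score"] = self.shared.get("score", 0) + 1

    def client_tick(self):
        # Clients read replicated state, e.g. to draw UI.
        return self.shared.get("score", 0)

def simulate_frame(server, clients):
    server.server_tick()
    for c in clients:
        c.shared = dict(server.shared)  # engine-side replication, simplified
    return [c.client_tick() for c in clients]
```

The real machinery also includes remote calls in the other direction, so a client script can ask the server to perform a scene change on its behalf.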
Terminal and UI

The in-game terminals were tricky to get right, because they don’t follow the conventional flow. Terminals are part of the scene, not just a UI layer on top, and can be controlled by any player, yet the interaction should be visible to everyone and the resulting actions should happen on the server.
We solved this by running terminal scripts entirely on the server and recording their draw commands, which are transferred to the clients using delta compression. In effect, we stream the resulting terminal image to the clients, but not as compressed video. Instead we stream the UI draw commands that build up the image, or more specifically: the draw-command delta from one frame to the next. The idea is similar to what the X Window System uses, allowing graphical user interfaces for remote applications on thin clients. The draw commands are often similar frame to frame, even while animating, so the delta is usually tiny.
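A simplified sketch of the delta idea (the actual encoding is more elaborate than this): diff the current frame's draw-command list against the previous frame's, send only the changed entries, and rebuild the full list on the client:

```python
def diff_draw_commands(prev, curr):
    """Delta-encode a frame's draw-command list against the previous
    frame: transmit only the commands that changed, plus the new length."""
    changes = [(i, cmd) for i, cmd in enumerate(curr)
               if i >= len(prev) or prev[i] != cmd]
    return {"length": len(curr), "changes": changes}

def apply_delta(prev, delta):
    """Rebuild the current frame's draw-command list on the client."""
    curr = list(prev[:delta["length"]])
    curr.extend([None] * (delta["length"] - len(curr)))
    for i, cmd in delta["changes"]:
        curr[i] = cmd
    return curr
```

When most commands repeat from one frame to the next, the `changes` list stays near-empty, which is why the stream is so much cheaper than video.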
Once the system was in place, we started using it on the main menu as well, so that everyone can see what the host is doing when selecting the level, game modes and mods for a session.
The big merge
Our initial idea was that multiplayer Teardown should be a separate game. So while implementing the first version, we cleaned up the codebase and made reasonable adaptations to suit the new architecture. Meanwhile, our parent company at the time, Saber Interactive, was hard at work on console ports, rewriting many engine internals, adding localization support, reworking the UI framework, and optimizing many subsystems for better performance. On our end, we were also adding support for a third-person camera controller and an animation system. The situation was already messy, but it was about to get much worse.
As time went by, we reevaluated whether multiplayer really should be a separate game after all, and eventually concluded it would be better to retrofit multiplayer into the existing game. If successful, it would keep the community unified around one game and simplify porting existing mods. On the other hand, our multiplayer version had intentionally diverged substantially from the main branch with console ports, optimizations and third-person camera support. Merging them would be a herculean task and a long-running effort. The console and DLC release lineup was scheduled for at least another year on the single-player version. To this day, I’m still unsure if it was the right decision to merge, but that’s the road we chose.
The merge itself took almost three months to complete, and for more than a year we had to manually merge changes weekly from the main branch onto our multiplayer branch to keep everything in sync. It was initially done by us, but eventually more people from Saber got involved in the multiplayer development.
For a long time we used Saber’s backend for network transport (shared with other Saber games), but following the switch in ownership from Saber to Coffee Stain, we swapped it out for the Steam Networking back-end.
Backwards compatibility
The fact that Teardown is a released game and a modding platform with tons of existing content is something we couldn’t ignore. Backwards compatibility was a requirement, and it was also the single most time-consuming part of the implementation. Because supporting multiple players requires such deep conceptual changes, there was simply no way to make older scripts automatically support multiplayer, but the game still had to load and run existing mods in single-player mode (we actually do support loading levels from old mods into multiplayer, but scripts are disabled).
We wanted to avoid maintaining multiple script implementations, so we did our best to keep the existing API and add optional player-id parameters where applicable. There are a few exceptions, but for the most part the API is backwards compatible and old scripts can still run on the multiplayer code path.
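The pattern can be sketched like this (the function name, parameter, and data below are hypothetical, not the real Teardown API): an optional player id defaults to the local player, so old single-player scripts keep working unchanged while new scripts can address any player:

```python
LOCAL_PLAYER = 0

# Hypothetical per-player state, standing in for engine internals.
_transforms = {0: (0.0, 1.8, 0.0), 1: (4.0, 1.8, -2.0)}

def get_player_transform(player_id=None):
    """Old scripts call this with no arguments and get the local player,
    exactly as before; multiplayer-aware scripts pass an explicit id."""
    if player_id is None:
        player_id = LOCAL_PLAYER
    return _transforms[player_id]
```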
Join in progress
The next big hurdle was late joins. For a long time we dismissed this, too, as unrealistic. The scene in Teardown changes constantly, and since parts of the network implementation rely on determinism, it is critical that new clients join with exactly the same scene. There are essentially three ways to solve this:
- Serialize the entire scene, compress the data, and pass it to the joining client. We already do full scene serialization for quicksave and quickload, so this is possible, but the files are large: 30-50 MB is common, often more, so the transfer would take a while.
- Serialize only the objects that changed since the scene was loaded, then compress and transfer those to the client. This is more complex and requires careful tracking of changes - another potential source of bugs. It would reduce the data size, but it can still be quite large depending on the level of mayhem before the join.
- Record the deterministic command stream, pass it to the joining client, and have that client apply all changes to the loaded scene before joining the game. The amount of data is much smaller than in option 2 since we’re not sending any voxel data, but applying the changes can take a while since it involves a lot of computation.
Once we started investigating option 3 we realized it was actually less data than we anticipated, but we still limit the buffer size and disable join-in-progress when it fills up. This allows late joins up to a certain amount of scene changes, beyond which applying the commands would simply take an unreasonably long time.
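That buffering policy might be sketched like this (the cap and byte-counting are illustrative): the server records the deterministic command stream for replay, and once the buffer exceeds its limit, join-in-progress is disabled for the rest of the session:

```python
class JoinBuffer:
    """Records the deterministic command stream so a late joiner can
    replay it onto a freshly loaded scene. Once the buffer exceeds its
    cap, replaying would take unreasonably long, so late joins are
    disabled and the buffer is discarded."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.commands = []
        self.size = 0
        self.joins_allowed = True

    def record(self, command_bytes):
        if not self.joins_allowed:
            return
        self.size += len(command_bytes)
        if self.size > self.max_bytes:
            self.joins_allowed = False
            self.commands.clear()  # no longer needed this session
        else:
            self.commands.append(command_bytes)

    def snapshot_for_joiner(self):
        # Returns the replay stream, or None if late joins are disabled.
        return list(self.commands) if self.joins_allowed else None
```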
Multiplayer testing and debugging can be awful for developers. What used to be a single button press to debug the game quickly turns into a repetitive dance involving launching multiple clients and clicking through menus to connect them. On top of that, debugging gets much more complex with multiple processes: you are either limited to debugging a single instance, attaching to a running process (and guessing which process corresponds to which instance), or setting up a more complex workflow to debug multiple processes at the same time.
To avoid this, we tried running several game instances in the same process and window, automatically connecting at startup and ticking them sequentially each frame, redirecting input to the active instance. It was a big relief, both because it made debugging simple again and because it gave the content team a reasonable test environment for new game modes. However, due to historical design choices and ongoing merge complexity, at one point we just couldn’t maintain this path and had to fall back to separate processes and multiple windows again. They still connect automatically (using a TCP layer instead of Steam Networking), and we do what we can to keep it coherent, but it pains me to see the single-window implementation go away because it was so much nicer to work with.
Conclusion

Looking back at the complexity of the task and the unfavorable circumstances we had at hand, I’m really proud that we finally pulled it off. That said, I’m not the one to take credit for this achievement. I was involved in the initial design and implementation, as well as the big merge, but for the lion’s share of the work, the really hard and tedious parts that actually made it work, I’ve only been peripherally involved, shifting my focus towards our new engine.
The multiplayer implementation in Teardown isn’t particularly elegant; it’s just a lot of hard work and a lot of code. It’s a mix of many techniques with tons of special cases for backwards compatibility. It has been a useful learning experience, one that made me think deeply about simplified approaches to multiplayer implementations, which we’re now trying out in the new engine. This post is already long, so I’ll save the details for the next one. Meanwhile, please enjoy a game of Teardown multiplayer and reflect on the hurdles we’ve gone through to make it happen!
