https://github.com/matklad
I used to do frontend development ([1], [2]); now I am more of a database person ([3]), though mostly it is just prompting LLVM to generate the right code!
[1] https://github.com/intellij-rust/intellij-rust
To add to what aphyr says, you generally need three components for generative testing of distributed systems:
1. Some sort of environment, which can run the system. The simplest environment is to spin up a real cluster of machines, but ideally you want something fancier, to improve performance, control over responses of external APIs, determinism, reproducibility, etc.
2. Some sort of load generator, which makes the system in the environment do interesting things.
3. Some sort of auditor, which observes the behavior of the system under load and decides whether the system behaves according to the specification.
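To make the three components concrete, here is a minimal sketch in Python (nothing to do with TigerBeetle's actual VOPR; `BuggyStore` and `run_simulation` are illustrative names and the "system" is a toy key-value store with a deliberately planted bug):

```python
import random

class BuggyStore:
    """Toy system under test: a KV store with a deliberately planted bug."""
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        if value == 13:          # planted bug: writes of 13 are silently dropped
            return
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)

def run_simulation(seed, num_ops=1000):
    # 1. Environment: single-threaded and fully deterministic given the seed,
    #    so any failure can be replayed exactly from the seed alone.
    rng = random.Random(seed)
    store = BuggyStore()
    model = {}  # the auditor's reference model of what the spec requires

    for _ in range(num_ops):
        # 2. Load generator: drive the system with pseudo-random operations.
        key = rng.randrange(8)
        if rng.random() < 0.5:
            value = rng.randrange(20)
            store.write(key, value)
            model[key] = value   # the spec says the write must take effect
        else:
            # 3. Auditor: compare observed behavior against the model.
            if store.read(key) != model.get(key):
                return seed      # failing seed: replay it to debug
    return None

# The generator quickly finds a seed whose history exposes the bug.
assert any(run_simulation(seed) is not None for seed in range(100))
```

The point of the sketch is the division of labor: the environment supplies determinism (same seed, same history), the generator supplies interesting histories, and the auditor supplies the verdict. A bug only gets caught if the generator happens to produce a history that exercises it, which is the theme of the comment below.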
Antithesis mostly tackles problem #1, providing a deterministic simulation environment as a virtual machine. The same problem is tackled by Jepsen (by using real machines, but injecting faults at the OS level), and by TigerBeetle's own VOPR (which is co-designed with the database, and for that reason can run the whole cluster on just a single thread). These three approaches are complementary and are good at different things.
For this bug, the critical part was #2 and #3 --- writing a workload generator and auditor that can actually trigger the bug. Here, it was aphyr's 1600 lines of TigerBeetle-specific Clojure code that triggered and detected the bug (and then we patched _our_ equivalent to also trigger it). Really, what's buggy here is not the database, but the VOPR. The database having bugs is par for the course; you can't avoid bugs through sheer force of will. So you need a testing strategy that can trigger most bugs, and any bug that slips through points to a deficiency in the workload generator.
Oh, an important clarification from andrewrk (https://lobste.rs/c/tf6jng), which I totally missed myself: this isn't actually a dereference of an uninitialized pointer, it's a dereference of a pointer which is explicitly set to a specific, invalid value.
Note that this works because we have strict serializability. With weaker consistency guarantees, there isn't necessarily a single global consistent timeline.
This is an interesting meta pattern where doing something _harder_ actually simplifies the system.
Another example is that, because we assume that the disk can fail and need to include a repair protocol, we get state synchronization for a lagging replica "for free", because it is precisely the same situation as when the entire disk gets corrupted!
From the user's perspective, this doesn't matter at all. Zig is an implementation detail: what we actually ship is a fully statically linked native executable for the database, and a "links only libc" (because of thread locals!) .a/.so native "C" library for clients. Nothing will change for the user if we decide to rewrite the thing in Rust, or C, or Hare; nothing Zig-specific leaks out.
From the developer's perspective, the big thing is that we don't have any dependencies, so updating the compiler is just a small amount of work once in a while, and not your typical ecosystem-wide coordination problem. Otherwise, Zig is pretty much "finished" for our use case; it more or less just works.
This project is an enhanced reader for Y Combinator Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can write to me here: gabrielepicco.github.io