
Published by Martin Kleppmann on 08 Dec 2025.
Much has been said about the effects that AI will have on software development, but there is an angle I haven’t seen talked about: I believe that AI will bring formal verification, which for decades has been a bit of a fringe pursuit, into the software engineering mainstream.
Proof assistants and proof-oriented programming languages such as Rocq, Isabelle, Lean, F*, and Agda have been around for a long time. They make it possible to write a formal specification that some piece of code is supposed to satisfy, and then mathematically prove that the code always satisfies that spec (even on weird edge cases that you didn’t think of testing). These tools have been used to develop some large formally verified software systems, such as an operating system kernel, a C compiler, and a cryptographic protocol stack.
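To give a flavour (this is an illustrative toy, not taken from any of those projects; the names are invented), a specification and its machine-checked proof in Lean 4 might look something like this:
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a
-- Specification: the result is never smaller than the first argument.
theorem le_myMax_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax
  split
  · assumption           -- case a ≤ b: the result is b, and a ≤ b by hypothesis
  · exact Nat.le_refl a  -- case ¬(a ≤ b): the result is a itself
The proof checker either accepts such a proof or rejects it; there is no middle ground.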
At present, formal verification is mostly used by research projects, and it is uncommon for industrial software engineers to use formal methods (even those working on classic high-assurance software such as medical devices and aircraft). The reason is that writing those proofs is both very difficult (requiring PhD-level training) and very laborious.
For example, as of 2009, the formally verified seL4 microkernel consisted of 8,700 lines of C code, but proving it correct required 20 person-years and 200,000 lines of Isabelle code – or 23 lines of proof and half a person-day for every single line of implementation. Moreover, there are maybe a few hundred people in the world (wild guess) who know how to write such proofs, since it requires a lot of arcane knowledge about the proof system.
To put it in simple economic terms: for most systems, the expected cost of bugs is lower than the expected cost of using the proof techniques that would eliminate those bugs. Part of the reason is perhaps that bugs are a negative externality: it’s not the software developer who bears the cost of the bugs, but the users. But even if the software developer were to bear the cost, formal verification is simply very hard and expensive.
At least, that was the case until recently. Now, LLM-based coding assistants are getting pretty good not only at writing implementation code, but also at writing proof scripts in various languages. At present, a human with specialist expertise still has to guide the process, but it’s not hard to extrapolate and imagine that process becoming fully automated in the next few years. And when that happens, it will totally change the economics of formal verification.
If formal verification becomes vastly cheaper, then we can afford to verify much more software. But on top of that, AI also creates a need to formally verify more software: rather than having humans review AI-generated code, I’d much rather have the AI prove to me that the code it has generated is correct. If it can do that, I’ll take AI-generated code over handcrafted code (with all its artisanal bugs) any day!
In fact, I would argue that writing proof scripts is one of the best applications for LLMs. It doesn’t matter if they hallucinate nonsense, because the proof checker will reject any invalid proof and force the AI agent to retry. The proof checker is a small amount of code that is itself verified, making it virtually impossible to sneak an invalid proof past the checker.
That doesn’t mean software will suddenly be bug-free. As the verification process itself becomes automated, the challenge will move to correctly defining the specification: that is, how do you know that the properties that were proved are actually the properties that you cared about? Reading and writing such formal specifications still requires expertise and careful thought. But writing the spec is vastly easier and quicker than writing the proof by hand, so this is progress.
I could also imagine AI agents helping with the process of writing the specifications, translating between formal language and natural language. Here there is the potential for subtleties to be lost in translation, but this seems like a manageable risk.
I find it exciting to think that we could just specify in a high-level, declarative way the properties that we want some piece of code to have, and then to vibe code the implementation along with a proof that it satisfies the specification. That would totally change the nature of software development: we wouldn’t even need to bother looking at the AI-generated code any more, just like we don’t bother looking at the machine code generated by a compiler.
In summary: 1. formal verification is about to become vastly cheaper; 2. AI-generated code needs formal verification so that we can skip human review and still be sure that it works; 3. the precision of formal verification counteracts the imprecise and probabilistic nature of LLMs. These three things taken together mean formal verification is likely to go mainstream in the foreseeable future. I suspect that soon the limiting factor will not be the technology, but the culture change required for people to realise that formal methods have become viable in practice.
I don't think formal verification really addresses most day-to-day programming problems:
* A user interface is confusing, or the English around it is unclear
* An API you rely on changes, is deprecated, etc.
* Users use something in unexpected ways
* Updates forced by vendors or open source projects cause things to break
* The customer isn't clear what they want
* Complex behavior between interconnected systems, out of the purview of the formal language (OS + database + network + developer + VM + browser + user + web server)
For some mathematically pure task, sure, it's great. Or a low-level library like a regular expression parser or a compression codec. But I don't think that represents a lot of what most of us are tasked with, and those low-level "mathematically pure" libraries are generally pretty well handled by now.
In fact, automated regression tests done by AI with visual capabilities may have a bigger impact than formal verification. You can have an army of testers now, painstakingly going through every corner of your software.
In practice it ends up being a bit like static analysis though, in that you get a ton of false positives.
All said, I’m now running all commits through Codex (which is the only thing it’s any good at), and it’s really pretty good at code reviews.
Will only work somewhat when customers expect features to work in a standard way. When customers spec things to work in non-standard ways, you'll just end up with a bunch of false positives.
This. When the bugs come streaming in you better have some other AI ready to triage them and more AI to work them, because no human will be able to keep up with it all.
Bug reporting is already about signal vs noise. Imagine how it will be when we hand the megaphone to bots.
TBH most day-to-day programming problems are barely worth having tests for. But if we had formal specs, and even just hand-wavy correspondences between the specs and the implementation, for the low-level things everybody depends on, that would be a huge improvement for the reliability of the whole ecosystem.
A limited form of formal verification is already mainstream. It is called type systems. The industry in general has been slowly moving to encode more invariants into the type system, because every invariant that is in the type system is something you can stop thinking about until the type checker yells at you.
A lot of libraries document invariants that are either not checked at all, only at runtime, or somewhere in between. For instance, the requirement that a collection not be modified during iteration. Or that two regions of memory do not overlap, or that a variable is not modified without owning a lock. These are all things that, in principle, can be formally verified.
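As a rough sketch of the lock example (TypeScript, with invented names), the protected value can be made reachable only through a callback, so "modified without owning the lock" becomes a compile error rather than a convention:
class Locked<T> {
  #value: T; // private field: unreachable outside the class
  constructor(value: T) { this.#value = value; }
  // The only way to reach the value; a real version would acquire/release an actual mutex here.
  withLock<R>(body: (value: T) => R): R {
    return body(this.#value);
  }
}
const counter = new Locked({ n: 0 });
counter.withLock(c => { c.n += 1; }); // fine
// counter.#value.n += 1;             // rejected by the compiler: #value is private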
No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.
For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.
> No one claims that good type systems prevent buggy software.
That's exactly what languages with advanced type systems claim. To be more precise, they claim to eliminate entire classes of bugs. So they reduce bugs, they don't eliminate them completely.
No nulls, no nullability bombs.
Forcing devs to pre-fix/avoid bugs before the compiler will allow the app to build means the programs are more correct as a group.
Wrong, incomplete, insufficient, unhelpful, unimpressive, and dumb are all still very possible. But more correct than likely in looser systems.
> No nulls, no nullability bombs.
I hate this meme. Null indicates something. If you disallow null that same state gets encoded in some other way. And if you don't properly check for that state you get the exact same class of bug. The desirable type system feature here is the ability to statically verify that such a check has occurred every time a variable is accessed.
Another example is bounds checking. Languages that stash the array length somewhere and verify against it on access eliminate yet another class of bug without introducing any programmer overhead (although there generally is some runtime overhead).
The whole point of "no nullability bombs" is to make it obvious in the type system when the value might be not present, and force that to be handled.
Javascript:
let x = foo();
if (x.bar) { ... } // might blow up
Typescript:
let x = foo(); // type of x is Foo | undefined
if (x === undefined) { ...; return; } // I am forced to handle this
if (x.bar) { ... } // this is now safe, as Typescript knows x can only be a Foo now
(Of course, languages like Rust do that cleaner, since they don't have to be backwards-compatible with old Javascript. But I'm using Typescript in hopes of a larger audience.)
If you eliminate the odd integers from consideration, you've eliminated an entire class of integers. Yet the set of remaining integers is of the same size as the original.
Peak HN gnomism. While the set of possible errors may be infinite, their distribution is not uniform.
No, because integers in computing are generally finite.
Programs are not limited; the number of Turing machines is countably infinite.
When you say things like "eliminate a class of bugs", that is played out in the abstraction: an infinite subset of that infinity of machines is eliminated, leaving an infinity.
How you then sample from that infinity in order to have something which fits on your actual machine is a separate question.
How do you count how many bugs a program has? If I replace the Clang code base by a program that always outputs a binary that prints hello world, how many bugs is that? Or if I replace it with a program that exits immediately?
Maybe another example is compiler optimisations: if we say that an optimising compiler is correct if it outputs the most efficient (in number of executed CPU instructions) output program for every input program, then every optimising compiler is buggy. You can always make it less buggy by making more of the outputs correct, but you can never satisfy the specification on ALL inputs because of undecidability.
Because the number of states a program can be in is so huge (when you consider everything that can influence how a program runs, and the context where and when it runs), it is practically infinite for current computational power. But yes, it is theoretically finite and can even be calculated.
> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.
The problem is there is always some chance a coding agent will get stuck and be unable to produce a conforming implementation in a reasonable amount of time. And then you are back in a similar place to what you were with those pre-LLM solutions - needing a human expert to work out how to make further progress.
With the added issue that now the expert is working with code they didn't write, and that could in general be harder to understand than human-written code. So they could find it easier to just throw it away and start from scratch.
Some type systems (e.g., Haskell) are closing in on becoming formal verification languages themselves.
Given that it's the AI doing the coding, it would happen pretty quickly so long as it's decent at Haskell. Which it already is, surprisingly so actually for such a niche language. It doesn't necessarily write great code, but it's good enough, and the straitjacket type system makes it very hard for the model to sneak in creative hacks like using globals, or trip itself up with mutable state.
I think that’s because the barrier to entry for a beginner is much higher than say python.
IMHO, these strong type systems are just not worth it for most tasks.
As an example, I currently mostly write GUI applications for mobile and desktop as a solo dev. 90% of my time is spent on figuring out API calls and arranging layouts. Most of the data I deal with are strings with their own validation and formatting rules that are complicated and at the same time usually need to be permissive. Even at the backend, all the data is in the end converted to strings and integers when it is put into a database. Over-the-wire serialization also discards most typing (although I prefer protocol buffers to alleviate this problem a bit).
Strong typing can be used in between those steps but the added complexity from data conversions introduces additional sources of error, so in the end the advantages are mostly nullified.
> Most of the data I deal with are strings with their own validation and formatting rules that are complicated and at the same time usually need to be permissive
this is exactly where a good type system helps: you have an unvalidated string and a validated string which you make incompatible at the type level, thus eliminating a whole class of possible mistakes. same with object ids, etc.
don't need haskell for this, either: https://brightinventions.pl/blog/branding-flavoring/
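for anyone who hasn't seen the branding trick from that link, a minimal sketch (invented names):
// An Email is just a string at runtime, but a distinct type to the compiler.
type Email = string & { readonly __brand: "Email" };
function parseEmail(raw: string): Email | null {
  return raw.includes("@") ? (raw as Email) : null; // toy validation
}
function sendWelcome(to: Email): void { /* ... */ }
const raw = "alice@example.com";
// sendWelcome(raw);                     // type error: unvalidated string
const email = parseEmail(raw);
if (email !== null) sendWelcome(email);  // only validated values get through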
That's neat, I was about to ask which languages support that since the vast majority don't. I didn't know that you can do that in Typescript.
Any language with a type system, really...
Even in OOP: if you have a String class, you can have a String_Formated_For_API subtype.
Just extend String and add some checking.
But now the type checker "knows" it can print() a String_Formated_For_API just fine, but it won't let you pass a plain String to call_API().
I would argue that the barrier to entry is on par with python for a person with no experience, but you need much more time with Haskell to become proficient in it. In python, on the other hand, you can learn the basics and these will get you pretty far
Python has a reputation for being good for beginners so it's taught to beginners so it has a reputation for being good for beginners.
I blame syntax. It's too unorthodox nowadays. Historical reasons don't matter all that much; everything mainstream is a C-family member.
Piggybacking off your comment, I just completed a detailed research paper where I compared Haskell to C# with an automated trading strategy. I have many years of OOP and automated trading experience, but struggled a bit at first implementing in Haskell syntax. I attempted to stay away from LLMs, but ended up using them here and there to get the syntax right.
Haskell is actually a pretty fun language, although it doesn't fly off my fingers like C# or C++ does. I think a really great example of the differences is displayed in the recursive Fibonacci sequence.
In C#:
public int Fib(int n)
{
if (n <= 1)
return n;
else
return Fib(n - 1) + Fib(n - 2);
}
In Haskell:
fib :: Integer -> Integer
fib n
| n <= 1 = n
| otherwise = fib (n - 1) + fib (n - 2)
As you might know, this isn't even scratching the surface of the Haskell language, but it does a good job highlighting the syntax differences.
When using switch expressions in C#, they are a lot more similar:
public int Fib(int n) => n switch
{
<= 1 => n,
_ => Fib(n - 1) + Fib(n - 2)
};
> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.
To me it seems they reduce productivity. In fact, for Rust, which seems to match the examples you gave about locks or regions of memory, the common wisdom is that it takes longer to start a project, but one reaps the benefits later thanks to more confidence when refactoring or adding code.
However, even that weaker claim hasn’t been proven.
In my experience, the more information is encoded in the type system, the more effort is required to change code. My initial enthusiasm for the idea of Ada and Spark evaporated when I saw how much ceremony the code required.
> In my experience, the more information is encoded in the type system, the more effort is required to change code.
I would tend to disagree. All that information encoded in the type system makes explicit what is needed in any case and is otherwise only carried informally in peoples' heads by convention. Maybe in some poorly updated doc or code comment where nobody finds it. Making it explicit and compiler-enforced is a good thing. It might feel like a burden at first, but you're otherwise just closing your eyes and ignoring what can end up important. Changed assumptions are immediately visible. Formal verification just pushes the boundary of that.
In practice it would be encoded in comments, automated tests and docs, with varying levels of success.
It’s actually similar to tests in a way: they provide additional confidence in the code, but at the same time ossify it and make some changes potentially more difficult. Interestingly, they also make some changes easier, as long as not too many types/tests have to be adapted.
This reads to me like an argument for better refactoring tools, not necessarily for looser type systems. Those tools could range from mass editing tools, IDEs changing signatures in definitions when changing the callers and vice versa, to compiler modes where the language rules are relaxed.
I was thinking about C++ and if you change your mind about whether some member function or parameter should be const, it can be quite the pain to manually refactor. And good refactoring tools can make this go away. Maybe they already have, I haven’t programmed C++ for several years.
Constraints Liberate, Liberties Constrain. (I also recommend watching the presentation with the same title)
> All that information encoded in the type system makes explicit what is needed in any case and is otherwise only carried informally in peoples' heads by convention
this is, in fact better for llms, they are better at carrying information and convention in their kv cache than they are in having to figure out the actual types by jumping between files and burning tokens in context/risking losing it on compaction (or getting it wrong and having to do a compilation cycle).
if a typed language lets a developer fearlessly build a semantically inconsistent or confusing private API, then llms will perform poorer at them even though correctness is more guaranteed.
It is definitely harder to refactor Haskell than it is Typescript. Both are "safe" but one is slightly safer, and much harder to work with.
Capturing invariants in the type system is a two-edged sword.
At one end of the spectrum, the weakest type systems limit the ability of an IDE to do basic maintenance tasks (e.g. refactoring).
At the other end of the spectrum, dependent types and especially sigma types capture arbitrary properties that can be expressed in the logic. But then constructing values in such types requires providing proofs of these properties, and the code and proofs are inextricably mixed in an unmaintainable mess. This does not scale well: you cannot easily add a new proof on top of existing self-sufficient code without temporarily breaking it.
Like other engineering domains, proof engineering has tradeoffs that require expertise to navigate.
> but one reaps the benefits later thanks to more confidence when refactoring or adding code.
To be honest, I believe it makes refactoring/maintenance take longer. Sure, safer, but this is not a one-time only price.
E.g. you decide to optimize this part of the code and only return a reference or change the lifetime - this is an API-breaking change and you have to potentially recursively fix it. Meanwhile GC languages can mostly get away with a local-only change.
Don't get me wrong, in many cases this is more than worthwhile, but I would probably not choose rust for the n+1th backend crud app for this and similar reasons.
The choice of whether to use GC is completely orthogonal to that of a type system. On the contrary, being pointed at all the places that need to be recursively fixed during a refactoring is a huge saving in time and effort.
I was talking about a type system with affine types, as per the topic was Rust specifically.
I compared it to a statically typed language with a GC - where the runtime takes care of a property that Rust has to do statically, requiring more complexity.
In my opinion, programming languages with a loose type system or no explicit type system only appear to foster productivity, because it is way easier to end up with undetected mistakes that can bite later, sometimes much later. Maybe some people argue that then it is someone else's problem, but even in that case we can agree that the overall quality suffers.
"In my experience, the more information is encoded in the type system, the more effort is required to change code."
Have you seen large js codebases? Good luck changing anything in it, unless they are really, really well written, which is very rare. (My own js code is often a mess)
When you can change types on the fly somewhere hidden in code ... then this leads to the opposite of clarity for me. And so lots of effort required to change something in a proper way, that does not lead to more mess.
There’s two types of slowdown at play:
a) It’s fast to change the code, but now I have failures in some apparently unrelated part of the code base. (Javascript) and fixing that slows me down.
b) It’s slow to change the code because I have to re-encode all the relationships and semantic content in the type system (Rust), but once that’s done it will likely function as expected.
Depending on project, one or the other is preferable.
Or: I’m not going to do this refactor at all, even though it would improve the codebase, because it will be near impossible to ensure everything is correct after making so many changes.
To me, this has been one of the biggest advantages of both tests and types. They provide confidence to make changes without needing to be scared of unintended breakages.
There's a tradeoff point somewhere where it makes sense to go with one or another. You can write a lot of codes in bash and Elisp without having to care about the type of whatever you're manipulating. Because you're handling one type and encoding the actual values in a typesytem would be very cumbersome. But then there are other domain which are fairly known, so the investment in encoding it in a type system does pay off.
Soon a lot of people will go out of their way to try to convince you that Rust is the most productive language, that functions having longer signatures than their bodies is actually a virtue, and that putting .clone(), Rc<> or Arc<> everywhere to avoid borrow-checker complaints makes Rust easier and faster to write than languages that don't force you to do so.
Of course it is a hyperbole, but sadly not that large.
> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.
That is not novel and every declarative language precisely embodies it.
I think most existing declarative languages still require the programmer to specify too many details to get something usable. For instance, Prolog often requires the use of 'cut' to get reasonable performance for some problems.
> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.
They really don’t. How did you arrive at such a conclusion?
Not that I can answer for OP, but as a personal anecdote: I've never been more productive than writing in Rust; it's a goddamn delight. Every codebase feels like it would've been my own, and you can get up to speed from 0 to 100 in no time.
Yeah, I’ve been working mainly in rust for the last few years. The compile time checks are so effective that run time bugs are rare. Like you can refactor half the codebase and not run the app for a week, and when you do it just works. I’ve never had that experience in other languages.
Through empirical evidence? Do you think that the vast majority of software devs moved to typing for no reason?
> Do you think that the vast majority of software devs moved to typing for no reason?
It is quite clear that this industry is mostly driven by hype and fads, not by empirical studies.
Empirical evidence in favor of a claim that static typing and complex type systems reduce bugs or improve productivity is highly inconclusive at best
It's a bad reason. A lot of best practices are temporary blindnesses, comparable, in some sense, with the supposed love of BASIC before or despite Dijkstra. So, yes, it's possible there is no good reason. Though I don't think it's the case here.
We don't actually have empirical evidence on the topic, surprisingly.
It's just people's hunches.
I feel like the terms logical, empirical, rational and objective are used interchangeably by the general public, with one being in vogue at a time.
> Complex behavior between interconnected systems, out of the purview of the formal language (OS + database + network + developer + VM + browser + user + web server)
Isn't this what TLA+ was meant to deal with?
Not really. Some components have a lot of properties that are very difficult to model. Take network latency, or storage performance in the OS.
Actually, formal verification could help massively with four of those problems — all but the first (UI/UX) and fifth (requirements will always be hard).
A change in the API of a dependency should be detected immediately and handled silently.
Reliance on unspecified behavior shouldn't happen in the first place; the client's verification would fail.
Detecting breakage caused by library changes should be where verification really shines; when you get the update, you try to re-run your verification, and if that fails, it tells you what the problem is.
As for interconnected systems, again, that's pretty much the whole point. Obviously, achieving this dream will require formalizing pretty much everything, which is well beyond our capabilities now. But eventually, with advances in AI, I think it will be possible. It will take something fundamentally better than today's LLMs, though.
That has been the problem with unit and integration tests all the time. Especially for systems that tend to be distributed.
AI makes creating mock objects much easier in some cases, but it still creates a lot of busy work and makes configuration more difficult. And at this point it often is difficult configuration management that causes the issues in the first place. Putting everything in some container doesn't help either, on the contrary.
> But I don't think that represents a lot of what most of us are tasked with
Give me a list of all the libraries you work with that don't have some sort of "okay but not that bit" rule in the business logic, or "all of those functions are f(src, dst) but the one you use most is f(dst, src) and we can't change it now".
I bet it's a very short list.
Really we need to scrap every piece of software ever written and start again from scratch with all these weirdities written down so we don't do it again, but we never will.
Scrapping everything wouldn't help. 15 years ago the project I'm on did that - for a billion dollars. We fixed the old mistakes but made plenty of new ones along the way. We are trying to fix those now, and I can't help but wonder what new mistakes we are making now that in 15 years we will regret.
Computers are terrible and software is terrible and we should just go back to tilling the fields with horses and drinking beer.
Yeah, in the past few weeks there were about 5 or 10 videos about this "complexity" and the unpredictability of 3rd parties and wheels involved that AI doesn't control and even forgets (small context window). I am sure you have seen at least one of them ;)
But it's true. AI is still super narrow and dumb. It doesn't even understand basic prompts.
Look at computer games now - they still don't look real despite almost 30 years since Half-Life 1 started the revolution, I would claim. Damn, I think I ran it on a 166 MHz computer on some of the lowest details, even.
Yes, it's just better and better but still looking super uncanny - at least to me. And it's been basically 30 years of constant improvements. Heck, Roomba is going bankrupt.
I am not saying things don't improve but the hype and AI bubble is insane and the reality doesn't match the expectation and predictions at all.
> An API you rely on changes, is deprecated, etc
Formal verification will eventually lead to good, stable API design.
> Users use something in unexpected ways
> Complex behavior between interconnected systems
It happens when there's no formal verification during the design stage.
Formal verification literally means covering 100% of state changes; for every possible input/output, every execution branch should be tested.
Formal verification has nothing to do with the quality of the API.
Given the spec, formal verification can tell you if your implementation follows the spec. It cannot tell you if the spec is good.
Thats something I agree with.
I am right now working on an offline api client: https://voiden.md/. I wonder if this can be a feature.
> It cannot tell you if the spec is good
I beg to differ, if a spec is hard to verify, then it's a bad sign.
All non-trivial specs, like the one for seL4, are hard to verify. Lots of that complexity comes from interacting with the rest of the world which is a huge shared mutable global state you can't afford to ignore.
Of course, you can declare that the world itself is inherently sinful and imperfect, and is not ready for your beautiful theories but seriously.
> Of course, you can declare that the world itself is inherently sinful and imperfect, and is not ready for your beautiful theories
i see we are both familiar with haskellers (friendly joke!)
it can tell you if your spec is bad, but it can't tell you if your spec is good
That is one problem of many solved, isn't that good?
That the spec solves the problem is called validation in my domain and treated explicitly with different methods.
We use formal validation to check for invariants, but also "it must return a value xor an error, but never just hang".
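The "value xor error" half is the kind of invariant an ordinary type system can already carry; a hedged TypeScript sketch (invented names):
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };
function parsePort(s: string): Result<number, string> {
  const n = Number(s);
  return Number.isInteger(n) && n > 0 && n < 65536
    ? { ok: true, value: n }                        // exactly one of value...
    : { ok: false, error: `not a valid port: ${s}` }; // ...or error, never both
}
The "never just hang" half is a liveness property, which is exactly where you need a tool beyond the type checker.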
> Formal verification will eventually lead to good, stable API design.
Why? Has it ever happened like this? Because to me it would seem that if the system is verified to work, then it works no matter how the API is shaped, so there is no incentive to change it to something better.
> if the system is verified to work, then it works no matter how the API is shaped
That's the case for one-off integrations, but the messy part always comes when the system's goal changes.
Let's say formal verification could help to avoid some anti-patterns.
> Let's say formal verification could help to avoid some anti-patterns.
I'd still like to hear about the actual mechanism of this happening. Because I personally find it much easier to believe that the moment keeping the formal verification up to date becomes untenable for whatever reason (specs changing too fast, external APIs to use are too baroque, etc) people would rather say "okay, guess we ditch the formal verification and just keep maintaining the integration tests" instead of "let's change everything about the external world so we could keep our methodology".
> I'd still like to hear about the actual mechanism of this happening
I am not an expert on this, but the worst API I've seen is those with hidden states.
e.g. a .toggle() API. Call it an odd number of times, it goes to one state; call it an even number of times, it goes back.
And there are "call A before you call B" types of APIs, where the client has to keep a strict call order (which itself is a state machine of some kind).
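A hedged sketch of how types can rule out the second kind (TypeScript, invented names): make each state its own type, so the wrong call order simply doesn't compile.
class Disconnected {
  connect(): Connected { return new Connected(); }
}
class Connected {
  send(msg: string): void { /* ... */ }
  close(): Disconnected { return new Disconnected(); }
}
const conn = new Disconnected();
// conn.send("hi");            // type error: send() doesn't exist before connect()
conn.connect().send("hi");     // the required order is the only one that compiles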
> I am not an expert on this, but the worst API I've seen is those with hidden states.
> e.g. a .toggle() API. Call it an odd number of times, it goes to one state; call it an even number of times, it goes back.
This is literally a dumb light switch. If you have trouble proving that, starting from lights off, flicking a simple switch twice will still keep lights off then, well, I have bad news to tell you about the feasibility of using the formal methods for anything more complex than a dumb light switch. Because the rest of the world is a very complex and stateful place.
> (which itself is a state machine of some kind)
Yes? That's pretty much the raison d'être of the formal methods: for anything pure and immutable, normal intuition is usually more than enough; it's tracking the paths through enormous configuration spaces that our intuition has problem with. If the formal methods can't help with that with comparable amount of effort, then they are just not worth it.
At that point you create an entirely new API, fully versioned, and backwardly compatible (if you want it to be). The point the article is making is that AI, in theory, entirely removes the person from the coding process so there's no longer any need to maintain software. You can just make the part you're changing from scratch every time because the cost of writing bug-free code (effectively) goes to zero.
The theory is entirely correct. If a machine can write provably perfect code there is absolutely no reason to have people write code. The problem is that the 'If' is so big it can be seen from space.
Isn’t this where the Eiffel design by contract people speak up about code reuse?
100% of the state changes in business software are unknowable on a long horizon, and rely on thoroughly understanding business logic that is often fuzzy, not discrete and certain.
Formal verification does not guarantee that business logic works as everybody expected, nor that it's future-proof. However, it does provide a workable path towards:
Things can only happen if you allow them to happen.
In other words, your software may come to a stage where it's no longer applicable, but it never crashes.
Formal verification has had little adoption only because it costs 23x your original code, with "PhD-level training".
The reason it doesn't work is businesses change faster than you can model every detail AND keep it all up to date. Unless you have something tying your model directly to every business decision and transaction that happens, your model will never be accurate. And if we're talking about formal verification, that makes it useless.
I think formal verification shines in areas where implementation is much more complex than the spec, like when you’re writing incomprehensible bit-level optimizations in a cryptography implementation or compiler optimization phases. I’m not sure that most of us, day-to-day, write code (or have AI write code) that would benefit from formal verification, since to me it seems like high-level programming languages are already close to a specification language. I’m not sure how much easier to read a specification format that didn’t concern itself with implementation could be, especially when we currently use all kinds of frameworks and libraries that already abstract away implementation details.
Sure, formal verification might give stronger guarantees about various levels of the stack, but I don’t think most of us care about having such strong guarantees now and I don’t think AI really introduces a need for new guarantees at that level.
> to me it seems like high-level programming languages are already close to a specification language
They are not. The power of rich and succinct specification languages (like TLA+) comes from the ability to succinctly express things that cannot be efficiently computed, or at all. That is because a description of what a program does is necessarily at a higher level of abstraction than the program (i.e. there are many possible programs or even magical oracles that can do what a program does).
To give a contrived example, let's say you want to state that a particular computation terminates. To do it in a clear and concise manner, you want to express the property of termination (and prove that the computation satisfies it), but that property is not, itself, computable. There are some ways around it, but as a rule, a specification language is more convenient when it can describe things that cannot be executed.
TLA+ is not a silver bullet, and like all temporal logic, has constraints.
You really have to be able to reduce your models to: “at some point in the future, this will happen," or "it will always be true from now on”
Have probabilistic outcomes? Or even floats [0]? It becomes challenging, and strings are a mess.
> Note there is not a float type. Floats have complex semantics that are extremely hard to represent. Usually you can abstract them out, but if you absolutely need floats then TLA+ is the wrong tool for the job.
TLA+ works for the problems it is suitable for, try and extend past that and it simply fails.
> You really have to be able to reduce your models to: “at some point in the future, this will happen," or "it will always be true from now on”
You really don't. It's not LTL. Abstraction/refinement relations are at the core of TLA.
> Or even floats [0] and it becomes challenging and strings are a mess.
No problem with floats or strings as far as specification goes. The particular verification tools you choose to run on your TLA+ spec may or may not have limitations in these areas, though.
> TLA+ works for the problems it is suitable for, try and extend past that and it simply fails.
TLA+ can specify anything that could be specified in mathematics. That there is no predefined set of floats is no more a problem than the one physicists face because mathematics has no "built-in" concept for metal or temperature. TLA+ doesn't even have any built in notions of procedures, memory, instructions, threads, IO, variables in the programming sense, or, indeed programs. It is a mathematical framework for describing the behaviour of discrete or hybrid continuous-discrete dynamical systems, just as ODEs describe continuous dynamical systems.
But you're talking about the verification tools you can run on a TLA+ spec, and like all verification tools, they have their limitations. I never claimed otherwise.
You are, however, absolutely right that it's difficult to specify probabilistic properties in TLA+.
> No problem with floats or strings as far as specification goes. The particular verification tools you choose to run on your TLA+ spec may or may not have limitations in these areas, though.
I think it's disingenuous to say that TLA+ verifiers "may or may not have limitations" wrt floats when none of the available tools support floats. People should know going in that they won't be able to verify specs with floats!
I'm not sure how a "spec with floats" differs from a spec with networks, RAM, 64-bit integers, multi-level cache, or any computing concept, none of which exists as a primitive in mathematics. A floating point number is a pair of integers, or sometimes we think about it as a real number plus some error, and TLAPS can check theorems about specifications that describe floating-point operations.
Of course, things can become more involved if you want to account for overflow, but overflow can get complicated even with integers.
Those things, unlike floats, have approximable-enough facsimiles that you can verify instead. No tools support even fixed point decimals.
This has burned me before when I e.g needed to take the mean of a sequence.
You say no tools but you can "verify floats" with TLAPS. I don't think that RAM or 64-bit integers have facsimiles in TLA+. They can be described mathematically in TLA+ to whatever level of detail you're interested in (e.g. you have to be pretty detailed when describing RAM when specifying a GC, and even more when specifying a CPU's memory-access subsystem), but so can floating point numbers. The least detailed description - say, RAM is just data - is not all that different from representing floats as reals (but that also requires TLAPS for verification).
The complications in describing machine-representable numbers also apply to integers, but these complications can be important, and the level of detail matters just as it matters when representing RAM or any other computing concept. Unlike, say, strings, there is no single "natural" mathematical representation of floating point numbers, just as there isn't one for software integers (integers work differently in C, Java, JS, and Zig; in some situations you may wish to ignore these differences, in others - not). You may want to think about floating point numbers as a real + error, or you may want to think about them as a mantissa-exponent pair, perhaps with overflow or perhaps without. The "right" representation of a floating point number highly depends on the properties you wish to examine, just like any other computing construct. These complications are essential, and they exist, pretty much in the same form, in other languages for formal mathematics.
P.S. for the case of computing a mean, I would use Real rather than try to model floating point if the idiosyncrasies of a particular FP implementation were important. That means you can't use TLC. In some situations it could suffice to represent the mean as any number (even an integer) that is ≥ min and ≤ max, but TLC isn't very effective even for algorithms involving non-tiny sets of integers when there's "interesting" arithmetic involved.
I don't know the state of contemporary model checkers that work with theories of reals and/or FP, and I'm sure you're much more familar with that than me, but I believe that when it comes to numeric computation, deductive proofs or "sampling tests" (such as property-based testing) are still more common than model-checking. It could be interesting to add a random sampling mode to TLC that could simulate many operations on reals using BigDecimal internally.
> TLA+ can specify anything that could be specified in mathematics.
You are talking about the logic of TLA+, that is, its mathematical definition. No tool for TLA+ can handle all of mathematics at the moment. The language was designed for specifying systems, not all of mathematics.
There are excellent probabilistic extensions to temporal logic out there that are very useful to uncover subtle performance bugs in protocol specifications, see e.g. what PRISM [1] and Storm [2] implement. That is not within the scope of TLA+.
Formal methods are really broad, ranging from lightweight type systems to theorem proving. Some techniques are fantastic for one type of problem but fail at others. This is quite natural, the same thing happens with different programming paradigms.
For example, what is adequate for a hard real-time system (timed automata) is useless for a typical CRUD application.
I really do wish that PRISM can one day add some quality of life features like "strings" and "functions"
(Then again, AIUI it's basically a thin wrapper over stochastic matrices, so maybe that's asking too much...)
> TLA+ is not a silver bullet, and like all temporal logic, has constraints.
>
> You really have to be able to reduce your models to: “at some point in the future, this will happen," or "it will always be true from now on”
I think people get confused by the word "temporal" in the name of TLA+. Yes, it has temporal operators. If you throw them away, TLA+ (minus the temporal operators) would be still extremely useful for specifying the behavior of concurrent and distributed systems. I have been using TLA+ for writing specifications of distributed algorithms (e.g., distributed consensus) and checking them for about 6 years now. The question of liveness comes the last, and even then, the standard temporal logics are barely suitable for expressing liveness under partial synchrony. The value of temporal properties in TLA+ is overrated.
The "temporal" in TLA+ isn't about □ and ⬦. It's about ' and the abstraction-refinment relation with stuttering at its core (contrasted with o and its non-stuttering meaning). Of course, you can't really specify anything in TLA+ without □ (unless you rely on TLC, which inserts the □ for you).
You cannot specify much in TLA+ without ' and □, andtThe "temporal" part of TLA+ - i.e. the TLA logic - is essential; but saying it's like "all temporal logics" is ignoring the abstraction-refinement relation, which is the heart of TLA+ (that's what ⇒, basic implication, in TLA+ means) and other temporal logics miss.
Of course, you could hypothetically use the + part of TLA+, the formalised set theory, to specify everything, but that would be very inconvenient.
What you said certainly works, but I'm not sure computability is actually the biggest issue here?
Have a look at how SAT solvers or Mixed Integer Linear Programming solvers are used.
There you specify a clear goal (with your code), and then you let the solvers run. You can, but you don't need to, let the solvers run all the way to optimality. And the solvers are also allowed to use all kinds of heuristics to find their answers, but that doesn't impact the statement of your objective.
Compare that to how many people write code without solvers: the objective of what your code is trying to achieve is seldom clearly spelled out, and is instead mixed up with the how-to-compute bits, including all the compromises and heuristics you make to get a reasonable runtime or to accommodate some changes in the spec your boss asked for at the last minute.
Using a solver ain't formal verification, but it shows the same separation between spec and implementation.
Another benefit of formal verification, that you already imply: your formal verification doesn't have to determine the behaviour of your software, and you can have multiple specs simultaneously. But you can only have a single implementation active at a time (even if you use a high level implementation language.)
So you can add 'handling a user request must terminate in finite time' as a (partial) spec. It's an important property, but it tells you almost nothing about the required business logic. In addition you can add "users shouldn't be able to withdraw more than they deposited" (and other more complicated rules), and you only have to review these rules once, and don't have to touch them again, even when you implement a clever new money transfer routine.
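To make the second example concrete, a hedged sketch (TypeScript, invented names) of such a partial spec as a predicate over an account history, stated once and independently of how transfers are implemented:
type Op = { kind: "deposit" | "withdraw"; amount: number };
// Partial spec: at no point may more be withdrawn than was deposited.
function noOverdraft(history: Op[]): boolean {
  let balance = 0;
  for (const op of history) {
    balance += op.kind === "deposit" ? op.amount : -op.amount;
    if (balance < 0) return false; // violated, whatever the implementation did
  }
  return true;
}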
Peter Norvig once proposed to consider a really large grammar, with a trillion rules, which could simulate some practically small applications of more complex systems. Many programs in practice don't need to be written in Turing-complete languages, and can be proven to terminate.
Writing in a language that guarantees termination is not very interesting in itself, as every existing program could automatically be translated into a non-Turing-complete language where the program is proven to terminate, yet behaves exactly the same: the language is the same as the original, only loops/recursion end the program after, say, 2^64 iterations. This, in itself, does not make programs any easier to analyse. In fact, a language that only has boolean variables, no arrays, no recursion, and loops of depth 2 at most is already intractable to verify. It is true that programs in Turing-complete languages cannot generally be verified efficiently, but most non-Turing-complete languages also have this property.
Usually, when we're interested in termination proofs, what we're really interested in is a proof that the algorithm makes constant progress that converges on a solution.
I think that kind of progress can generally be achieved for many programs, which take input, produce output, and then terminate. For servers, which wait for requests, the situation seems to be different.
This sounds very interesting. Do you have a reference?
Saw something like that once, couldn't find recently, sorry. Ask Peter?..
Can TLA+ prove anything about something you specify but don't execute?
TLA+ is just a language for writing specifications (syntax + semantics). If you want to prove anything about it, at various degrees of confidence and effort, there are three tools:
- TLAPS is the interactive proof system that can automate some proof steps by delegating to SMT solvers: https://proofs.tlapl.us/doc/web/content/Home.html
- Apalache is the symbolic model checker that delegates verification to Z3. It can prove properties without executing anything, or rather, executing specs symbolically. For instance, it can do proofs via inductive invariants but only for bounded data structures and unbounded integers. https://apalache-mc.org/
- Finally, TLC is an enumerative model checker and simulator. It simply produces states and enumerates them. So it terminates only if the specification produces a finite number of states. It may sound like executing your specification, but it is a bit smarter, e.g., when checking invariants it will never visit the same state twice. This gives TLC the ability to reason about infinite executions. Confusingly, TLC does not have its own page, as it was the first working tool for TLA+. Many people believe that TLA+ is TLC: https://github.com/tlaplus/tlaplus
> It may sound like executing your specification, but it is a bit smarter,
It's more than just "a bit smarter" I would say, and explicit state enumeration is nothing at all like executing a spec/program. For example, TLC will check in virtually zero time a spec that describes a nondeterministic choice of a single variable x being either 0 or 1 at every step (as there are only two states). The important aspect here isn't that each execution is of infinite length, but that there are an uncountable infinity of behaviours (executions) here. This is a completely different concept from execution, and it is more similar to abstract interpretation (where the meaning of a step isn't the next state but the set of all possible next states) than to concrete interpretation.
You can write proofs in TLA+ about things you don't execute and have them checked by the TLA+ proof assistant. But the most common aspect of that, which pretty much every TLA+ spec contains, is nondeterminism, which is basically the ability to describe a system with details you don't know or care about. For example, you can describe "a program that sorts an array" without specifying how and then prove, say, that the median value ends up in the middle. The ability to specify what a program or a subroutine does without specifying how is what separates the expressive power of specification from programming. This extends not only to the program itself but to its environment. For example, it's very common in TLA+ to specify a network that can drop or reorder messages nondeterministically, and then prove that the system doesn't lose data despite that.
> To give a contrived example, let's say you want to state that a particular computation terminates. To do it in a clear and concise manner, you want to express the property of termination (and prove that the computation satisfies it), but that property is not, itself, computable. There are some ways around it, but as a rule, a specification language is more convenient when it can describe things that cannot be executed.
Do you really think it is going to be easier for the average developer to write a specification for their program that does not terminate
vs
Giving them a framework or a language that does not have a for loop?
Edit: If by formal verification you mean type checking, then I very much agree.
Maybe it's difficult for the average developer to write a formal specification, but the point of the article is that an AI can do it for them.
Yes. I feel like people who are trying to push software verification have never worked on typical real-world software projects where the spec is like 100 pages long and still doesn't fully cover all the requirements and you still have to read between the lines and then requirements keep changing mid-way through the project... Implementing software to meet the spec takes a very long time and then you have to invest a lot of effort and deep thought to ensure that what you've produced fits within the spec so that the stakeholder will be satisfied. You need to be a mind-reader.
It's hard even for a human who understands the full business, social and political context to disambiguate the meaning and intent of the spec; to try to express it mathematically would be an absolute nightmare... and extremely unwise. You would literally need some kind of super intelligence... And the amount of stream-of-thought tokens which would have to be generated to arrive at a correct, consistent, unambiguous formal spec is probably going to cost more than just hiring top software engineers to build the thing with 100% test coverage of all main cases and edge cases.
Worst part is; after you do all the expensive work of formal verification; you end up proving the 'correctness' of a solution that the client doesn't want.
The refactoring required will invalidate the entire proof from the beginning. We haven't even figured out the optimal way to formally architect software that is resilient to requirement changes; in fact, the industry is REALLY BAD at this. Almost nobody is even thinking about it. I am, but I sometimes feel like I may be the only person in the world who cares about designing optimal architectures to minimize line count and refactoring diff size. We'd have to solve this problem first before we even think about formal verification of 'most software'.
Without a hypothetical super-intelligence which understands everything about the world; the risk of misinterpreting any given 'typical' requirement is almost 100%... And once we have such super-intelligence, we won't need formal verification because the super-intelligence will be able to code perfectly on the first attempt; no need to verify.
And then there's the fact that most software can tolerate bugs... If operationally important big tech software which literally has millions of concurrent users can tolerate bugs, then most software can tolerate bugs.
Software verification has gotten some use for smart contracts. The code is fairly simple, it's certain to be attacked by sophisticated hackers who know the source, and the consequence of failure is theft of funds, possibly in large amounts. 100% test coverage is no guarantee that an attack can't be found.
People spend gobs of money on human security auditors who don't necessarily catch everything either, so verification easily fits in the budget. And once deployed, the code can't be changed.
Verification has also been used in embedded safety-critical code.
If the requirements you have to satisfy arise out of a fixed, deterministic contract (as opposed to a human being), I can see how that's possible in this case.
I think the root problem may be that most software has to adapt to a constantly changing reality. There aren't many businesses which can stay afloat without ever changing anything.
The whole perspective of this argument is hard for me to grasp. I don't think anyone is suggesting that formal specs are an alternative to code, they are just an alternative to informal specs. And actually with AI the new spin is that they aren't even a mutually exclusive alternative.
A bidirectional bridge that spans multiple representations from informal spec to semiformal spec to code seems ideal. You change the most relevant layer that you're interested in and then see updates propagating semi-automatically to other layers. I'd say the jury is out on whether this uses extra tokens or saves them, but a few things we do know. Chain of code works better than chain of thought, and chain-of-spec seems like a simple generalization. Markdown-based planning and task-tracking agent workflows work better than just YOLOing one-shot changes everywhere, and so intermediate representations are useful.
It seems to me that you can't actually get rid of specs, right? So to shoot down the idea of productive cooperation between formal methods and LLM-style AI, one really must successfully argue that informal specs are inherently better than formal ones. Or even stronger: having only informal specs is better than having informal+formal.
> A bidirectional bridge that spans multiple representations from informal spec
Amusingly, what I'm hearing is literally "I have a bridge to sell you."
There's always a bridge, dude. The only question is whether you want to buy one that's described as "a pretty good one, not too old, sold as is" or if you'd maybe prefer "spans X, holds Y, money back guarantee".
I get it. Sometimes complexity is justified. I just don't feel this particular bridge is justified for 'mainstream software' which is what the article is about.
I agree that trying to produce this sort of spec for the entire project is probably a fool's errand, but I still see the value for critical components of the system. Formally verifying the correctness of balance calculation from a ledger, or that database writes are always persisted to the write ahead log, for example.
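For what it's worth, even short of a proof assistant you can state the ledger property as an executable check. Below is a minimal sketch in Python with Hypothesis; the `Entry` type and `balance` function are made up, and this is a property-based test rather than a proof (a tool like Dafny or Isabelle would establish the same property for all inputs instead of sampling them):
```
# Sketch only: a property-based test for a hypothetical ledger, not a proof.
# Hypothesis generates many entry lists and checks the property on each.
from dataclasses import dataclass
from hypothesis import given, strategies as st

@dataclass
class Entry:
    amount: int  # signed amount in cents: credits positive, debits negative

def balance(entries):
    # Hypothetical implementation under test.
    return sum(e.amount for e in entries)

@given(st.lists(st.integers(min_value=-10_000, max_value=10_000)))
def test_balance_properties(amounts):
    entries = [Entry(a) for a in amounts]
    # Balance equals the sum of all entries...
    assert balance(entries) == sum(amounts)
    # ...and posting one more entry moves the balance by exactly that amount.
    assert balance(entries + [Entry(42)]) == balance(entries) + 42

if __name__ == "__main__":
    test_balance_properties()  # Hypothesis runs the property when called directly
```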
I used to work adjacent to a team who worked from closely-defined specs for web sites, and it used to infuriate the living hell out of me. The specs had all sorts of horrible UI choices and bugs and stuff that just plain wouldn't work when coded. I tried my best to get them to implement the intent of the spec, not the actual spec, but they had been trained in one method only and would not deviate at any cost.
Yeah, IMO, the spec almost always needs refinement. I've worked for some companies where they tried to write specs with precision down to every word, but if the spec was too detailed, it usually had to be adjusted later once it started to conflict with reality (efficiency, costs, security/access restrictions, resource limits, AI limitations)... If it wasn't detailed enough, then we had to read between the lines and fill in a lot of gaps... And we usually had to iterate with the stakeholder to get it right.
At most other companies, it's like the stakeholder doesn't even know what they want until they start seeing things on a screen... Trying to write a formal spec when literally nobody in the universe even knows what is required: that's physically impossible.
In my view, 'Correct code' means code that does what the client needs it to do. This is downstream from it doing what the client thinks they want; which is itself downstream from it doing what the client asked for. Reminds me of this meme: https://www.reddit.com/r/funny/comments/105v2h/what_the_cust...
Software engineers don't get nearly enough credit for how difficult their job is.
How do you or the client know that the software is doing what they want?
What formal verification system did they use? Did they even execute it?
There are many really important properties to enforce even on the most basic CRUD system. You can easily say things like "an anonymous user must never edit any data, except for the create account form", or "every user authorized to see a page must be listed on the admin page that lists what users can see a page".
People don't verify those because it's hard, not for lack of value.
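Even a crude mechanical check beats nothing here. A rough sketch (every name below is hypothetical) of checking the "anonymous users can't edit anything except the create-account form" rule against an application's route table, as a property check rather than a proof:
```
# Sketch: enforce an access-control property against a hypothetical route table.
# A real version would derive ROUTES from the web framework instead of a list.
MUTATING = {"POST", "PUT", "PATCH", "DELETE"}
ANONYMOUS_EXCEPTIONS = {("POST", "/accounts")}  # the create-account form

ROUTES = [
    # (method, path, requires_auth)
    ("POST", "/accounts", False),
    ("GET",  "/articles/<id>", False),   # anonymous reads are fine
    ("PUT",  "/articles/<id>", True),
    ("POST", "/comments", True),
]

def violations(routes):
    """Every mutating route reachable without authentication (minus exceptions)."""
    return [
        (method, path)
        for method, path, requires_auth in routes
        if method in MUTATING
        and not requires_auth
        and (method, path) not in ANONYMOUS_EXCEPTIONS
    ]

assert violations(ROUTES) == [], violations(ROUTES)
```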
Yes, in fact there is research on type systems to ensure information flow control, avoiding unauthorized data access by construction.
Concrete Semantics [1] has a little example in §9.2.
Yeah fair enough. I can definitely see the value of property-based verification like this and agree that useful properties could be easy to express and that LLMs could feasibly verify them. I think full verification that an implementation implements an entire spec and nothing else seems much less practical even with AI, but of course that is just one flavor of verification.
Even
> "an anonymous user must never edit any data, except for the create account form"
Can quickly end up being
> "an anonymous user must never edit any data, except for the create account form, and the feedback form"
And a week later go to
> "an anonymous user must never edit any data, except for the create account form, the feedback form, and the error submission form if they end up with a specific type of error"
And then during Christmas
> "an anonymous user must never edit any data, except for the create account form, the feedback form, and the error submission form if they end up with a specific type of error, and the order submission form if they visit it from this magic link. Those visiting from the magic link should not be able to use the feedback form (Marge had a bad experience last Christmas going through feedback from the promotional campaign)"
It is still a small rule, with plenty of value. It's nowhere near the size of the access control for the entire site. And it's also not written down by construction.
Its changing over time doesn't change any of that.
Yes, except their cookie preferences to comply with European law. Oh, and they should be able to change their theme from light/dark but only that. Oh and maybe this other thing. Except in situations where it would conflict with current sales promotions. Unless they're referred by a reseller partner. Unless it's during a demo, of course. etc, etc, etc.
This is the sort of reality that a lot of developers in the business world deal with.
Compare the spec with the application here: https://concerningquality.com/model-based-testing/
I think we've become used to the complexity in typical web applications, but there's a difference between familiar and simple (simple vs. easy, as it were). The behavior of most business software can be very simply expressed using simple data structures (sets, lists, maps) and simple logic.
No matter how much we simplify it, via frameworks and libraries or what have you, things like serialization, persistence, asynchrony, concurrency, and performance end up complicating the implementation. Comparing this against a simpler spec is quite nice in practice - and a huge benefit is that you can now consult a simple in-memory spec instead of worrying about distributed system deployments.
> especially when we currently use all kinds of frameworks and libraries that already abstract away implementation details.
This is my issue with algorithm-driven interviewing. Even the creator of Homebrew got denied by Google because he couldn't invert a binary tree or whatever it was. He made a tool used by millions of developers, but apparently that's not good enough.
Google denies qualified people all the time. They would much rather reject a great hire than take a risk on accepting a mediocre one. I feel for him but it's just the nature of the beast. Not everyone will get in.
This language sounds like chauvinism leading to closed-mindedness and efficiency. Of course there are tradeoffs to chauvinism, as Googlers possess the mind to notice. But a Googler does not need to worry about saying ambiguous truths without understanding their emotions to the masses, for they have Google behind them. With the might of the G stick, they can hammer out words with confidence.
I've heard this before. Why do you think algorithm questions are effective for finding "good" hires? Are they?
The intent isn't to find good hires per se, but to whittle down the list of applicants to a manageable number in a way that doesn't invite discrimination lawsuits.
Same as why companies in the past used to reject anyone without a degree. But then everyone got a degree, leaving it to no longer be an effective filter, hence things like algorithm tests showing up to fill the void.
Once you've narrowed the list, then you can worry about figuring out who is "good" through giving the remaining individuals additional attention.
> Same as why companies in the past used to reject anyone without a degree.
They still do, and it's a shame; some of the smartest, most capable developers I know have no degree.
They certainly don't filter out toxic people who make others leave companies because they poison the well.
I have a suspicion that "good candidate" is being gerrymandered. What might have been "good" in 1990 might have become irrelevant in 2000+ or perhaps detrimental. I say that as someone who is actually good at algorithm questions himself. I think GP, as well as other Google defenders, are parroting pseudo-science.
I agree. But also, if it works to get you jobs there, why wouldn't you defend it? I mean, I might be inclined to do so as well; it guarantees me a place even if I lack soft skills for the role.
AWS has said that having formal verification of code lets them be more aggressive in optimization while being confident it still adheres to the spec. They claim they were able to double the speed of IAM API auth code this way.
I'm convinced now that the key to getting useful results out of coding agents (Claude Code, Codex CLI etc) is having good mechanisms in place to help those agents exercise and validate the code they are writing.
At the most basic level this means making sure they can run commands to execute the code - easiest with languages like Python; with HTML+JavaScript you need to remind them that Playwright exists and that they should use it.
The next step up from that is a good automated test suite.
Then we get into quality of code/life improvement tools - automatic code formatters, linters, fuzzing tools etc.
Debuggers are good too. These tend to be less coding-agent friendly due to them often having directly interactive interfaces, but agents can increasingly use them - and there are other options that are a better fit as well.
I'd put formal verification tools like the ones mentioned by Martin on this spectrum too. They're potentially a fantastic unlock for agents - they're effectively just niche programming languages, and models are really good at even niche languages these days.
If you're not finding any value in coding agents but you've also not invested in execution and automated testing environment features, that's probably why.
I very much agree, and believe using languages with powerful type systems could be a big step in this direction. Most people's first experience with Haskell is "wow this is hard to write a program in, but when I do get it to compile, it works". If this works for human developers, it should also work for LLMs (especially if the human doesn't have to worry about the 'hard to write a program' part).
> The next step up from that is a good automated test suite.
And if we're going for a powerful type system, then we can really leverage the power of property tests which are currently grossly underused. Property tests are a perfect match for LLMs because they allow the human to create a small number of tests that cover a very wide surface of possible errors.
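To make the "wide surface from a few tests" point concrete (in Python with Hypothesis rather than Haskell/QuickCheck, and with a made-up encode/decode pair), a single round-trip property exercises every input the generator can produce, including edge cases nobody would list by hand:
```
# Sketch: one round-trip property replaces dozens of hand-written cases.
import json
from hypothesis import given, strategies as st

def encode(record):
    return json.dumps(record)

def decode(blob):
    return json.loads(blob)

# Arbitrary JSON-shaped values: scalars, plus nested lists and string-keyed dicts.
json_values = st.recursive(
    st.none() | st.booleans() | st.integers() | st.text(),
    lambda children: st.lists(children) | st.dictionaries(st.text(), children),
    max_leaves=10,
)

@given(st.dictionaries(st.text(), json_values))
def test_round_trip(record):
    assert decode(encode(record)) == record

if __name__ == "__main__":
    test_round_trip()  # Hypothesis runs the property when called directly
```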
The "thinking in types" approach to software development in Haskell allows the human user to keep at a level of abstraction that still allows them to reason about critical parts of the program while not having to worry about the more tedious implementation parts.
Given how much interest there has been in using LLMs to improve Lean code for formal proofs in the math community, maybe there's a world where we make use of an even more powerful type system than Haskell's. If LLMs with the right language can help prove complex mathematical theorems, then it should certainly be possible to write better software with them.
That's my opinion as well. Some functional language, that can also offer access to imperative features when needed, plus an expressive type system might be the future.
My bet is on refinement types. Dafny fits that bill quite well, it's simple, it offers refinement types, and verification is automated with SAT/SMT.
In fact, there are already serious industrial efforts to generate Dafny using LLMs.
Besides, some of the largest verification efforts have been achieved with this language [1].
[1] https://www.andrew.cmu.edu/user/bparno/papers/ironfleet.pdf
This is why I use Go as much as reasonably possible with vibe coding: types, plus great quality-checking ecosystem, plus adequate training data, plus great distribution story. Even when something has stuff like JS and Python SDKs, I tend to skip them and go straight to the API with Go.
I love ML types, but I've concluded they serve humans more than they do agents. I'm sure it affects the agent, maybe just not as much as other choices.
I've noticed real advantages of functional languages to agents, for disposable code. Which is great, cos we can leverage those without dictating the human's experience.
I think the correct way forward is to choose whatever language the humans on your team agree is most useful. For my personal projects, that means a beautiful language for the bits I'll be touching, and whatever gets the job done elsewhere.
Ada when?
It even lets you separate implementation from specification.
Even going beyond Ada into dependently typed languages like (quoth wiki) "Agda, ATS, Rocq (previously known as Coq), F*, Epigram, Idris, and Lean"
I think there are some interesting things going on if you can really tightly lock down the syntax to some simple subset with extremely straightforward, powerful, and expressive typing mechanisms.
Isn't it funny how that's exactly the kind of stuff that helps a human developer be successful and productive, too?
Or, to put it the other way round, what kind of tech leads would we be if we told our junior engineers "Well, here's the codebase, that's all I'll give you. No debuggers, linters, or test runners for you. Using a browser on your frontend implementation? Nice try buddy! Now good luck getting those requirements implemented!"
> Isn't it funny how that's exactly the kind of stuff that helps a human developer be successful and productive, too?
I think it's more nuanced than that. As a human, I can manually test code in ways an AI still can't. Sure, maybe it's better to have automated test suites, but I have other options too.
AI can do that too? If you have a web app it can use playwright to test functionality and take screenshots to see if it looks right.
Yeah, but it doesn't work nearly as well. The AI frequently misinterprets what it sees. And it isn't as good at actually using the website (or app, or piece of hardware, etc) as a human would.
I've been using Claude to implement an ISO specification, and I have to keep telling it that we're not interested in whether the repl is correct, but that the test suite is ensuring the implementation correctly follows the spec. When we're tracking down why a test is failing, though, it'll go to town using the repl to narrow down which code path is causing the issue. The only reason there even is a repl at this point is so it can do its 'spray and pray' debugging outside the code; Claude constantly tried to use one to debug issues, so I gave in and had it write a pretty basic one.
Horses for courses, I suppose. Back in the day, when I wanted to play with some C(++) library, I'd quite often write a Python C-API extension so I could do the same thing using Python's repl.
But then the AI would theoretically have to write the playwright code. How does it verify it's getting the right page to begin with?
The recent models are pretty great at this. They read the source code for e.g. a Python web application and use that to derive what the URLs should be. Then they fire up a localhost development server and write Playwright scripts to interact with those pages at the predicted URLs.
The vision models (Claude Opus 4.5, Gemini 3 Pro, GPT-5.2) can even take screenshots via Playwright and then "look at them" with their vision capabilities.
It's a lot of fun to watch. You can tell them to run Playwright not in headless mode at which point a Chrome window will pop up on your computer and you can see them interact with the site via it.
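The scripts they write look roughly like this (a sketch; the URLs, selectors, and file names are invented): drive the local dev server with Playwright, headful if you want to watch, and save a screenshot that a vision model can then inspect.
```
# Sketch of an agent-written Playwright check against a local dev server.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)   # headless=True for CI
    page = browser.new_page()
    page.goto("http://localhost:8000/login")      # URL predicted from the app's routes
    page.fill("#username", "test-user")
    page.fill("#password", "test-password")
    page.click("button[type=submit]")
    page.wait_for_url("**/dashboard")
    page.screenshot(path="dashboard.png")         # image for the model to "look at"
    browser.close()
```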
Claude Code was a big jump for me. Another large-ish jump was multi-agents and following the tips from Anthropic’s long running harnesses post.
I don’t go into Claude without everything already setup. Codex helps me curate the plan, and curate the issue tracker (one instance). Claude gets a command to fire up into context, grab an issue - implements it, and then Codex and Gemini review independently.
I’ve instructed Claude to go back and forth for as many rounds as it takes. Then I close the session (\new) and do it again. These are all the latest frontier models.
This is incredibly expensive, but it’s also the most reliable method I’ve found to get high-quality progress — I suspect it has something to do with ameliorating self-bias, and improving the diversity of viewpoints on the code.
I suspect rigorous static tooling is yet another layer to improve the distribution over program changes, but I do think that there is a big gap in folk knowledge already between “vanilla agents” and something fancy with just raw agents, and I’m not sure if just the addition of more rigorous static tooling (beyond the compiler) closes it.
How expensive is incredibly expensive?
If you're maxing out the plans across the platforms, that's 600 bucks -- but if you think about your usage and optimize, I'm guessing somewhere between 200-600 dollars per month.
It's pretty easy to hit a couple hundred dollars a day filling up Opus's context window with files. This is via Anthropic API and Zed.
Going full speed ahead building a Rails app from scratch it seemed like I was spending $50/hour, but it was worth it because the App was finished in a weekend instead of weeks.
I can't bear to go in circles with Sonnet when Opus can just one shot it.
The $200/month Max plan has limits, but a couple of those seems way cheaper than $50/hr for the ~172 working hours in a month.
Anthropic via Azure sent me an invoice of around $8,000 for 3-5 days of Opus 4.1 usage, and there is no way to track how many tokens were used during those days, how many were cached, etc. (And I thought it was part of the Azure sponsorship, but that's another story.)
I think the main limitation is not code validation but assumption verification. When you ask an LLM to write some code based on a few descriptive lines of text, it is, by necessity, making a ton of assumptions. Oddly, none of the LLMs I've seen ask for clarification when multiple assumptions might all be likely. Moreover, from the behavior I've seen, they don't really backtrack to select a new assumption based on further input (I might be wrong here, it's just a feeling).
What you don't specify, it must assume. And therein lies a huge landscape of possibilities. And since the AI can't read your mind (yet), its assumptions will probably not precisely match your assumptions unless the task is very limited in scope.
> Oddly, none of the LLMs I've seen ask for clarification when multiple assumptions might all be likely.
It's not odd, they've just been trained to give helpful answers straight away.
If you tell them not to make assumptions and to rather first ask you all their questions together with the assumptions they would make because you want to confirm before they write the code, they'll do that too. I do that all the time, and I'll get a list of like 12 things to confirm/change.
That's the great thing about LLMs -- if you want them to change their behavior, all you need to do is ask.
OK, but if the verification loop really makes the agents MUCH more useful, then this usefulness difference can be used as a training signal to improve the agents themselves. So the current capability levels are certainly not going to remain for very long (which is also what I expect, but I would like to point out that it's also supported by this).
That's a strong RL technique that could equal the quality of RLHF.
Where would they get training data?
Source code generation is possible due to a large training set and the effort put into reinforcing better outcomes.
I suspect debugging is not that straightforward to LLM'ize.
It's a non-sequential interaction - when something happens, it hasn't necessarily caused the problem, and the timeline may be shuffled. An LLM would need tons of examples where something happens in a debugger or in logs, and it would have to associate that with another abstraction.
I was debugging something in gdb recently and it was a pretty challenging bug. Out of interest I tried ChatGPT, and it was hopeless - try this, add this print, etc. That's not how you debug multi-threaded and async code. When I found the root cause, I analyzed how I did it and where I had learned that specific combination of techniques, each individually well documented but never in combination - it came from other people and my own experience.
How long ago was this? I've had outstandingly impressive results asking Copilot Chat with Sonnet 4.5 or ChatGPT to debug difficult multithreaded C++.
Few months back with ChatGPT 5. Multi-threaded Rust & custom async runtime, data integrity bug, reproduced every ~5th run.
Have you tried running gdb from a Claude Code or Codex CLI session?
No, I'm in academia and the goal is not code or a product launch. I find the research process suffers a lot once someone else solves a problem instead of you.
I understand that AI can help with writing, coding, analyzing code bases and summarizing other papers, but going through these myself makes a difference, at least for me. I tried ChatGPT 3.5 when I started and while I got a pile of work done, I had to throw it away at some point because I didn't fully understand it. AI could explain to me various parts, but it's different when you create it.
For interactive programs like this, I use tmux and mention "send-keys" and "capture-pane", and the agent is able to use them to drive an interactive program. My demo/PoC for this is making the agent play 20 questions with another agent via tmux.
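Roughly, the calls involved are just tmux subcommands; here's a minimal Python sketch of the same trick (the session name and the program being driven are arbitrary):
```
# Sketch: drive an interactive program in tmux, the way an agent can.
import subprocess, time

SESSION = "agent-demo"

def send(keys):
    subprocess.run(["tmux", "send-keys", "-t", SESSION, keys, "Enter"], check=True)

def read_screen():
    result = subprocess.run(["tmux", "capture-pane", "-t", SESSION, "-p"],
                            check=True, capture_output=True, text=True)
    return result.stdout

subprocess.run(["tmux", "new-session", "-d", "-s", SESSION, "python3"], check=True)
time.sleep(1)          # give the REPL a moment to start
send("2 + 2")
time.sleep(1)
print(read_screen())   # the agent reads this to see what happened
subprocess.run(["tmux", "kill-session", "-t", SESSION], check=True)
```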
> Where would they get training data?
They generated it, and had a compiler compile it, and then had it examine the output. Rinse, repeat.
LLMs are okay at bisecting programs and identifying bugs in my experience. Sometimes they require guidance but often enough I can describe the symptom and they identify the code causing the issue (and recommend a fix). They’re fairly methodical, and often ask me to run diagnostic code (or do it themselves).
> I suspect debugging is not that straightforward to LLM'ize.
Debugging is not easy but there should be a lot of training corpus for "bug fixing" from all the commits that have ever existed.
Debugging has been excellent for me with Opus 4.5 and Claude Code.
I've only done a tiny bit of agent-assisted coding, but without the ability to run tests the AI will really go off the rails super quick, and it's kinda hilarious to watch it say "Aha! I know what the problem is!" over and over as it tries different flavors until it gives up.
I might go further and suggest that the key to getting useful results out of HUMAN coding agents is also to have good mechanisms in place to help them exercise and validate the code.
We valued automated tests and linters and fuzzers and documentation before AI, and that's because it serves the same purpose.
> At the most basic level this means making sure they can run commands to execute the code - easiest with languages like Python, with HTML+JavaScript you need to remind them that Playwright exists and they should use it.
So I've been exploring the idea of going all-in on this "basic level" of validation. I'm assembling systems out of really small "services" (written in Go) that Claude Code can immediately run and interact with using curl, jq, etc. Plus when building a particular service I already have all of the downstream services (the dependencies) built and running so a lot of dependency management and integration challenges disappear. Only trying this out at a small scale as yet, but it's fascinating how the LLMs can potentially invert a lot of the economics that inform the current conventional wisdom.
(Shameless plug: I write about this here: https://twilightworld.ai/thoughts/atomic-programming/)
My intuition is that LLMs will for many use cases lead us away from things like formal verification and even comprehensive test suites. The cost of those activities is justified by the larger cost of fixing things in production; I suspect that we will eventually start using LLMs to drive down the cost of production fixes, to the point where a lot of those upstream investments stop making sense.
> My intuition is that LLMs will for many use cases lead us away from things like formal verification and even comprehensive test suites. The cost of those activities is justified by the larger cost of fixing things in production; I suspect that we will eventually start using LLMs to drive down the cost of production fixes, to the point where a lot of those upstream investments stop making sense.
There is still a cost to having bugs, even if deploying fixes becomes much cheaper. Especially if your plan is to wait until they actually occur in practice to discover that you have a bug in the first place.
Put differently: would you want the app responsible for your payroll to be developed in this manner? Especially considering that the bug in question would be "oops, you didn't get paid."
Claude Code and other AI coding tools must have a *mandatory* hook for verification.
For the front end, the verification is making sure that the UI looks as expected (can be verified by an image model) and that clicking certain buttons results in certain things (could be verified by a ChatGPT agent, but that's not public, I guess).
For the back end, it will involve firing API requests one by one and verifying the results.
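Concretely, that loop can be as simple as the following sketch (the endpoints, payloads, and expected values are all hypothetical): hit a test deployment with requests and assert on the responses, so the agent gets an unambiguous pass/fail signal.
```
# Sketch: back-end verification by firing requests and checking the responses.
import requests

BASE = "http://localhost:8080"   # hypothetical test deployment

def check(method, path, expected_status, json=None, expected_body=None):
    resp = requests.request(method, BASE + path, json=json, timeout=10)
    assert resp.status_code == expected_status, (method, path, resp.status_code)
    if expected_body is not None:
        assert resp.json() == expected_body, (method, path, resp.json())

check("POST", "/orders", 201, json={"sku": "ABC", "qty": 2})
check("GET", "/orders?sku=ABC", 200, expected_body=[{"sku": "ABC", "qty": 2}])
check("DELETE", "/orders/999", 404)   # deleting a non-existent order must fail
print("all back-end checks passed")
```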
To make this easier, we need to give Claude or whatever agent an environment in which to run these verifications, and this is the piece that is missing. Claude Code and Codex should now start shipping verification environments that make it easy to verify frontend and backend tasks, and I think Antigravity already helps a bit here.
------
The thing about backend verification is that it is different in different companies and requires a custom implementation that can't easily be shared across companies. Each company has its own way to deploy stuff.
Imagine a concrete task like creating a new service that reads from a data stream, runs transformations, puts it in another data stream where another new service consumes the transformed data and puts it into an AWS database like Aurora.
```
stream -> service (transforms) -> stream -> service -> Aurora
```
To one-shot this with Claude Code, it must know everything about the company:
- how does one consume streams in the company? Schema registry?
- how does one create a new service and register dependencies? how does one deploy it to test environment and production?
- how does one even create an Aurora DB? request approvals and IAM roles etc?
My question is: what would it take for Claude Code to one-shot this? At the code level it is not too hard, and it can fit in the context window easily, but the *main* problem is the fragmented processes in creating the infra and operations behind it, which are human-based now (and need not be!).
-----
My prediction is that companies will make something like a new "agent" environment where all these processes (that used to require a human) can be done by an agent without human intervention.
I'm thinking of other solutions here, but if anyone can figure it out, please tell!
Maybe in the short term, but that doesn't solve some fundamental problems. Consider NP problems: problems whose solutions can be easily verified. But that they can all be easily verified does not (as far as we know) mean they can all be easily solved. If we look at the P subset of NP, the problems that can be easily solved, then the easy verification is no longer their key feature. Rather, it is something else that distinguishes them from the harder problems in NP.
So let's say that, similarly, there are programming tasks that are easier and harder for agents to do well. If we know that a task is in the easy category, of course having tests is good, but since we already know that an agent does it well, the test isn't the crucial aspect. On the other hand, for a hard task, all the testing in the world may not be enough for the agent to succeed.
Longer term, I think it's more important to understand what's hard and what's easy for agents.
> At the most basic level this means making sure they can run commands to execute the code
Yeah, it's gonna be fun waiting for compilation cycles when those models "reason" with themselves about a semicolon. I guess we just need more compute...
One objection: all the "don't use --yolo" training in the world is useless if a sufficiently context-poisoned LLM starts putting malware in the codebase and getting to run it under the guise of "unit tests".
For now, this is mitigated by only including trusted content in the context; for instance, absolutely do not allow it to access general web content.
I suspect that as it becomes more economical to play with training your own models, people will get better at including obscured malicious content in data that will be used during training, which could cause the LLM to intrinsically carry a trigger/path that would cause malicious content to be output by the LLM under certain conditions.
And of course we have to worry about malicious content being added to sources that we trust, but that already exists - we as an industry typically pull in public repositories without a complete review of what we're pulling. We outsource the verification to the owners of the repository. Just as we currently have cases of malicious code sneaking into common libraries, we'll have malicious content targeted at LLMs.
Shameless plug: I'm working on an open source project, https://blocksai.dev/, to attempt to solve this. (And I just added a note for myself to add formal verification.)
Elevator pitch: "Blocks is a semantic linter for human-AI collaboration. Define your domain in YAML, let anyone (humans or AI) write code freely, then validate for drift. Update the code or update the spec, up to human or agent."
(you can add traditional linters to the process if you want but not necessary)
The gist is that you define a bunch of validators for a collection of modules you're building (with agentic coding), with a focus on qualifying semantic things:
- domain / business rules/measures
- branding
- data flow invariants — "user data never touches analytics without anonymization"
- accessibility
- anything you can think of
Then you just tell your agentic coder to use the cli tool before committing, so it keeps the code in line with your engineering/business/philosophical values.
(boring) example of it detecting if blog posts have humour in them, running in Claude Code -> https://imgur.com/diKDZ8W
Reminder: YAML is a serialization format. IaC standardizing on it (HashiCorp being an outlier) was a mistake. It's a good compilation target, but please add a higher-level language for whatever you're doing.
Quick feedback: both the "learn more" link at the very top and the "Explore all examples" link lead to a 404.
Thanks, will fix that up shortly.
That's bad news for C++, Rust, and other slow compilers.
I've tried getting claude to set up testing frameworks, but what ends up happening is it either creates canned tests, or it forgets about tests, or it outright lies about making tests. It's definitely helpful, but feels very far from a robust thing to rely on. If you're reviewing everything the AI does then it will probably work though.
Something I find helps a lot is having a template for creating a project that includes at least one passing test. That way the agent can run the tests at the start using the correct test harness and then add new tests as it goes along.
I use cookiecutter for this, here's my latest Python library template: https://github.com/simonw/python-lib
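The "at least one passing test" can be almost nothing; its only job is to prove the harness works so the agent learns how to run it before writing real tests. Something like this placeholder sketch (not what the linked template actually contains):
```
# tests/test_smoke.py -- deliberately trivial placeholder test in a fresh project.
# If `pytest` passes on day zero, the agent knows the harness and can extend it.
def test_harness_runs():
    assert 1 + 1 == 2
```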
LLMs are very good at looking at a change set and finding untested paths. As a standard part of my workflow, I always pass the LLM's work through a "reviewer", which is a fresh LLM session with instructions to review the uncommitted changes. I include instructions for reviewing test coverage.
I've also found that LLMs typically just partially implement a given task/story/spec/whatever. The reviewer stage will also notice a mismatch between the spec and the implementation.
I have an orchestrator bounce the flow back and forth between developing and reviewing until the review comes back clean, and only then do I bother to review its work. It saves so much time and frustration.
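The loop itself is simple; here is a hand-wavy sketch, where `run_developer` and `run_reviewer` are placeholders standing in for whatever fresh agent sessions you spawn:
```
# Sketch of a develop/review orchestration loop with placeholder agent calls.
MAX_ROUNDS = 5

def run_developer(task):
    """Placeholder: start a fresh coding-agent session on the task."""
    pass

def run_reviewer(task):
    """Placeholder: fresh reviewer session; return findings, or '' if clean."""
    return ""

def orchestrate(task):
    for round_no in range(1, MAX_ROUNDS + 1):
        run_developer(task)
        findings = run_reviewer(task)
        if not findings:
            print(f"clean review after {round_no} round(s); ready for human review")
            return True
        task = task + "\n\nAddress these review findings:\n" + findings
    print("review never came back clean; escalate to a human")
    return False
```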
What tooling are you using for the orchestration?
Claude Code hooks is a great way to integrate these things
Better question is which tools at what level
You've done some great articles on this topic, and my experience aligns with your view completely.
Gemini and Claude do that already, IIUC - self-debugging iterations.
Not so sure about formal verification though. IME with Rust, LLM agents tend to struggle with semi-complex ownership or trait issues and will typically reach for unnecessary/dangerous escape hatches ("unsafe impl Send for ..." instead of using the correct locks, for example) fairly quickly. Or just conclude the task is impossible.
> automatic code formatters
I haven't tried this because I assumed it would destroy agent productivity and massively increase the number of tokens needed, because you're changing the file out from under the LLM and it ends up constantly re-reading the changed bits to generate the correct str_replace JSON. Or are they smart enough that this quickly trains them to generate code with zero diff under autoformatting?
But in general of course anything that's helpful for human developers to be more productive will also help LLMs be more productive. For largely identical reasons.
I've directly faced this problem with automatic code formatters, but it was back around Claude 3.5 and 3.7. It would consistently write nonconforming code, regardless of having context demanding proper formatting. This caused extra turns/invocations with the LLM and also caused context issues - both filling the context with multiple variants of the file and having a confounding/polluting/poisoning effect due to those multiple variations.
I haven't had this problem in a while, but I expect current LLMs would probably handle those formatting instructions more closely than the 3.5 era.
I'm finding my agents generate code that conforms to Black quite effectively, I think it's probably because I usually start them in existing projects that were already formatted using Black so they pick up those patterns.
I still quite often see even Opus 4.5 generate empty indented lines, for example, despite explicit instructions in AGENTS.md not to (and an explicit reference to the style guide), the code not containing any beforehand, and the auto-formatter removing them. Trailing whitespace is much rarer but happens as well. Personally I don't care too much, since I've found LLMs to be most efficient when performing roughly the work of a handful of commits at most in one thread, so I let the pre-commit hook sort it out after being done with a thread.