
Go 1.26 includes a new implementation of go fix that can help you use more modern features of Go.
The 1.26 release of Go this month includes a completely rewritten go fix subcommand. Go fix uses a suite of algorithms to identify opportunities to improve your code, often by taking advantage of more modern features of the language and library. In this post, we’ll first show you how to use go fix to modernize your Go codebase. Then in the second section we’ll dive into the infrastructure behind it and how it is evolving. Finally, we’ll present the theme of “self-service” analysis tools to help module maintainers and organizations encode their own guidelines and best practices.
The go fix command, like go build and go vet, accepts a set of patterns that denote packages. This command fixes all packages beneath the current directory:
$ go fix ./...
On success, it silently updates your source files. It discards any fix that touches generated files since the appropriate fix in that case is to the logic of the generator itself. We recommend running go fix over your project each time you update your build to a newer Go toolchain release. Since the command may fix hundreds of files, start from a clean git state so that the change consists only of edits from go fix; your code reviewers will thank you.
To preview the changes the above command would have made, use the -diff flag:
$ go fix -diff ./...
--- dir/file.go (old)
+++ dir/file.go (new)
- eq := strings.IndexByte(pair, '=')
- result[pair[:eq]] = pair[1+eq:]
+ before, after, _ := strings.Cut(pair, "=")
+ result[before] = after
…
You can list the available fixers by running this command:
$ go tool fix help
…
Registered analyzers:
any replace interface{} with any
buildtag check //go:build and // +build directives
fmtappendf replace []byte(fmt.Sprintf) with fmt.Appendf
forvar remove redundant re-declaration of loop variables
hostport check format of addresses passed to net.Dial
inline apply fixes based on 'go:fix inline' comment directives
mapsloop replace explicit loops over maps with calls to maps package
minmax replace if/else statements with calls to min or max
…
Adding the name of a particular analyzer shows its complete documentation:
$ go tool fix help forvar
forvar: remove redundant re-declaration of loop variables
The forvar analyzer removes unnecessary shadowing of loop variables.
Before Go 1.22, it was common to write `for _, x := range s { x := x ... }`
to create a fresh variable for each iteration. Go 1.22 changed the semantics
of `for` loops, making this pattern redundant. This analyzer removes the
unnecessary `x := x` statement.
This fix only applies to `range` loops.
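For instance, in this invented snippet the inner x := x is exactly what forvar deletes; on Go 1.22 and later the behavior is identical with or without it:

```go
package main

import "fmt"

// capture returns one closure per element; each closure captures the
// loop variable x. Before Go 1.22 the inner `x := x` was required so
// that each closure saw its own copy; since Go 1.22 every iteration
// gets a fresh x, so forvar deletes the redundant line.
func capture(s []int) []func() int {
	var fns []func() int
	for _, x := range s {
		x := x // redundant since Go 1.22; the forvar fixer removes this
		fns = append(fns, func() int { return x })
	}
	return fns
}

func main() {
	for _, f := range capture([]int{1, 2, 3}) {
		fmt.Println(f()) // 1, then 2, then 3
	}
}
```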
By default, the go fix command runs all analyzers. When fixing a large project it may reduce the burden of code review if you apply fixes from the most prolific analyzers as separate code changes. To enable only specific analyzers, use the flags matching their names. For example, to run just the any fixer, specify the -any flag. Conversely, to run all the analyzers except selected ones, negate the flags, for instance -any=false.
As with go build and go vet, each run of the go fix command analyzes only a specific build configuration. If your project makes heavy use of files tagged for different CPUs or platforms, you may wish to run the command more than once with different values of GOARCH and GOOS for better coverage:
$ GOOS=linux GOARCH=amd64 go fix ./...
$ GOOS=darwin GOARCH=arm64 go fix ./...
$ GOOS=windows GOARCH=amd64 go fix ./...
Running the command more than once also provides opportunities for synergistic fixes, as we’ll see below.
The introduction of generics in Go 1.18 marked the end of an era of very few changes to the language spec and the start of a period of more rapid—though still careful—change, especially in the libraries. Many of the trivial loops that Go programmers routinely write, such as to gather the keys of a map into a slice, can now be conveniently expressed as a call to a generic function such as maps.Keys. Consequently these new features create many opportunities to simplify existing code.
In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. (See my 2025 GopherCon talk for more exasperating details.) To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.
Over the past year, we have built dozens of analyzers to identify opportunities for modernization. Here are three examples of the fixes they suggest:
minmax replaces an if statement by a use of Go 1.21’s min or max functions:
x := f()
if x < 0 {
	x = 0
}
if x > 100 {
	x = 100
}

x := min(max(f(), 0), 100)
rangeint replaces a 3-clause for loop by a Go 1.22 range-over-int loop:
for i := 0; i < n; i++ {
	f()
}

for range n {
	f()
}
stringscut (whose -diff output we saw earlier) replaces uses of strings.Index and slicing by Go 1.18’s strings.Cut:
i := strings.Index(s, ":")
if i >= 0 {
	return s[:i]
}

before, _, ok := strings.Cut(s, ":")
if ok {
	return before
}
These modernizers are included in gopls, to provide instant feedback as you type, and in go fix, so that you can modernize several entire packages at once in a single command. In addition to making code clearer, modernizers may help Go programmers learn about newer features. As part of the process of approving each new change to the language and standard library, the proposal review group now considers whether it should be accompanied by a modernizer. We expect to add more modernizers with each release.
Go 1.26 includes a small but widely useful change to the language specification. The built-in new function creates a new variable and returns its address. Historically, its sole argument was required to be a type, such as new(string), and the new variable was initialized to its “zero” value, such as "". In Go 1.26, the new function may be called with any value, causing it to create a variable initialized to that value, avoiding the need for an additional statement. For example:
ptr := new(string)
*ptr = "go1.26"

ptr := new("go1.26")
This feature filled a gap that had been discussed for over a decade and resolved one of the most popular proposals for a change to the language. It is especially convenient in code that uses a pointer type *T to indicate an optional value of type T, as is common when working with serialization packages such as json.Marshal or protocol buffers. This is such a common pattern that people often capture it in a helper, such as the newInt function below, saving the caller from the need to break out of an expression context to introduce additional statements:
type RequestJSON struct {
	URL      string
	Attempts *int // (optional)
}

data, err := json.Marshal(&RequestJSON{
	URL:      url,
	Attempts: newInt(10),
})

func newInt(x int) *int { return &x }
Helpers such as newInt are so frequently needed with protocol buffers that the proto API itself provides them as proto.Int64, proto.String, and so on. But Go 1.26 makes all these helpers unnecessary:
data, err := json.Marshal(&RequestJSON{
	URL:      url,
	Attempts: new(10),
})
To help you take advantage of this feature, the go fix command now includes a fixer, newexpr, that recognizes “new-like” functions such as newInt and suggests fixes to replace the function body with return new(x) and to replace every call, whether in the same package or an importing package, with a direct use of new(expr).
To avoid introducing premature uses of new features, modernizers offer fixes only in files that require at least the minimum appropriate version of Go (1.26 in this instance), either through a go 1.26 directive in the enclosing go.mod file or a //go:build go1.26 build constraint in the file itself.
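Concretely, either form of gating looks like this (example.com/m is a hypothetical module path):

```
In go.mod (applies to the whole module):

    module example.com/m

    go 1.26

Or as a build constraint at the top of an individual .go file:

    //go:build go1.26
```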
Run this command to update all calls of this form in your source tree:
$ go fix -newexpr ./...
At this point, with luck, all of your newInt-like helper functions will have become unused and may be safely deleted (assuming they aren’t part of a stable published API). A few calls may remain where it would be unsafe to suggest a fix, such as when the name new is locally shadowed by another declaration. You can also use the deadcode command to help identify unused functions.
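For instance (an invented snippet), the fixer must skip the call inside f below, where a local declaration shadows the built-in:

```go
package main

import "fmt"

func newInt(x int) *int { return &x }

func f() *int {
	// This local declaration shadows the built-in new, so inside f
	// the fixer cannot safely rewrite newInt(7) to new(7).
	new := func(s string) string { return s }
	_ = new("shadowed")
	return newInt(7)
}

func main() {
	fmt.Println(*f()) // 7
}
```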
Applying one modernization may create opportunities to apply another. For example, this snippet of code, which clamps x to the range 0–100, causes the minmax modernizer to suggest a fix to use max. Once that fix is applied it suggests a second fix, this time to use min.
x := f()
if x < 0 {
	x = 0
}
if x > 100 {
	x = 100
}

x := min(max(f(), 0), 100)
Synergies may also occur between different analyzers. For example, a common mistake is to repeatedly concatenate strings within a loop, resulting in quadratic time complexity—a bug and a potential vector for a denial-of-service attack. The stringsbuilder modernizer recognizes the problem and suggests using Go 1.10’s strings.Builder:
s := ""
for _, b := range bytes {
	s += fmt.Sprintf("%02x", b)
}
use(s)

var s strings.Builder
for _, b := range bytes {
	s.WriteString(fmt.Sprintf("%02x", b))
}
use(s.String())
Once this fix is applied, a second analyzer may recognize that the WriteString and Sprintf operations can be combined as fmt.Fprintf(&s, "%02x", b), which is both cleaner and more efficient, and offer a second fix. (This second analyzer is QF1012 from Dominik Honnef’s staticcheck, which is already enabled in gopls but not yet in go fix, though we plan to add staticcheck analyzers to the go command starting in Go 1.27.)
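Under those two fixes, the hex-encoding loop above would end up looking something like this sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// hex encodes bytes as lowercase hex, writing each byte directly into
// the builder with Fprintf rather than building an intermediate string
// with Sprintf and then copying it with WriteString.
func hex(bytes []byte) string {
	var s strings.Builder
	for _, b := range bytes {
		fmt.Fprintf(&s, "%02x", b)
	}
	return s.String()
}

func main() {
	fmt.Println(hex([]byte{0x0a, 0xff})) // 0aff
}
```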
Consequently, it may be worth running go fix more than once until it reaches a fixed point; twice is usually enough.
A single run of go fix may apply dozens of fixes within the same source file. All fixes are conceptually independent, analogous to a set of git commits with the same parent. The go fix command uses a simple three-way merge algorithm to reconcile the fixes in sequence, analogous to the task of merging a set of git commits that edit the same file. If a fix conflicts with the list of edits accumulated so far, it is discarded, and the tool issues a warning that some fixes were skipped and that the tool should be run again.
This reliably detects syntactic conflicts arising from overlapping edits, but another class of conflict is possible: a semantic conflict occurs when two changes are textually independent but their meanings are incompatible. As an example consider two fixes that each remove the second-to-last use of a local variable: each fix is fine by itself, but when both are applied together the local variable becomes unused, and in Go that’s a compilation error. Neither fix is responsible for removing the variable declaration, but someone has to do it, and that someone is the user of go fix.
A similar semantic conflict arises when a set of fixes causes an import to become unused. Because this case is so common, the go fix command applies a final pass to detect unused imports and remove them automatically.
Semantic conflicts are relatively rare. Fortunately they usually reveal themselves as compilation errors, making them impossible to overlook. Unfortunately, when they happen, they do demand some manual work after running go fix.
Let’s now delve into the infrastructure beneath these tools.
Since the earliest days of Go, the go command has had two subcommands for static analysis, go vet and go fix, each with its own suite of algorithms: “checkers” and “fixers”. A checker reports likely mistakes in your code, such as passing a string instead of an integer as the operand of a fmt.Printf("%d") conversion. A fixer safely edits your code to fix a bug or to express the same thing in a better way, perhaps more clearly, concisely, or efficiently. Sometimes the same algorithm appears in both suites when it can both report a mistake and safely fix it.
In 2017 we redesigned the then-monolithic go vet program to separate the checker algorithms (now called “analyzers”) from the “driver”, the program that runs them; the result was the Go analysis framework. This separation enables an analyzer to be written once and then run by a diverse range of drivers for different environments, such as go vet, go fix, and gopls.

One benefit of the framework is its ability to express helper analyzers that don’t report diagnostics or suggest fixes of their own but instead compute some intermediate data structure that may be useful to many other analyzers, amortizing the costs of its construction. Examples include control-flow graphs, the SSA representation of function bodies, and data structures for optimized AST navigation.
Another benefit of the framework is its support for making deductions across packages. An analyzer can attach a “fact” to a function or other symbol so that information learned while analyzing the function’s body can be used when later analyzing a call to the function, even if the call appears in another package or the later analysis occurs in a different process. This makes it easy to define scalable interprocedural analyses. For example, the printf checker can tell when a function such as log.Printf is really just a wrapper around fmt.Printf, so it knows that calls to log.Printf should be checked in a similar manner. This process works by induction, so the tool will also check calls to further wrappers around log.Printf, and so on. An example of an analyzer that makes heavy use of facts is Uber’s nilaway, which reports potential mistakes resulting in nil pointer dereferences.
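A sketch of the kind of wrapper the printf checker recognizes (logf is an invented name; the real analyzer tracks fmt.Printf-style wrappers like this through its fact mechanism):

```go
package main

import "fmt"

// logf is just a thin wrapper around fmt.Sprintf. The printf analyzer
// records a "fact" that logf is printf-like, so a mistaken call such
// as logf("%d", "oops"), here or in an importing package, is flagged
// just like a bad fmt.Printf call would be.
func logf(format string, args ...any) string {
	return fmt.Sprintf("[log] "+format, args...)
}

func main() {
	fmt.Println(logf("%d items", 3)) // [log] 3 items
}
```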
The process of “separate analysis” in go fix is analogous to the process of separate compilation in go build. Just as the compiler builds packages starting from the bottom of the dependency graph and passing type information up to importing packages, the analysis framework works from the bottom of the dependency graph up, passing facts (and types) up to importing packages.
In 2019, as we started developing gopls, the language server for Go, we added the ability for an analyzer to suggest a fix when reporting a diagnostic. The printf analyzer, for example, offers to replace fmt.Printf(msg) with fmt.Printf("%s", msg) to avoid misformatting should the dynamic msg value contain a % symbol. This mechanism has become the basis for many of the quick fixes and refactoring features of gopls.
While all these developments were happening to go vet, go fix remained stuck as it was back before the Go compatibility promise, when early adopters of Go used it to maintain their code during the rapid and sometimes incompatible evolution of the language and libraries.
The Go 1.26 release brings the Go analysis framework to go fix. The go vet and go fix commands have converged and are now almost identical in implementation. The only differences between them are the criteria for the suites of algorithms they use, and what they do with computed diagnostics. Go vet analyzers must detect likely mistakes with low false positives; their diagnostics are reported to the user. Go fix analyzers must generate fixes that are safe to apply without regression in correctness, performance, or style; their diagnostics are not reported, but the fixes are applied directly. Aside from this difference of emphasis, the task of developing a fixer is no different from that of developing a checker.
As the number of analyzers in go vet and go fix continues to grow, we have been investing in infrastructure both to improve the performance of each analyzer and to make it easier to write each new analyzer.
For example, most analyzers start by traversing the syntax trees of each file in the package looking for a particular kind of node such as a range statement or function literal. The existing inspector package makes this scan efficient by pre-computing a compact index of a complete traversal so that later traversals can quickly skip subtrees that don’t contain any nodes of interest. Recently we extended it with the Cursor datatype to allow flexible and efficient navigation between nodes in all four cardinal directions—up, down, left, and right, similar to navigating the elements of an HTML DOM—making it easy and efficient to express a query such as “find each go statement that is the first statement of a loop body”:
var curFile inspector.Cursor = ...

// Find each go statement that is the first statement of a loop body.
for curGo := range curFile.Preorder((*ast.GoStmt)(nil)) {
	kind, index := curGo.ParentEdge()
	if kind == edge.BlockStmt_List && index == 0 {
		switch curGo.Parent().ParentEdgeKind() {
		case edge.ForStmt_Body, edge.RangeStmt_Body:
			...
		}
	}
}
Many analyzers start by searching for calls to a specific function, such as fmt.Printf. Function calls are among the most numerous expressions in Go code, so rather than search every call expression and test whether it is a call to fmt.Printf, it is much more efficient to pre-compute an index of symbol references, which is done by typeindex and its helper analyzer. Then the calls to fmt.Printf can be enumerated directly, making the cost proportional to the number of calls instead of to the size of the package. For an analyzer such as hostport that seeks an infrequently used symbol (net.Dial), this can easily make it 1,000× faster.
Some other infrastructural improvements over the past year include suppressing fixes that would create an import cycle, such as a fix that would introduce a call to strings.Cut in a package that is itself imported by strings.

We have come a long way, but there remains much to do. Fixer logic can be tricky to get right. Since we expect users to apply hundreds of suggested fixes with only cursory review, it’s critical that fixers are correct even in obscure edge cases. As just one example (see my GopherCon talk for several more), we built a modernizer that replaces calls such as append([]string{}, slice...) by the clearer slices.Clone(slice), only to discover that, when slice is empty, the result of Clone is nil, a subtle behavior change that in rare cases can cause bugs; so we had to exclude that modernizer from the go fix suite.
Some of these difficulties for authors of analyzers can be ameliorated with better documentation (both for humans and LLMs), particularly checklists of surprising edge cases to consider and test. A pattern-matching engine for syntax trees, similar to those in staticcheck and Tree Sitter, could simplify the fiddly task of efficiently identifying the locations that need fixing. A richer library of operators for computing accurate fixes would help avoid common mistakes. A better test harness would let us check that fixes don’t break the build, and preserve dynamic properties of the target code. These are all on our roadmap.
More fundamentally, we are turning our attention in 2026 to a “self-service” paradigm.
The newexpr analyzer we saw earlier is a typical modernizer: a bespoke algorithm tailored to a particular feature. The bespoke model works well for features of the language and standard library, but it doesn’t really help update uses of third-party packages. Although there’s nothing to stop you from writing a modernizer for your own public APIs and running it on your own project, there’s no automatic way to get users of your API to run it too. Your modernizer probably wouldn’t belong in gopls or the go vet suite unless your API is particularly widely used across the Go ecosystem. Even in that case you would have to obtain code reviews and approvals and then wait for the next release.
Under the self-service paradigm, Go programmers would be able to define modernizations for their own APIs that their users can apply without all the bottlenecks of the current centralized paradigm. This is especially important as the Go community and global Go corpus are growing much faster than the ability of our team to review analyzer contributions.
The go fix command in Go 1.26 includes a preview of the first fruits of this new paradigm: the annotation-driven source-level inliner, which we’ll describe in a companion blog post next week. In the coming year, we plan to investigate two more approaches within this paradigm.
First, we will be exploring the possibility of dynamically loading modernizers from the source tree and securely executing them, either in gopls or go fix. In this approach a package that provides an API for, say, a SQL database could additionally provide a checker for misuses of the API, such as SQL injection vulnerabilities or failure to handle critical errors. The same mechanism could be used by project maintainers to encode internal housekeeping rules, such as avoiding calls to certain problematic functions or enforcing stronger coding disciplines in critical parts of the code.
Second, many existing checkers can be informally described as “don’t forget to X after you Y!”, such as “close the file after you open it”, “cancel the context after you create it”, “unlock the mutex after you lock it”, “break out of the iterator loop after yield returns false”, and so on. What such checkers have in common is that they enforce certain invariants on all execution paths. We plan to explore generalizations and unifications of these control-flow checkers so that Go programmers can easily apply them to new domains, without complex analytical logic, simply by annotating their own code.
We hope that these new tools will save you effort during maintenance of your Go projects and help you learn about and benefit from newer features sooner. Please try out go fix on your projects and report any problems you find, and do share any ideas you have for new modernizers, fixers, checkers, or self-service approaches to static analysis.
I really liked this part:
> In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. [...] To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.
PHP went through a similar effort a while back to clear places like Stack Overflow of terrible out-of-date advice (e.g. posts advocating magic_quotes). LLMs make this a slightly different problem because, for the most part, once the bad advice is in the model it's never going away. In theory there's an easier-to-test surface around how good the advice is, but trying to figure out how the model got to that conclusion and correcting it for any future models is arcane. It's unlikely that model trainers will submit their RC models to various communities to make sure they aren't lying about those specific topics, so everything needs to happen in preparation for the next generation, relying on the hope that you've identified the bad source it originally trained on and that the model will actually prioritize training on that same, now corrected, source.
This is one area where reinforcement learning can help.
The way you should think of RL (both RLVR and RLHF) is the "elicitation hypothesis"[1]. In pretraining, models learn their capabilities by consuming large amounts of web text. Those capabilities include producing both low- and high-quality outputs (as both are present in their pretraining corpora). In post-training, RL doesn't teach them new skills (see e.g. the "Limits of RLVR"[2] paper). Instead, it "teaches" the models to produce the more desirable, higher-quality outputs while suppressing the undesirable, low-quality ones.
I'm pretty sure you could design an RL task that specifically teaches models to use modern idioms, either as an explicit dataset of chosen/rejected completions (where the chosen is the new way and the rejected is the old), or as a verifiable task where the reward goes down as the number of linter errors goes up.
I wouldn't be surprised if frontier labs have datasets for this for some of the major languages and packages.
[1] https://www.interconnects.ai/p/elicitation-theory-of-post-tr...
I believe you absolutely could... as the model owner. The question is whether Go project owners can convince all the model trainers to invest in RL to fix their models, and the follow-up question is whether the single maintainer of some critical but obscure open-source project could also convince the model trainers to commit to RL when they realize the model is horribly mistrained.
On Stack Overflow, data is trivial to edit, and the org (previously, at least) was open to requests from maintainers to update accepted answers to provide more correct information. Editing is trivial and cheap to carry out for a database; for a model, editing is possible (less easy but doable), expensive, and a potential risk to the model owner.
I know Claude will read through code from Go libraries it has imported to ensure it is doing things correctly, but I do have to wonder, for other languages and small libraries, if we'll start seeing things like an AGENT_README.md: a file that describes the project, then describes what functionality is where in the code, and if necessary drills down on a source-file-by-source-file basis (unless it's too massive; context limits are still limits). In any regard, I could see that becoming more common, especially if you link to said file from the README.md for the model to follow. ;)
I think this can be fixed more generally by biasing towards newer data in model outputs and putting more weight on authoritative sources rather than treating all data the same. So no one needs to go in and specifically single out Go code but will instead look at new examples which use features like generics from sources like Google who would follow best/better practices than the rest of the codebase.
They're particularly bad about concurrent go code, in my experience - it's almost always tutorial-like stuff, over-simplified and missing error and edge case handling to the point that it's downright dangerous to use... but it routinely slips past review because it seems simple and simple is correct, right? Go concurrency is so easy!
And then you point out issues in a review, so the author feeds it back into an LLM, and code that looks like it handles that case gets added... while also introducing a subtle data race and a rare deadlock.
Very nearly every single time. On all models.
> a subtle data race and a rare deadlock
That's a language problem that humans face as well, which golang could stop having (see C++'s Thread Safety annotations).
For Go, there is https://pkg.go.dev/gvisor.dev/gvisor/tools/checklocks. There are some missing things from C++ Thread Safety annotations, but those could be added.
Go has a pretty good race detector already, and all it (usually) takes to enable it is passing the -race flag to go build/test/run/etc.
Not true! There are a fair number of them, and they're even reasonably general-purpose, e.g. https://www.ponylang.io/
Most that I can recall achieve this by simply not having any locks at all. That's feasible with some careful design.
Outside proof-oriented languages though, I'm not aware of any that prevent livelocks, much less both. When excluding stuff that's single threaded but might otherwise qualify, e.g. Elm. "Lack of progress" is what most care about though, and yeah that realm is much more "you give up too much to get that guarantee" in nearly all cases (e.g. no turing completeness).
I probably agree that they don't protect you from all deadlocks, but some languages protect you from some deadlocks.
You should be using rust... mm kay :\
Doing concurrency in Rust was more complex (though not overly so) than doing it in Golang was, but the fact that the compiler will outright not let me pass mutable refs to each thread does make me feel more comfortable about doing so at all.
Meanwhile I copy-pasted a Python async TaskGroup example from the docs and still found that, despite using a TaskGroup, which is specifically designed to await every task and only return once all are done, it returned the instant the loop completed and the tasks were created, and then the program exited without having done any of the work.
Concurrency woo~
Yeah, I'd really have liked to see something like [Trio](https://trio.readthedocs.io/en/stable/) gain more attention in Python. Structured approaches can prevent a huge number of those issues, and Python code is absolutely riddled with concurrency problems in my experience. Much more so in practice than other languages, except maybe JavaScript (when including async ordering mistakes). It makes it a real nightmare to try to build anything actually reliable.
The person I was replying to sounded exactly like the Rust zealots roving the internet trying to convince people to change.
So you are trying to explain concurrency to the folks who implemented CSP in both Plan9 and Go. Interesting. I should return "cspbook.pdf" back.
One day, maybe today, you will learn to read
(eval) rather than (read), then.
On concurrency, Go has the bolts screwed in; it basically was "let's reuse everything we can from Plan9 in a multiplatform language."
Good use case for Elixir. Apparently it performs best across all programming languages with LLM completions and its concurrency model is ideal too.
This is the exact opposite of my experience.
Claude 4.6 has been excellent with Go, and truly incompetent with Elixir, to the point where I would have serious concerns about choosing Elixir for a new project.
Shouldn't you have concerns picking Claude 4.6 for your next project if it produces subpar Elixir code? Cheap shot perhaps, but I have a feeling exotic languages will remain more exotic longer now that LLM-aided development is becoming the norm.
We've finally figured out how to spread ossification from network protocols to programming languages! \o/
The specific agent is irrelevant. This is related to a broader personal opinion regarding LLMs and language choice.
Before we continue, the following opinion comes with several important caveats:
1. It only applies to paid professional work. If it's a hobby project, choose whatever makes you happy.
2. It ignores the strengths and weaknesses of different languages. These may outweigh any LLM-related concerns.
3. This is my opinion today. I _think_ it will survive longer than the next LLM cycle, but who knows these days.
4. May contain nuts.
Okay, that's the ass-covering dispensed with, on to the opinion:
If the choice is between a language which is "LLM friendly" (for want of a better phrase) and one which is not, it is irresponsible to choose the latter.
We live in different realities.
Opus and Sonnet practically write the same idiomatic Elixir (Phoenix, mind you) code that I would have written myself, with few edits.
It's scary good.
I envy your reality.
I have run into that a lot, which is annoying. Even though all the code compiles, because Go is backwards compatible, it all looks so much different. Same issue for Python, but in that case the API changes lead to actual breakage. For this reason I find Go to be fairly great for codegen, as the stability of the language is hard to compete with and the standard library is a powerful enough tool to support many, many use cases.
The use of LLMs will lead to homogeneous, middling code.
Middling code should not exist. Boilerplate code should not exist. For some reason we're suddenly accepting code-gen as SOP instead of building a layer of abstraction on top of the too-onerous layer we're currently building at. Prior generations of software development would see a too-onerous layer and build tools to abstract to a higher level, this generation seems stuck in an idea that we just need tooling to generate all that junk but can continue to work at this level.
But Go culture promulgates this practice of repeating boilerplate. In fact this is one of the biggest confusion points of new gophers. "I want to do a thing that seems common enough, what library are you all using to do X?". Everyone scoffs, pushes up their glasses and says, "well actually, you should just use the standard library, it's always worked just fine for me". And the new gopher is confused because they really believe that reinventing the wheel is an acceptable practice. This is what leads to using LLMs to write all that code (admittedly, it's a fine use of an LLM).
LLMs have always been great at generating code that doesn't really mean anything - no architectural decisions, the same for "any" program. But only rarely does one see questions about why we need to generate "meaningless" code in the first place.
This gets to one of my core fears around the last few years of software development. A lot of companies right now are saddling their codebases with pages and pages of code that does what they need it to do but of which they have no comprehension.
For a long time my motto around software development has been "optimize for maintainability" and I'm quite concerned that in a few years this habit is going to hit us like a truck in the same way the off-shoring craze did - a bunch of companies will start slowly dying off as their feature velocity slows to a crawl and a lot of products that were useful will be lost. It's not my problem, I know, but it's quite concerning.
The "LLMs shouldn't be writing code" take is starting to feel like the new "we should all just use No-Code."
We’ve been trying to "build a better layer" for thirty years. From Dreamweaver to Scratch to Bubble, the goal was always the same: hide the syntax so the "logic" can shine. But it turns out, the syntax wasn't the enemy—the abstraction ceiling was.
Where are the amazing no-hassle, no-boilerplate tools from last generation? Or the generation before that? Give me a break: it's easy to post this but it's proven very hard to simply "pick the right abstraction for everyone".
It does. I’ve been writing Go for long enough, and the code that LLMs output is pretty average. It’s what I would expect a mid level engineer to produce. I still write code manually for stuff I care about or where code structure matters.
Maybe the best way is to do the scaffolding yourself and use LLMs to fill the blanks. That may lead to better structured code, but it doesn’t resolve the problem described above where it generates suboptimal or outdated code. Code is a form of communication and I think good code requires an understanding of how to communicate ideas clearly. LLMs have no concept of that, it’s just gluing tokens together. They litter code with useless comments while leaving the parts that need them most without.
I am also of the opinion that LLMs are still pretty bad at what's called "low-level design" - that is, structuring functions and classes in a project. I wonder if a rule like Torvalds' "no more than 4 levels of indentation" might make them work better.
Do LLMs generate code similar to middling code of a given domain? Why not generate in a perfect language used only by cool and very handsome people, like Fortran, and then translate it once the important stuff is done?
This might work if Fortran were portable, or if only one compiler were targeted.
Middling code, delivered within a tolerable time frame and budget, without taking excessive risk, is good enough for many real-world commercial software projects. Homogeneous middling code, written by humans or extruded by machines, is arguably even a positive for the organisation: lots of organisations are more interested in the delivery of software projects being predictable, or in having a high bus factor due to the fungibility of the folks (or machines) building and maintaining the code, than in depending upon excellence.
You might even say that LLMs are not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
This is a Rob Pike reference, for the uninitiated: https://news.ycombinator.com/item?id=30688969
I'm not sure if that's a criticism or praise - I mean, most people strive for readable code.
LLM-generated code reminds me of Perl's "write-only" reputation.
Does it really? Because I see some quite fine code. The problem is assumptions, or missing side effects when the code is used, or getting stuck in a bad approach "loop" - but not code quality per se.
In all honesty I've only used LLMs in anger with Go, and come away (generally speaking) happy with what it produced.
For a few years, yeah. Eventually it will probably lead to the average quality of code being considerably higher than it was pre-LLMs.
I'd prefer we start nuking the idea of using LLMs to write code, not help it get better. Why don't you people listen to Rob Pike? This technology is not good for us. It's a stain on software and the world in general, but I get it, most of y'all yearn for slop. The masses yearn for slop.
I totally agree. I read threads like this and I just can’t believe people are wasting their time with LLM’s.
The masses yearn to not have to fiddle with bs for rent and food
I definitely see that with C++ code. Not so easy to "fix", though. Or so I think. But I still have hope, as more and more "modern" C++ code gets published.
battle of my life. several times i’ve had to update my agent instructions to prefer modern and usually better syntax to the old way of doing things. largely it’s worked well for me. i find that making the agents read release notes, and some official blog posts, helps them maintain healthy and reasonably up-to-date instructions on writing go.
I think tooling that can modify your source code to make it more modern is really cool stuff. OpenRewrite comes to mind for Java, but nothing comes to the top of my mind for other languages. And heck, I only recently learned about OpenRewrite, and I've been writing Java for a long time.
Even though I don't like Go, I acknowledge that tooling like this built right into the language is a huge deal for language popularity and maturity. Other languages just aren't this opinionated about build tools, testing frameworks, etc.
I suspect that as newer languages emerge over the years, they'll take notes from Go and how well it integrates stuff like this.
Coccinelle for C, used by Linux kernel devs for decades, here's an article from 2009:
https://lwn.net/Articles/315686
Also IDE tooling for C#, Java, and many other languages; JetBrains' IDEs can do massive refactorings and code fixes across millions of lines of code (I use them all the time), including automatically upgrading your code to new language features. The sibling comment is slightly "wrong" — they've been available for decades, not mere years.
Here's a random example:
https://www.jetbrains.com/help/rider/ConvertToPrimaryConstru...
These can be applied across the whole project with one command, rewriting however many problems there are.
Also JetBrains has "structural search and replace" which takes language syntax into account, it works on a higher level than just text like what you'd see in text editors and pseudo-IDEs (like vscode):
https://www.jetbrains.com/help/idea/structural-search-and-re...
https://www.jetbrains.com/help/idea/tutorial-work-with-struc...
For modern .NET you have Roslyn analyzers built in to the C# compiler which often have associated code fixes, but they can only be driven from the IDE AFAIK. Here's a tutorial on writing one:
https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/t...
Rust has clippy nagging you with a bunch of modernity fixes, and sometimes it can autofix them. I learned about a lot of small new features that make the code cleaner through clippy.
In PHP you can use Rector[1]
It's used a lot to migrate old codebases. The tool uses itself to downgrade[2] its own code so that it can run on older PHP versions, to help with upgrades.
Does anyone have experience transforming a typescript codebase this way? Typescript's LSP server is not powerful enough and doesn't support basic things like removing a positional argument from a function (and all call sites).
Would jscodeshift work for this? Maybe in conjunction with claude?
jscodeshift supports ts as a parser, so it should work.
If you want to also remove argument from call sites, you'll likely need to create your own tool that integrates TS Language Service data and jscodeshift.
LLMs definitely help with these codemods quite a bit -- you don't need to manually figure out the details in manipulating AST. But make sure to write tests -- a lot of them -- and come up with a way to quickly fix bugs, revert your change and then iterate. If you have set up the workflow, you may be able to just let LLM automate this for you in a loop until all issues are fixed.
ESLint (and typescript-eslint) has the concept of fixers, which updates the source code.
Try ast-grep
Python has a number of these via pyupgrade, which are also included in ruff: https://docs.astral.sh/ruff/rules/#pyupgrade-up
Haskell has had hlint for a very long time. Things like rewriting chained calls of `concat` and `map` into `concatMap`, or just rewriting your boolean expressions like `if a then b else False`.
> but nothing comes to the top of my mind for other languages
"cargo clippy --fix" for Rust, essentially integrated with its linter. It doesn't fix all lints, however.
Java and .NET IDEs have had these capabilities for years now; even when Eclipse was the most used one, there were the tips from Checkstyle and other similar plugins.
Yeah I've noticed the IDEs have this ability, but I think tooling outside of IDEs that can be applied in a repeatable way is much better than doing a bunch of mouse clicks in an IDE to change something.
I think the two things that make this a big deal are: callable from the command line (which means it can integrate with CI/CD or AI tools) and like I mentioned, the fact this is built into Go itself.
ESLint has had `--fix` for like 10 years, so this is not exactly new.
Lebab too: https://lebab.github.io/
I can’t find where in the article the author claims it is new (as in original).
In fact, the author shows that this is an evolution of go vet and others.
What’s new, however, is the framework that allows home-grown add-ons, which don’t have to do everything from scratch.
It's tooling like this that really makes golang an excellent language to work with. I had missed that rangeint addition to the language, but with go fix I'll just get that improvement for free!
Real kudos to the golang team.
There have been many situations where I'd rather use another language, but Go's tooling is so good that I still end up writing it in Go. So hard to beat the built-in testing, linting, and incredibly fast compilation.
Absolutely.
The Go team has built such trust with backwards compatibility that improvements like this are exciting, rather than anxiety-inducing.
Compare that with other ecosystems, where APIs are constantly shifting, and everything seems to be @Deprecated or @Experimental.
I just searched for `for` loops with `:=` inside and hand-fixed them. I found a few forms of for loop, and where the count was high, I used a regexp.
This tool is way cooler, post-redesign.