
Happy Birthday, Go!
This past Monday, November 10th, we celebrated the 16th anniversary of Go’s open source release!
We released Go 1.24 in February and Go 1.25 in August, following our now well-established and dependable release cadence. Continuing our mission to build the most productive language platform for building production systems, these releases included new APIs for building robust and reliable software, significant advances in Go’s track record for building secure software, and some serious under-the-hood improvements. Meanwhile, no one can ignore the seismic shifts in our industry brought by generative AI. The Go team is applying its thoughtful and uncompromising mindset to the problems and opportunities of this dynamic space, working to bring Go’s production-ready approach to building robust AI integrations, products, agents, and infrastructure.
First released in Go 1.24 as an experiment and then graduated in Go 1.25, the
new testing/synctest package
significantly simplifies writing tests for concurrent, asynchronous
code. Such code is particularly common in network services,
and is traditionally very hard to test well. The synctest package works by
virtualizing time itself. It takes tests that used to be slow, flaky, or both,
and makes them easy to rewrite into reliable and nearly instantaneous tests,
often with just a couple extra lines of code. It’s also a great example of Go’s
integrated approach to software development: behind an almost trivial API, the
synctest package hides a deep integration with the Go runtime and other parts
of the standard library.
This isn’t the only boost the testing package got over the past year. The new
testing.B.Loop API is both easier to use
than the original testing.B.N API and addresses many of the traditional—and
often invisible!—pitfalls of writing Go benchmarks. The
testing package also has new APIs that make it easy to clean up in tests
that use Context, and to write to the test’s log.
Go and containerization grew up together and work great with each other. Go 1.25 launched container-aware scheduling, making this pairing even stronger. Without developers having to lift a finger, this transparently adjusts the parallelism of Go workloads running in containers, preventing CPU throttling that can impact tail latency and improving Go’s out-of-the-box production-readiness.
Go 1.25’s new flight recorder builds on our already powerful execution tracer, enabling deep insights into the dynamic behavior of production systems. While the execution tracer generally collected too much information to be practical in long-running production services, the flight recorder is like a little time machine, allowing a service to snapshot recent events in great detail after something has gone wrong.
Go continues to strengthen its commitment to secure software development, making significant strides in its native cryptography packages and evolving its standard library for enhanced safety.
Go ships with a full suite of native cryptography packages in the standard library, which reached two major milestones over the past year. A security audit conducted by independent security firm Trail of Bits yielded excellent results, with only a single low-severity finding. Furthermore, through a collaborative effort between the Go Security Team and Geomys, these packages achieved CAVP certification, paving the way for full FIPS 140-3 certification. This is a vital development for Go users in certain regulated environments. FIPS 140 compliance, previously a source of friction due to the need for unsupported solutions, will now be seamlessly integrated, addressing concerns related to safety, developer experience, functionality, release velocity, and compliance.
The Go standard library has continued to evolve to be safe by default and
safe by design. For example, the os.Root
API—added in Go 1.24—enables traversal-resistant file system
access, effectively combating a class of vulnerabilities where an
attacker could manipulate programs into accessing files intended to be
inaccessible. Such vulnerabilities are notoriously challenging to address
without underlying platform and operating system support, and the new
os.Root API offers a straightforward,
consistent, and portable solution.
In addition to user-visible changes, Go has made significant improvements under the hood over the past year.
For Go 1.24, we completely redesigned the map
implementation, building on the latest and greatest ideas in
hash table design. This change is completely transparent, and brings significant
improvements to map performance, lower tail latency of map operations, and
in some cases even significant memory wins.
Go 1.25 includes an experimental and significant advancement in Go’s garbage collector called Green Tea. Green Tea reduces garbage collection overhead in many applications by at least 10% and sometimes as much as 40%. It uses a novel algorithm designed for the capabilities and constraints of today’s hardware and opens up a new design space that we’re eagerly exploring. For example, in the forthcoming Go 1.26 release, Green Tea will achieve an additional 10% reduction in garbage collector overhead on hardware that supports AVX-512 vector instructions—something that would have been nigh impossible to take advantage of in the old algorithm. Green Tea will be enabled by default in Go 1.26; users need only upgrade their Go version to benefit.
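In Go 1.25 the collector is opt-in via a GOEXPERIMENT flag (no flag should be needed once it becomes the default in Go 1.26):

```shell
# Go 1.25: opt in to the experimental Green Tea garbage collector
# at build time for a given binary.
GOEXPERIMENT=greenteagc go build ./...
```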
Go is about far more than the language and standard library. It’s a software development platform, and over the past year, we’ve also made four regular releases of the gopls language server, and have formed partnerships to support emerging new frameworks for agentic applications.
Gopls provides Go support to VS Code and other LSP-powered editors and IDEs. Every release sees a litany of features and improvements to the experience of reading and writing Go code (see the v0.17.0, v0.18.0, v0.19.0, and v0.20.0 release notes for full details, or our new gopls feature documentation!). Some highlights include many new and enhanced analyzers to help developers write more idiomatic and robust Go code; refactoring support for variable extraction, variable inlining, and JSON struct tags; and an experimental built-in server for the Model Context Protocol (MCP) that exposes a subset of gopls’ functionality to AI assistants in the form of MCP tools.
With gopls v0.18.0, we began exploring automatic code modernizers. As Go
evolves, every release brings new capabilities and new idioms; new and better
ways to do things that Go programmers have been finding other ways to do. Go
stands by its compatibility promise—the old way will continue
to work in perpetuity—but nevertheless this creates a bifurcation between old
idioms and new idioms. Modernizers are static analysis tools that recognize old
idioms and suggest faster, more readable, more secure, more modern
replacements, and do so with push-button reliability. What gofmt did for
stylistic consistency, we hope modernizers can do for idiomatic
consistency. We’ve integrated modernizers as IDE suggestions, where they can
help developers not only maintain more consistent coding standards, but where we
believe they will help developers discover new features and keep up with the
state of the art. We believe modernizers can also help AI coding assistants keep
up with the state of the art and combat their proclivity to reinforce outdated
knowledge of the Go language, APIs, and idioms. The upcoming Go 1.26 release
will include a total overhaul of the long-dormant go fix command to make it
apply the full suite of modernizers in bulk, a return to its pre-Go 1.0
roots.
At the end of September, in collaboration with Anthropic and the Go community, we released v1.0.0 of the official Go SDK for the Model Context Protocol (MCP). This SDK supports both MCP clients and MCP servers, and underpins the new MCP functionality in gopls. Contributing this work in open source helps empower other areas of the growing open source agentic ecosystem built around Go, such as the recently released Agent Development Kit (ADK) for Go from Google. ADK Go builds on the Go MCP SDK to provide an idiomatic framework for building modular multi-agent applications and systems. The Go MCP SDK and ADK Go demonstrate how Go’s unique strengths in concurrency, performance, and reliability differentiate Go for production AI development and we are expecting more AI workloads to be written in Go in the coming years.
Go has an exciting year ahead of it.
We’re working on advancing developer productivity through the brand new go fix
command, deeper support for AI coding assistants, and ongoing improvements to
gopls and VS Code Go. General availability of the Green Tea garbage collector,
native support for Single Instruction Multiple Data (SIMD) hardware features,
and runtime and standard library support for writing code that scales even
better to massive multicore hardware will continue to align Go with modern
hardware and improve production efficiency. We’re focusing on Go’s “production
stack” libraries and diagnostics, including a massive (and long in the making)
upgrade to encoding/json, driven by Joe Tsai and people across
the Go community; leaked goroutine
profiling, contributed by
Uber’s Programming Systems team; and many
other improvements to net/http, unicode, and other foundational packages.
We’re working to provide well-lit paths for building with Go and AI, evolving
the language platform with care for the evolving needs of today’s developers,
and building tools and capabilities that help both human developers and AI
assistants and systems alike.
On this 16th anniversary of Go’s open source release, we’re also looking to the future of the Go open source project itself. From its humble beginnings, Go has formed a thriving contributor community. To continue to best meet the needs of our ever-expanding user base, especially in a time of upheaval in the software industry, we’re working on ways to better scale Go’s development processes—without losing sight of Go’s fundamental principles—and more deeply involve our wonderful contributor community.
Go would not be where it is today without our incredible user and contributor communities. We wish you all the best in the coming year!
I know they say that your programming language isn't the bottleneck, but I remember sitting there being frustrated as a young dev that I couldn't parse faster in the languages I was using when I learned about Go.
It took a few more years before I actually got around to learning it and I have to say I've never picked up a language so quickly. (Which makes sense, it's got the smallest language spec of any of them)
I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
The nice thing about Go is that you can learn "all of it" in a reasonable amount of time: gotchas, concurrency stuff, everything. There is something very comforting about knowing the entire spec of a language.
I'm convinced no more than a handful of humans understand all of C# or C++, and inevitably you'll come across some obscure thing and have to context switch out of reading code to learn whatever the fuck a "partial method" or "generic delegate" means, and then keep reading that codebase if you still have momentum left.
> The nice thing about Go is that you can learn "all of it" in a reasonable amount of time
This always feels like one of those “taste” things that some programmers tend to like on a personal level but has almost no evidence that it leads to more real-world success vs any other language.
Like, people get real work done every day at scale with C# and C++. And Java, and Ruby, and Rust, and JavaScript. And every other language that programmers castigate as being huge and bloated.
I’m not saying it’s wrong to have a preference for smaller languages, I just haven’t seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or less bugs.
As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
Just an anecdote and not necessarily generalizable, but I can at least give one example:
I'm in academia doing ML research where, for all intents and purposes, we work exclusively in Python. We had a massive CSV dataset which required sorting, filtering, and other data transformations. Without getting into details, we had to rerun the entire process when new data came in roughly every week. Even using every trick to speed up the Python code, it took around 3 days.
I got so annoyed by it that I decided to rewrite it in a compiled language. It had been a few years since I'd written any C/C++ (and even then only for a single class in undergrad that I remember very little of), so I decided to give Go a try.
I was able to learn enough of the language and write up a simple program to do the data processing in less than a few hours, which reduced the time it took from 3+ days to less than 2 hours.
I unfortunately haven't had a chance or a need to write any more Go since then. I'm sure other compiled, GC languages (e.g., Nim) would've been just as productive or performant, but I know that C/C++ would've taken me much longer to figure out and would've been much harder to read/understand for the others that work with me who pretty much only know Python. I'm fairly certain that if any of them needed to add to the program, they'd be able to do so without wasting more than a day to do so.
Did you try scipy/numpy or any python library with a compiled implementation before picking up Go?
Of course, but the dataset was mostly strings that needed to be cross-referenced with GIS data. Tried every library under the sun. The greatest speed up I got was using polars to process the mostly-string CSVs, but didn't help much. With that said, I think polars was also just released when we were working with that dataset and I'm sure there's been a lot of performance improvements since then.
These only help if you can move the hot loop into some compiled code in those libraries. There's a lot of cases where this isn't possible and at that point there's just no way to make python fast (basically, as soon as you have a for loop in python that runs over every point in your dataset, you've lost).
> I’m not saying it’s wrong to have a preference for smaller languages, I just haven’t seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or less bugs.
I can imagine myself grappling with a language feature unobvious to me and eventually getting distracted. Sure, there are a lot of things unobvious to me, but Go is not one of them, and that simplicity has influenced its whole environment.
Or, when choosing the right language feature, I could end up with weighing up excessively many choices and still failing to get it right, from the language correctness perspective (to make code scalable, look nice, uniform, play well with other features, etc).
An example not related to Go: bash and rc [1]. Understanding 16 pages of Duff’s rc manual was enough for me to start writing scripts faster than I did in bash. It did push me to ease my concerns about program correctness, though, which I welcomed. The whole process became more enjoyable without bashisms getting in the way.
Maybe it’s hard to measure the exact benefit but it should exist.
I think Go is a great language when hiring. If you're hiring for C++, you'll be wary of someone who only knows JavaScript as they have a steep learning curve ahead. But learning Go is very quick when you already know another programming language.
I agree that empirical data in programming is difficult, but i’ve used many of those languages personally, so I can say for myself at least that I’m far more productive in Go than any of those other languages.
> As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
I think those problems are related. The more features you have, the more difficult it becomes to avoid strange, surprising interactions. It’s like a pharmacist working with a patient who is taking a whole cocktail of prescriptions; it becomes a combinatorial problem to avoid harmful reactions.
> Like, people get real work done every day at scale with C# and C++.
That would be me. I _like_ C#, but there are elements to that language that I _never_ work with on a daily basis, it's just way too large of a language.
Go is refreshing in its simplicity.
I've been writing go professionally for about ten years, and with go I regularly find myself saying "this is pretty boring", followed by "but that's a good thing" because I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble if I were to get run over by a bus or die of boredom.
In contrast writing C++ feels like solving an endless series of puzzles, and there is a constant temptation to do Something Really Clever.
> I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble
Alas there are plenty of people who do[0] - for some reason Go takes architecture astronaut brain and whacks it up to 11 and god help you if you have one or more of those on your team.
[0] flashbacks to the interface calling an interface calling an interface calling an interface I dealt with last year - NONE OF WHICH WERE NEEDED because it was a bloody hardcoded value in the end.
My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way. If you're using interfaces you're probably up to no good and writing Java-ish code in Go. (usually the right reason to use interfaces is exportability)
Yes, not even for testing. Use monkey-patching instead.
> My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way.
They do make some sense for swappable doodahs - like buffers / strings / filehandles you can write to - but those tend to be in the lower levels (libraries) rather than application code.
Go is okay. I don't hate it but I certainly don't love it.
The packaging story is better than c++ or python but that's not saying much, the way it handles private repos is a colossal pain, and the fact that originally you had to have everything under one particular blessed directory and modules were an afterthought sure speaks volumes about the critical thinking (or lack thereof) that went into the design.
Also I miss being able to use exceptions.
When Go was new, having better package management than Python and C++ was saying a lot. I’m sure Go wasn’t the first, but there weren’t many mainstream languages that didn’t make you learn some imperative DSL just to add dependencies.
Sure, but all those languages didn't have the psychotic design that mandated all your code lives under $GOPATH for the first several versions.
I'm not saying it's awful, it's just a pretty mid language, is all.
I picked up Go precisely in 2012 because $GOPATH (as bad as it was) was infinitely better than CMake, Gradle, Autotools, pip, etc. It was dead simple to do basic dependency management and get an executable binary out. In any other mainstream language on offer at the time, you had to learn an entire programming language just to script your meta build system before you could even begin writing code, and that build system programming language was often more complex than Go.
That was a Plan9ism, I think. Java had something like it with CLASSPATH too, didn't it?
I never understood the GOPATH freakout, coming from Python it seemed really natural- it's a mandatory virtualenv.
The fact that virtualenv exists at all should be viewed by the python community as a source of profound shame.
The idea that it's natural and accepted that we just have python v3.11, 3.12, 3.13 etc all coexisting, each with their own incompatible package ecosystems, and in use on an ad-hoc, per-directory basis just seems fundamentally insane to me.
The language has changed a lot since then. Give it a fresh look sometime.
It's still pretty mid and still missing basic things like sets.
But mid is not all that bad and Go has a compelling developer experience that's hard to beat. They just made some unfortunate choices at the beginning that will always hold it back.
The tradeoff with that language simplicity is that there's a whole lot of gotchas that come with Go. It makes things look simpler than they actually are.
> I'm convinced no more than a handful of humans understand all of C# or C++
How would the proportion of humans that understand all of Rust compare?
For Rust vs C++, I'd say it'll be much easier to have a complete understanding of Rust. C++ is an immensely complex language, with a lot of feature interactions.
C# is actually fairly complex. I'm not sure if it's quite at the same level as Rust, but I wouldn't say it's that far behind in difficulty for complete understanding.
Rust managed to learn a lot from C++ and other languages' mistakes.
So while it has quite a bit of essential complexity (inherent in the design space it operates: zero overhead low-level language with memory safety), I believe it fares overall better.
Like no matter the design, a language wouldn't need 10 different kinds of initializer syntaxes, yet C++ has at least that many.
I'm pretty convinced that nobody has a full picture of Rust in their head. There isn't even a spec to read.
There is, in fact, a spec to read[1], as of earlier this year.
[1] https://rustfoundation.org/media/ferrous-systems-donates-fer...
Rust is very advanced, with things like higher-ranked trait bounds (https://doc.rust-lang.org/nomicon/hrtb.html) and generic associated types (https://www.ncameron.org/rfcs/1598) that are difficult because they are essential complexity not accidental complexity.
For Rust I'd expect the implementation to be the real beast, versus the language itself. But not sure how it compares to C++ implementation complexity.
Rust isn’t that complicated if you have some background in non GC languages.
There's a different question too, that I think is more important (for any language): how much of the language do you need to know in order to use it effectively. As another poster mentioned, the issue with C++ might not be the breath of features, but rather how they interact in non-obvious ways.
This is also what I like about JS, except it's even easier than Go. Meanwhile Python has a surprising number of random features.
ECMAScript is an order of magnitude more complicated than Go by virtually every measure - length of language spec, ease of parsing, number of context-sensitive keywords and operators, etc.
Yeah I’m pretty sure people who say JS is easy don’t know about its Prototype based OOP
You don't have to know about it, but if you do, it's actually simpler than how other languages do OOP.
Not convinced. Especially with property flags.
strict mode makes it okay
Sorry, hard disagree. Try to understand what `this` means in JS in its entirety and you'll agree it's by no stretch of the imagination a simple language. It's more mind-bending than it looks, hence the need for _The Good Parts_.
I think JS is notoriously complicated: the phrase “the good parts” has broad recognition among programmers.
Just so we're on the same page, this is the current JS spec:
https://262.ecma-international.org/16.0/index.html
I don't agree. (And frankly don't like using JS without at least TypeScript.)
While I might not think that JS is a good language (for some definition of a good language), to me the provided spec does feel pretty small, considering that it's a language that has to be specified to the dot and that the spec contains the standard library as well.
It has some strange or weirdly specified features (ASI? HTML-like Comments?) and unusual features (prototype-based inheritance? a dynamically-bounded this?), but IMO it's a small language.
Shrugging it off as just being large because it contains the "standard library" ignores that many JS language features necessarily use native objects like symbols or promises, which can't be entirely implemented in just JavaScript alone, so they are intrinsic rather than being standard library components, akin to Go builtins rather than the standard library. In fact, in actual environments, the browser and/or Node.JS provide the actual standard library, including things like fetch, sockets, compression codecs, etc. Even ignoring almost all of those bits though, the spec is absolutely enormous, because JavaScript has:
- Regular expressions - not just in the "standard library" but in the syntax.
- An entire module system with granular imports and exports
- Three different ways to declare variables, two of which create temporal dead zones
- Classes with inheritance, including private properties
- Dynamic properties (getters and setters)
- Exception handling
- Two different types of closures/first class functions, with different binding rules
- Async/await
- Variable length "bigint" integers
- Template strings
- Tagged template literals
- Sparse arrays
- for in/for of/iterators
- for await/async iterators
- The with statement
- Runtime reflection
- Labeled statements
- A lot of operators, including bitwise operators and two sets of equality operators with different semantics
- Runtime code evaluation with eval/Function constructor
And honestly it's only scratching the surface, especially of modern ECMAScript.
A language spec is necessarily long. The JS language spec, though, is so catastrophically long that it is a bit hard to load on a low end machine or a mobile web browser. It's on another planet.
Yeah, a lot of the quirks come from it being small
The Javascript world hides its complexity outside the core language, though. JS itself isn't so weird (though as always see the "Wat?" video), but the incantations required to type and read the actual code are pretty wild.
By the time you understand all of typescript, your templating environment of choice, and especially the increasingly arcane build complexity of the npm world, you've put in hours comparable to what you'd have spent learning C# or Java for sure (probably more). Still easier than C++ or Rust though.
…do you know you can just write JavaScript and run it in the browser? You don’t need TypeScript, NPM or build tools.
You do if you want more than one file, or if you want to use features that a user’s target browser may not support.
nodejs and npm are easy for beginners, especially compared to the Python packaging situation
I learned Go this year, and this assertion just... isn't true? There are a bunch of subtleties and footguns, especially with concurrency.
C++ is a basket case, it's not really a fair comparison.
As they (I) say, writing a concurrent Go program is easy, writing a correct one is a different story :)
I’ve been using Python since 2008, and I don’t feel like I understand very much of it at all, but after just a couple of years of using Go in a hobby capacity I felt I knew it very well.
Well that's good, since Go was specifically designed for juniors.
From Rob Pike himself: "It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
However, the main design goal was to reduce build times at Google. This is why unused dependencies are a compile time error.
> This is why unused dependencies are a compile time error.
https://go.dev/doc/faq#unused_variabl...
> There are two reasons for having no warnings. First, if it’s worth complaining about, it’s worth fixing in the code. (Conversely, if it’s not worth fixing, it’s not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.
I believe this was a mistake (one that sadly Zig also follows). In practice there are too many things that wouldn't make sense being compiler errors, so you need to run a linter anyway. When you need to comment out or remove some code temporarily, it won't even build, and then you have to remove a chain of unused vars/imports until it lets you. It's just annoying.
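The usual escape hatch while debugging is a blank-identifier assignment (a toy sketch):

```go
package main

import "fmt"

func compute() int { return 42 }

func main() {
	x := compute()

	// Commenting out both lines below would fail the build with
	// "declared and not used: x"; assigning to the blank identifier
	// is the conventional workaround while debugging.
	_ = x
	// fmt.Println("result:", x)

	fmt.Println("done")
}
```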
Meanwhile, unlinted Go programs are full of little bugs, e.g. unchecked errors or err-var misuse. If only there were warnings...
Yeah, but just going back to warnings would be a regression.
I believe the correct approach is to offer two build modes: release and debug.
Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
Release is the default, is strict and runs fast.
That way you can mess about in development all you want, but need to clean up before releasing. It would also take the pressure off having release builds compile fast, allowing for more optimisation passes.
That doesn't make any sense, you'd still need to run the linters on release. Why bail out on "unused var" and not on actually harmful stuff.
> Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
At least in the golang / unused-vars at Google case, allowing unused vars is explicitly one of the things that makes compilation slower.
In that case it's not "faster compilation as in less optimization". It's "faster compilation as in don't have to chase down and potentially compile more parts of a 5,000,000,000 line codebase because an unused var isn't bringing in a dependency that gets immediately dropped on the floor".
So it's kinda an orthogonal concern.
Accidentally pulling in a unused dependency during development is, if not a purely hypothetical scenario, at least an extreme edge case. During debug, most of the times you already built those 5000000000 lines while trying to reproduce a problem on the original version of the code. Since that didn’t help, you now want to try commenting out one function call. Beep! Unused var.
Right, I meant that the binary should run slowly on purpose, so that people don't end up defaulting to just using the debug build. A nice way of doing so without just putting `sleep()`s everywhere would be to enable extra safety checks.
I feel like people always take the designed for juniors thing the wrong way by implying that beneficial (to general software engineering) features or ideas were left out as a trade off to make the language easier to learn at the cost of what the language could be to a senior. I don't think the go designers see these as opposing trade offs.
Whats good for the junior can be good for the senior. I think PL values have leaned a little too hard towards valuing complexity and abstract 'purity' while go was a break away from that that has proved successful but controversial.
> This is why unused dependencies are a compile time error.
I think my favourite bit of Go opinionatedness is the code formatting.
K&R or GTFO.
Oh you don't like your opening bracket on the same line? Tough shit, syntax error.
But it also has the advantage that you can read a lot of code from other devs without twisting your eyes sideways, because nobody has their own bespoke style.
Exactly.
"This is Go. You write it this way. Not that way. Write it this way and everyone can understand it."
I wish I was better at writing Go, because I'm in the middle of writing a massive and complex project in Go with a lot of difficult network stuff. But you know what they say: if you want to eat a whole cow, you just have to pick an end and start eating.
Yep ... it's like people never read the main devs' stated motivations. The ability for people to read each other's code was a main point.
I don't know, but for me a lot of attacks on Go come from non-Go developers, VERY often Rust devs. When I started Go, it was always Rust devs in /r/programming pushing Rust as the next best thing, the whole "rewrite everything in Rust"...
About 10 years ago I learned Rust, and these days I can barely read the code anymore with the tons of new syntax that got added. It's like they forgot the lessons from C++...
> I don't know but for me a lot of attacks on Go, often come from non-go developers, VERY often Rust devs.
I see it as a bit like Python and Perl. I used to use both but ended up mostly using Python. They're different languages, for sure, but they work in similar ways and have similar goals. One isn't "better" than the other. You hardly ever see Perl now, I guess in the same way there's a lot of technology that used to be everywhere but is now mostly gone.
I wanted to pick a not-C language to write a thing to deal with a complex but well-documented protocol (GD92, and we'll see how many people here know what that is) that only has proprietary software implementing it, and I asked if Go or Rust would be a good fit. Someone told me that Go is great for concurrent programming particularly to do with networks, and Rust is also great for concurrent processing and takes type safety very seriously. Well then, I guess I want to pick apart network packets where I need to play fast and loose with ints and strings a bit, so maybe I'll use Go and tread carefully. A year later, I have a functional prototype, maybe close to MVP, written in Go (and a bit of Lua, because why not).
The Go folks seem to be a lot more fun to be around than the Rust folks.
But at least they're nothing like the Ruby on Rails folks.
Just because it was a design goal doesn't mean it succeeded ;)
From Russ Cox this time: "Q. What language do you think Go is trying to displace? ... One of the surprises for me has been the variety of languages that new Go programmers used to use. When we launched, we were trying to explain Go to C++ programmers, but many of the programmers Go has attracted have come from more dynamic languages like Python or Ruby."
It's interesting that I've also heard the same from people involved in Rust. Expecting more interest from C++ programmers and being surprised by the numbers of Ruby/Python programmers interested.
I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.
The people writing C++ either don't need much convincing to switch, because they see the value, or are unlikely to give it up anytime soon, because they don't see anything Rust does as useful to them; there's very little middle ground. People from higher-level languages, on the other hand, see in Rust a way to break into a space they would otherwise not attempt because it would take too long to reach proficiency. The hard part of Rust is trying to simultaneously have hard-to-misuse APIs and no additional performance penalty (however small). If you relax either of those goals (is it really a problem if you call that method through a v-table?), then Rust becomes much easier to write. I think a GC'd Rust would already be a nice language I'd love to use, like a less convoluted Scala; it just wouldn't have fit in the free square that ensured a niche for it to exist and grow, and would likely have died on the vine.
I think on average C++ programmers are more interested in Rust than in Go. But C programmers are on average probably not interested in either. I do agree that the accessible nature of the two languages (or at least perception thereof) compared to C and C++ is probably why there's more people coming from higher-level languages interested in the benefits of static typing and better performance.
It really depends on product area.
No.
I write a lot of Go, a bit of Rust, and Zig is slowly creeping in.
To add to the above comment, a lot of what Go does encourages readability... Yes, it feels pedantic at times (error handling), but those cultural and stylistic elements that seem painful to write make reading better.
Portable binaries are a blessing, fast compile times, and the choices made around 3rd party libraries and vendoring are all just icing on the cake.
That 80 percent feeling is more than just the language as written; it's all the things that come along with it...
Error handling is objectively terrible in Go: the explicitness of the always-repeating pattern just makes humans pay less attention to potentially problematic lines and otherwise increases the noise-to-signal ratio.
Error handling isn't even a pain to write any more with AI autocomplete which gets it right 95%+ of the time in my experience.
You're not wrong but... there is a large contingent of the Go community that has a rather strong reaction to AI/ML/LLM generated code at any level.
I keep using the analogy, that the tools are just nail guns for office workers but some people remain sticks in the mud.
Nail guns are great because they're instant and consistent. You point, you shoot, and you've unimpeachably bonded two bits of wood.
For non-trivial tasks, AI is neither of those. Anything you do with AI needs to be carefully reviewed to correct hallucinations and incorporate it into your mental model of the codebase. You point, you shoot, and that's just the first 10-20% of the effort you need to move past this piece of code. Some people like this tradeoff, and fair enough, but that's nothing like a nailgun.
For trivial tasks, AI is barely worth the effort of prompting. If I really hated typing `if err != nil { return nil, fmt.Errorf("doing x: %w", err) }` so much, I'd make it an editor snippet or macro.
> Nail guns are great because they're instant and consistent. You point, you shoot, and you've unimpeachably bonded two bits of wood.
You missed it.
If I give a random person off the street a nail gun, circular saw and a stack of wood are they going to do a better job building something than a carpenter with a hammer and hand saw?
> Anything you do with AI needs to be carefully reviewed
Yes, and so does a JR engineer, so do your peers, so do you. Are you not doing code reviews?
> If I give a random person off the street a nail gun, circular saw and a stack of wood
If this is meant to be an analogy for AI, it doesn't make sense. We've seen what happens when random people off the street try to vibe-code applications. They consistently get hacked.
> Yes, and so does a JR engineer
Any junior dev who consistently wrote code like an AI model and did not improve with feedback would get fired.
You are responsible for the AI code you check in. It's your reputation on the line. If people felt the need to assume that much responsibility for all code they review, they'd insist on writing it themselves instead.
> there is a large contingent of the Go community that has a rather strong reaction to AI/ML/LLM generated code at any level.
This Go community that you speak of isn't bothered by writing the boilerplate themselves in the first place, though. For everyone else the LLMs provide.
> Which makes sense, it's got the smallest language spec of any of them
I think go is fairly small, too, but “size of spec” is not always a good measure for that. Some specs are very tight, others fairly loose, and tightness makes specs larger (example: Swift’s language reference doesn’t even claim to define the full language. https://docs.swift.org/swift-book/documentation/the-swift-pr...: “The grammar described here is intended to help you understand the language in more detail, rather than to allow you to directly implement a parser or compiler.”)
(Also, browsing golang’s spec, I think I spotted an error in https://go.dev/ref/spec#Integer_literals. The grammar says:
decimal_lit = "0" | ( "1" … "9" ) [ [ "_" ] decimal_digits ] .
Given that, how can 0600 and 0_600 be valid integer literals in the examples?)
You're looking at the wrong production. They are octal literals:
octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
Thanks! Never considered that a 21st-century language designed for "power of two bits per word" hardware would keep that feature from the 1970s, so I never looked at that production.
Are there other modern languages that still have that?
0600 and 0_600 are octal literals:
octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
Never mind, I was wrong. Here's a playground showing how Go parses each one: https://go.dev/play/p/hyWPkL_9C5W
> Octals must start with zero and then o/O literals.
No, the o/O is optional (hence in square brackets), only the leading zero is required. All of these are valid octal literals in Go:
0600 (zero six zero zero)
0_600 (zero underscore six zero zero)
0o600 (zero lower-case-letter-o six zero zero)
0O600 (zero upper-case-letter-o six zero zero)
My bad! I was wrong; I added a playground above demonstrating the parsing behavior.
My original comment was incorrect. These are being parsed as octals, not decimals: https://go.dev/play/p/hyWPkL_9C5W
I don't understand the framing you have here, of Rust being an asymptote of language capability. It isn't. It's its own set of tradeoffs. In 2025, it would not make much sense to write a browser in Go. But there are a lot of network services it doesn't really make sense to write in Rust: you give up a lot (colored functions, the borrow checker) to avoid GC and goroutines.
Rust is great. One of the stupidest things in modern programming practice is the slapfight between these two language communities.
Unfortunately, it's the remaining 20% of Rust features that provide 80% of its usefulness.
Language can be a bottleneck if there's something huge missing from it that you need, like how many of them didn't have first-class support for cooperative multitasking, or maybe you need it to be compiled, or not compiled, or GC vs no GC. Go started out with solid green threading, while afaik no major lang/runtime had something comparable at the time (Java now does, supposedly).
The thing people tend to overvalue is the little syntax differences, like how Scala wanted to be a nicer Java, or even ObjC vs Swift before the latter got async/await.
I'll be the one to nitpick, but Scala never intended to be a nicer Java. It was and still is an academic exercise in compiler and language theory. Also, judging by Kotlin's decent strides, "little syntax differences" get you a long way on a competent VM/runtime/stdlib.
Kotlin's important feature is the cooperative multitasking. Java code has been mangled all these years to work around not having that. I don't think many would justify the switch to Kotlin otherwise.
It's probably an important feature now, but it's a recent one in this context.
Oh true, I thought it was older
Similar story for me. I was looking for a language that just got out of the way. That didn't require me to learn a full impenetrable DSL just to add a few dependencies, and which could easily produce an artifact that I could share around without needing to make sure the target machine had all the right dependencies installed.
It really is a lovely language and ecosystem of tools, I think it does show its limitations fairly quickly when you want to build something a bit complex though. Really wish they would have added sumtypes
Go is getting more complex over time though. E.g. generics.
>> I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
By 20% of the effort, do you mean learning curve or productivity?
Funny thing is that also makes it easier on LLM / AI... Tried a project a while ago both creating the same thing in Rust and Go. Go's worked from the start, while Rust's version needed a lot of LLM interventions and fixes to get it to compile.
We shall not talk about compile time / resource usage differences ;)
I mean, Rust is nice, but compared to when I learned it like 10 years ago, it carries a lot more syntax these days, like it took too many cues from C++.
Go syntax, meanwhile, is still much the same as it was 10 years ago, with barely anything new. That may anger some people, but even so...
The only thing I'd love to see is reduced executable sizes, because pushing large executables over a dinky upload line for remote testing is not fun.
> I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
I don't see it. Can you say what 80% you feel like you're getting?
The type system doesn't feel anything alike, I guess the syntax is alike in the sense that Go is a semi-colon language and Rust though actually basically an ML deliberately dresses as a semi-colon language but otherwise not really. They're both relatively modern, so you get decent tooling out of the box.
But this feels a bit like somebody telling me that this new pizza restaurant does a cheese pizza that's 80% similar to the Duck Ho Fun from that little place near the extremely tacky student bar. It's not that Duck Ho Fun has nothing in common with cheese pizza; they're both best (in my opinion) if cooked very quickly with high heat. But there's not a lot of commonality.
> I don't see it. Can you say what 80% you feel like you're getting?
I read it as “80% of the way to Rust levels of reliability and performance.” That doesn’t mean that the type system or syntax is at all similar, but that you get some of the same benefits.
I might say that, “C gets you 80% of the way to assembly with 20% of the effort.” From context, you could make a reasonable guess that I’m talking about performance.
Yes, for me I've always pushed the limits of what kinds of memory and cpu usage I can get out of languages. NLP, text conversion, video encoding, image rendering, etc...
Rust beats Go in performance, but nothing like how far behind Java, C#, or scripting languages (Python, Ruby, TypeScript, etc.) are, from all the work I've done with them. With Go I get most of the performance of Rust with very little effort, plus a fully contained stdlib/test suite/package manager/formatter/etc.
Rust is the most defect free language I have ever had the pleasure of working with. It's a language where you can almost be certain that if it compiles and if you wrote tests, you'll have no runtime bugs.
I can only think of two production bugs I've written in Rust this year. Minor bugs. And I write a lot of Rust.
The language has very intentional design around error handling: Result<T,E>, Option<T>, match, if let, functional predicates, mapping, `?`, etc.
Go, on the other hand, has nil and extremely exhausting boilerplate error checking.
Honestly, Go has been one of my worst languages outside of Python, Ruby, and JavaScript for error introduction. It's a total pain in the ass to handle errors and exceptional behavior. And this leads to making mistakes and stupid gotchas.
I'm so glad newer languages are picking up on and copying Rust's design choices from day one. It's a godsend to be done with null and exceptions.
I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust. I need it for my smaller tasks and scripting. Swift is kind of nice, but it's too Apple centric and hard to use outside of Apple platforms.
I'm honestly totally content to keep using Rust in a wide variety of problem domains. It's an S-tier language.
> I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile
It could as well be Haskell :) Only partly a joke: https://zignar.net/2021/07/09/why-haskell-became-my-favorite...
Borgo could be that language for you. It compiles down to Go, and uses constructs like Option<T> instead of nil, Result<T,E> instead of multiple return values, etc. https://github.com/borgo-lang/borgo
> I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust
OCaml is pretty much that, with a very direct relationship with Rust, so it will even feel familiar.
I agree with a lot of what you said. I'm hoping Rust will warm on me as I improve in it. I hate nil/null.
> Go... extremely exhausting boilerplate error checking
This actually isn't correct. That's because Go is the only language that makes you think about errors at every step. If you just ignored them and passed them up like exceptions or maybes, you'd basically be exchanging handling errors for assuming the whole thing passes or fails.
If you write Go-style error checking in Rust (or Java, or any other language), then Go is often less noisy.
It's just two very different approaches to error handling that the dev community is split on. Here's a pretty good explanation from a rust dev: https://www.youtube.com/watch?v=YZhwOWvoR3I
It’s very common in Go to just pass the error on since there’s no way to handle it in that layer.
Rust forces you to think about errors exactly as much, but in the common case of passing it on it’s more ergonomic.
just be careful with unwrap :)
Go is in the same performance profile as Java and C#. There are tons of benchmarks that support this.
1) For one-off scripts, and 2) if you ignore memory.
You can make almost anything faster if you provide more memory to store data in more optimized formats. That doesn't make the language itself faster.
Part of the problem is that Java in the real world requires an unreasonable number of classes and 3rd party libraries. Even for basic stuff like JSON marshaling. The Java stdlib is just not very useful.
Between these two points, all my production Java systems easily use 8x more memory and still barely match the performance of my Go systems.
I genuinely can’t think of anything the Java standard library is missing, apart from a json parser which is being added.
It's a matter of preference. I prefer Java's standard library because at least it has a generic Set data structure, and C#'s standard library does have a JSON parser.
I don't think discussions about what is in the standard library really refute anything about Go being within the same performance profile, though.
Memory is the most common tradeoff engineers make for better performance. You can trivially do so yourself with java, feel free to cut down the heap size and Java's GC will happily chug along 10-100 times as often without a second thought, they are beasts. The important metric is that Java's GC will be able to keep up with most workloads, and it won't needlessly block user threads from doing their work. Also, not running the GC as often makes Java use surprisingly small amounts of energy.
As for the stdlib, Go's is certainly impressive, but come on, I wouldn't even say that in the general case Java's standard library is smaller. It just so happens that Go was developed with the web in mind almost exclusively, while Java has a wider scope. Nonetheless, the Java standard library is certainly among the best in richness.
Java’s collectors vastly outperform Go’s. Look at the Debian binary tree benchmarks [0]. Go just uses less memory because it’s AOT compiled from the start and Java’s strategy up until recently is to never return memory to the OS. Java programs are typically on servers where it’s the only application running.
[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Also -- Java versus Java native-image
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
IIRC the native image GC is still the serial GC by default. Which would probably perform the worst out of all the available GCs.
I know on HotSpot they’re planning to make G1 the default for every situation. Even where it would previously choose the serial GC.
[dead]
I guess the 80% would be a reasonably performant compiled binary with easily managed dependencies? And the extra 20% would be the additional performance and peace of mind provided by the strictness of the Rust compiler.
Single binary deployment was a big deal when Go was young; that might be worth a few percent. Also: automatically avoiding entire categories of potential vulnerabilities due to language-level design choices and features. Not compile times though ;)
Wild guess but, with the current JS/python dominance, maybe it’s just the benefits of a (modern) compiled language.
I love Go. One thing I haven't seen noted here is how great it is for use in monorepos. Adding a new application is just a matter of making a folder and putting a main-packaged Go file with a main() func in it. Running `go install ./...` at the root takes care of compiling everything quickly and easily.
This combined with the ease of building CLI programs has been an absolute godsend in the past when I've had to quickly spin up CLI tools which use business logic code to fix things.
I don't understand how this isn't also true for practically every other language?
It just isn't. There's nothing stopping other languages from being that easy but very few even try.
Go sees itself more as a total dev environment than just a language. There's integrated build tooling, package management, toolchain management, mono repo tools, testing, fuzzing, coverage, documentation, formatting, code analysis tools, performance tools...everything integrated in a single binary and it doesn't feel bloated at all.
You see a lot of modern runtimes and languages have learned from Go. Deno, Bun and even Rust took a lot of their cues from Go. It's understood now that you need a lot more than just a compiler/runtime to be useful. In languages that don't have that kind of tooling the community is trying to make them more Go-like, for example `uv` for Python.
Getting started in a Go project is ridiculously easy because of that integrated approach.
I personally like Go and appreciate its simplicity and tooling and everything but the example given is "making a folder" and "putting a ... main() func" in it. But, like, this is exactly as easy with every single other language that I can think of.
The second part, running `go install ./...` at the root, is actually terrible and risky but, still, trivial with make (a - literally - 50-year-old program) or shell or just whatever.
I get that the feelz are nice and all (just go $subcmd) but.. come on.
> "making a folder" and "putting a ... main() func" in it
You can't do that with python for instance. First, you need a python interpreter on the target machine, and on top of that you need the correct version of the interpreter. If yours is too old or not old enough, things might break. And then, you need to install all the dependencies. The correct version of each, as well. And they might not exist on your system, or conflict with some other lib you have on your target machine.
Same problem with any other language that needs a runtime installed, including Java and C# obviously.
C/C++ dependency management is a nightmare too.
Rust is slightly better, but there was no production-ready rust 16 years ago (or even 10 years ago).
You also need a version of the go compiler, possibly one new enough to handle some //go:magic:comments.
I agree that static linking is great and that python sucks but I was trying to say I can, very easily, mkdir new-py-program/app.py and stick __main__ in it or mkdir new-perl-program/app.pl or mkdir my-new-c-file/main.c etc.
For 2/3 of the above I can even make easy/single executable files go-style.
Nowadays, with uv (and probably some other tools too) it's pretty easy to ship a python program on a machine that doesn't even have python on it, so it's pretty much a solved problem today (in most cases). But 5 or 10 years ago it was a real hassle that go solved elegantly. Yes you can make python executables but they are like 100 Mb even for a simple hello world. It's a last resort solution.
I don't understand your comment about magic comments. You don't need them to cross-compile a program. I was already doing that routinely 10 years ago. All I needed was `GOOS=linux GOARCH=386 go build myprog && scp myprog myserver:`
[dead]
> But, like, this is exactly as easy with every single other language that I can think of.
I mean, not exactly. Rust (or rather Cargo) requires you to declare binaries in your Cargo.toml, for example. It also, AIUI, requires a specific source layout - binaries need to be named `main.rs` or be in `src/bin`. It's a lot more ceremony and it has actively annoyed me whenever I tried out Rust.
> The second part "Running go install at the root ./.." is actually terrible and risky but, still, trivial with make (a - literally - 50 year old program) or shell or just whatever.
Again, no, it is not trivial. Using make requires you to write a Makefile. Using shell requires you to write a shell script.
I'm not saying any of this is prohibitive - or even that they should convince anyone to use Go - but it is just not true to say that other languages make this just as easy as Go.
> declare binaries in your Cargo.toml, for example. It also, AIUI, requires a specific source layout - binaries need to be named `main.rs` or be in `src/bin`.
It does not. Those are the defaults. You can configure something else if you wish. Most people just don’t bother, because there’s not really advantages to breaking with default locations 99% of the time.
I think one other major part is that, compared to e.g. make the build process is more-or-less the same for all Go projects. There is some variation, and some (newcomers I want to think) still like to wrap go commands into Makefile, but it's still generally very easy to understand and very uniform across different Go projects
The distinction, I believe, is between "possible" and "easy". Go makes a lot of very specific things easy via some of its design choices with language and tooling.
As a counter example, it seems like e.g. c++ is mostly concerned about making things possible and rarely about easy.
It's not true of C/C++, which need changes to the build system. It's also true for Rust workspaces, which is how I'd recommend structuring monorepos, although it is generally easy (you just need to add a small Cargo.toml file); or you can skip the workspace, but then you still need to declare the binary, if I recall correctly.
It's true for any C/C++ project which bothers to write 10 lines of GNU `make` code.
The problem is that many projects still pander to inferior 1980s-era `make` implementations, and as such rely heavily on the abominations that are autotools and cmake.
Autotools is far too pessimistic, but there are still considerable differences between different compiling environments. Tooling like CMake will always be necessary unless you are only targeting unix and your dependency graph isn't terribly deep.
Even in that specific niche I find using a programmatically generated ninja file to be a far superior experience to GNU make.
This is the theory, yes. And then reality comes bursting through the door and you are confronted with the awfulness that is the C/++ toolchain. Starting with multi-OS, multi-architecture builds, and then plunging into the cesspit that is managing third party dependencies.
Let's be real. C/++ has nothing even approaching a sane way to do builds. It just ranges from slightly annoying to full-on dumpster fire.
C/C++ library dependencies are a thing, and there's no universal solution to acquiring and installing them.
The most universal thing in C/C++ is vendoring, IMHO.
If you are distributing source, you distribute everything. Then, it only needs a compiler and libc. That vendored package is tested, and it works on your platform, so there's no guesswork.
That might work for small, self-contained dependencies in projects which aren't themselves libraries. But good luck vendoring your deps when one of those deps is something really large like Qt or WebKit, or if you're building a library that other applications need to be able to link against.
Not really since those vendored libraries would have their own dependencies as well
No, you don't. All you need is to create a `src/bin` folder, and any file with a `main` fn in it will produce its own binary.
With bazel/buck/pants/etc. it won't be a problem for other major languages.
Have you worked with those before? "Quickly and easily" are not exactly what comes to mind.
Yes, at Google, roughly 10 years ago - great experience. Recently with my own small projects, and on Windows (where it lacks the same charm... yet - but eventually!)
But yes, I worked - mainly Java (back then) with GWT, some Python, Sawzall, R, some other internal langs.
Agreed. This use case is not mentioned enough.
Contributing to a new Go codebase is easy.
Go codebases all look alike. Not only does the language have very few primitives, but the code conventions enforced by the standard library, gofmt, and golangci-lint mean that the structure of codebases is very similar.
Many language communities can't even agree on the build tooling.
I'm still trying to convince the scientists I work with that they should format their code or use linters. Making them mandatory in Go was a good decision.
> I'm still trying to convince the scientists I work with that they should format their code or use linters.
Consider adding a pre-commit hook if you are allowed to.
My group's repos enforce strict rules, theirs does not.
Yeah, I've been there. I would get passed down horribly formatted code from another repo and it showed the data scientists writing it barely knew what they were doing. It was their repo, we couldn't do anything about it. They wouldn't reformat the code, because they were afraid it would break. They also passed us a lot of Python, and you can see where they got this fear from.
I like that I can understand a Go file without deciphering 15 layers of macros.
I've just started learning Go and I really like this aspect. One way to do things, one way to format. The % operator is a bit confusing for a negative number; that took me down a little rabbit hole, learning how a remainder can differ from how I normally think about it.