UUID package coming to Go standard library

2026-03-07 · github.com

@mzattahri

I would like to suggest the addition to the standard library of a package to generate and parse UUID identifiers, specifically versions 3, 4 and 5.

The main reason I see to include it is that the most popular 3rd-party package (github.com/google/uuid) is a staple import in every server/db-based Go program, as confirmed by a quick GitHub code search.

Additionally:

Addendum

I would like to point out that Go is rather the exception than the norm with regard to including UUID support in its standard library.




Comments

  • By matja 2026-03-078:208 reply

    > UUID versions 1, 2, 3, 4, 5 are already outdated.

    Interesting comment, since v4 is the only version that provides the maximal random bits and is recommended for use as a primary key for non-correlated rows in several distributed databases to counter hot-spotting and privacy issues.

    Edit: Context links for reference, these recommend UUIDv4:

    https://www.cockroachlabs.com/docs/stable/uuid

    https://docs.cloud.google.com/spanner/docs/schema-design#uui...

    • By da_chicken 2026-03-0712:56

      Yeah, I thought it was a strange comment, too. v7 is great when you explicitly need monotonicity, but encoded timestamps can expose information about your system. v4 is still very valid.

    • By jandrewrogers 2026-03-0721:19

      I think "outdated" was a poor choice of words. It is a failure to meet application requirements, which has more to do with design than age. Every standardized UUID is expressly prohibited in some application contexts due to material deficiencies, including v4. That includes newer standards like v7 and v8.

      In practice, most orgs with sufficiently large and complex data models use the term "UUID" to mean a pure 128-bit value that makes no reference to the UUID standard. It is not difficult to find yourself with a set of application requirements that cannot be satisfied with a standardized UUID.

      The sophistication of our use case scenarios for UUIDs exceeds their original design assumptions. They don't readily support every operation you might want to do on a UUID.

    • By zadikian 2026-03-078:541 reply

      Yeah, v4 is the go-to, and you only use something else if you have a very specific reason, like needing rough ordering

      • By jodleif 2026-03-0710:441 reply

        Deterministic UUIDs are a very standard use case

        • By 8organicbits 2026-03-0713:203 reply

          You're talking about the hash-based UUIDv3/v5? I haven't found examples of those being used, but I'm curious.

          Using MD5 or 122 bits of a SHA1 hash seems questionable now that both algorithms have known collisions. Using 122 bits of a SHA2/3 seems pretty limited too. Maybe if you've got trusted inputs?

          • By buffalobuffalo 2026-03-0722:51

            I use these a lot. My favorite use case is templates, especially ones that were not initially planned in the architecture.

            Let's say i have some entity like an "organization" that has data that spans several different tables. I want to use that organization as a "parent" in such a way where i can clone them to create new "child" organizations structured the same way they are. I also want to periodically be able to pull changes from the parent organization down into the child organization.

            If the primary keys for all tables involved are UUIDs, I can accomplish this very easily by mapping all IDs in the relevant tables `id => uuid5(id, childOrgId)`. This can be done to all join tables, foreign keys, etc. The end result is a perfect "child" clone of the organization with all data relations still in place. This data can be refreshed from the parent organization any time simply by repeating the process.
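            To make the remapping concrete, here's a stdlib-only sketch of a v5 derivation (the org and row identifiers below are made up; in practice you'd call something like github.com/google/uuid's NewSHA1, which implements the same construction):

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// uuid5 computes an RFC 9562 name-based (SHA-1) version-5 UUID from a
// namespace UUID and a name: hash, truncate to 128 bits, stamp the
// version and variant bits.
func uuid5(ns [16]byte, name []byte) [16]byte {
	h := sha1.New()
	h.Write(ns[:])
	h.Write(name)
	sum := h.Sum(nil)
	var u [16]byte
	copy(u[:], sum[:16])
	u[6] = (u[6] & 0x0f) | 0x50 // version 5
	u[8] = (u[8] & 0x3f) | 0x80 // RFC variant
	return u
}

// canonical renders the 8-4-4-4-12 text form.
func canonical(u [16]byte) string {
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[:4], u[4:6], u[6:8], u[8:10], u[10:])
}

func main() {
	// Hypothetical IDs: the child org's UUID acts as the namespace, the
	// parent row's primary key as the name. Re-running the clone always
	// maps the same parent row to the same child ID, so refreshes line up.
	childOrg := [16]byte{0xde, 0xad, 0xbe, 0xef, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}
	parentRowID := "parent-row-uuid-goes-here"
	fmt.Println(canonical(uuid5(childOrg, []byte(parentRowID))))
}
```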

          • By eureka7 2026-03-0716:45

            I remember using them in a massive SQL query that needed to generate a GIS data set from multiple tables with an ungodly amount of JOINs and sub-queries to achieve ID stability. Don't ask :p

            For those ~~curious~~ worried, no, this was not a security sensitive context.

          • By zadikian 2026-03-0717:561 reply

            Common one is if you want two structs deemed "equivalent" based on a few fields to get the same ID, and you're only concerned about accidental collision. There are valid use cases for that, but I've also seen it misused often.

            v7 rough ordering also helps as a PK in certain sharded DBs, while others want random, or nonsharded ones usually just serial int.

            • By 8organicbits 2026-03-0719:471 reply

              Have you seen UUIDv3/v5 used there though? I've seen lots of md5 historically and sha variants recently, but not the UUID approach.

              • By zadikian 2026-03-082:35

                Yeah, I've seen both 3 and 5 used, not just hashes in some custom format. That way it works with Postgres uuid type etc.

    • By gzread 2026-03-0712:041 reply

      If you want 128 bits of randomness why not use 128 bits of randomness? A random UUID presupposes the random number has to fit in UUID format.

      • By da_chicken 2026-03-0713:031 reply

        122 bits of randomness.

        It's the same reason we use UTF-8. It's well supported. UUIDs are well supported by most languages and storage systems. You don't have to worry about endianness or serialization. It's not a thing you have to think about. It's already been solved and optimized.
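        As a sketch of the "already solved" part, this is roughly all a v4 generator does under the hood, stdlib only (a real program would use a maintained library rather than hand-rolling it):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newV4 fills 16 bytes from a CSPRNG, then overwrites 6 bits with the
// version and variant markers, leaving 122 bits of randomness
// (RFC 9562, section 5.4).
func newV4() ([16]byte, error) {
	var u [16]byte
	if _, err := rand.Read(u[:]); err != nil {
		return u, err
	}
	u[6] = (u[6] & 0x0f) | 0x40 // version 4
	u[8] = (u[8] & 0x3f) | 0x80 // RFC variant
	return u, nil
}

func main() {
	u, err := newV4()
	if err != nil {
		panic(err)
	}
	// The canonical 8-4-4-4-12 form every system parses the same way.
	fmt.Printf("%x-%x-%x-%x-%x\n", u[:4], u[4:6], u[6:8], u[8:10], u[10:])
}
```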

        • By gzread 2026-03-0713:071 reply

          byte[16] is well supported by most languages and storage systems.

          • By da_chicken 2026-03-0713:594 reply

            Sure.

            Now generate your random ID. Did you use a CSPRNG, or were your devs lazy and just used a PRNG? Are you doing that every time you're generating one of these IDs in any system that might need to communicate with your API? Or maybe they just generated one random number, and now they're adding 1 every time.

            Now transfer it over a wire. Are you sure the way you're serializing it is how the remote system will deserialize it? Maybe you should use a string representation, since character transmission is a solved problem with UTF-8. OK, so who decides what that canonical representation is? How do we make it recognizable as an ID without looking like something that people should do arithmetic with?

            It's not like random IDs were a new idea in 2002.

            • By 10000truths 2026-03-0715:272 reply

              None of these are rocket-science problems, they're just standardization issues. You build a library with your generate_id/serialize_id/deserialize_id functions that work with a wrapper type, and tell your devs to use that library. UUID libraries are exactly that, except backed by an RFC.

              • By da_chicken 2026-03-0719:10

                Of course they're not rocket science. But the question here is, "Why don't you use 16 random bytes instead of a UUIDv4?" It's not a question about rocket science. The answer is still, "Because UUIDv4 is still a better way to do it." The UUID standard solves the second- and third-tier problems and knock-on effects you don't think about until you've run a system for a while, or until you start adding multiple information systems that need to interact with the same data.

                But using UUIDv4 shouldn't be rocket science, either. UUID support should be built into a language intended for web applications, database applications, or business applications. That's why you're using Go or C# instead of C. And Go is somewhat focused on micro-service architectures. It's going to need to serialize and deserialize objects regularly.

            • By jkrejcha 2026-03-088:081 reply

              > Now generate your random ID. Did you use a CSPRNG, or were your devs lazy and just used a PRNG?

              There's nothing about UUIDs that requires them to be cryptographically secure. Many programming language libraries don't make them so (and some explicitly recommend against using them if you need cryptographically strong randomness).

              • By foxglacier 2026-03-0819:21

                Not for security but to make sure you don't accidentally reuse the same seed. I've done that before when the PRNG seed was the time the application started and it turns out you can run multiple instances at the same time.

            • By gzread 2026-03-0714:243 reply

              How's your UUIDv4 generated?

              > Are you sure the way you're serializing it is how the remote system will deserialize it?

              It's 16 bytes. There's no serialization.

              • By wredcoll 2026-03-0714:532 reply

                What do they look like when I put it in a url?

                • By pphysch 2026-03-0717:141 reply

                  Use whatever encoding you want? Base64 is probably one of the most practical, but you're not obligated to use that.

                  • By bastawhiz 2026-03-0719:401 reply

                    UUIDs don't use base64

                    • By pphysch 2026-03-0821:151 reply

                      You can absolutely encode a UUID in base64, as you can any string of 128 bits.

                      • By bastawhiz 2026-03-0915:421 reply

                        128 random bits in some random format aren't a uuid. 0.2ml of water isn't a raindrop. If I say "you can provide me with a uuid" and you give me a base64-encoded string, it's getting rejected by validation. If I say "this text needs to be a Unicode string" and you give me a base64-encoded Unicode string's byte array, it's not going to go well.

                        • By pphysch 2026-03-1017:34

                          Why are you implying that converting between base64 and the standard UUID representation (hyphen-delimited hexadecimal) is more than a trivial operation? Either client or server can do this at any point.

                          Does Postgres not truly support UUID because it internally represents it as 128 bits instead of a huge number of encoded bytes in the standard representation? Of course not.
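                          To show how trivial the conversion is, a small Go sketch (the function names are made up; any lossless 128-bit encoding works the same way):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"strings"
)

// canonicalToB64 re-encodes a hyphen-delimited hex UUID as unpadded
// URL-safe base64: 22 characters instead of 36.
func canonicalToB64(s string) (string, error) {
	raw, err := hex.DecodeString(strings.ReplaceAll(s, "-", ""))
	if err != nil || len(raw) != 16 {
		return "", fmt.Errorf("not a UUID: %q", s)
	}
	return base64.RawURLEncoding.EncodeToString(raw), nil
}

// b64ToCanonical reverses it, restoring the standard representation.
func b64ToCanonical(s string) (string, error) {
	raw, err := base64.RawURLEncoding.DecodeString(s)
	if err != nil || len(raw) != 16 {
		return "", fmt.Errorf("not 128 bits: %q", s)
	}
	return fmt.Sprintf("%x-%x-%x-%x-%x",
		raw[:4], raw[4:6], raw[6:8], raw[8:10], raw[10:]), nil
}

func main() {
	b64, _ := canonicalToB64("123e4567-e89b-12d3-a456-426614174000")
	back, _ := b64ToCanonical(b64)
	fmt.Println(b64, back)
}
```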

              • By bastawhiz 2026-03-0719:391 reply

                > There's no serialization.

                Hex encoding with hyphens in the right spot isn't serialization?

              • By intelVISA 2026-03-0716:511 reply

                Vibe endian

            • By efilife 2026-03-0714:464 reply

              You are really making it seem like a huge problem. Generate random bytes, serialize to a string and store in a db. Done

              A downvote tells me nothing. Please tell me what I'm missing, maybe I could learn something

              • By bastawhiz 2026-03-0719:51

                > serialize to a string and store in a db

                Ah, here we are. If it's just bytes, why store it as a string? Sixteen bytes is just a 128-bit integer, don't waste the space. So now the DB needs to know how to convert your string back to an integer. And back to a string when you ask for it.

                "Well why not just keep it as an integer?"

                Sure, in which base? With leading zeroes as padding?

                But now you also need to handle this in JavaScript, where you have to know to deserialize it to a Bigint or Buffer (or Uint8Array).

                UUIDs just mean you don't need to do any of this crap yourself. It's already there and it already works. Everything everywhere speaks the same UUIDs.

              • By TomatoCo 2026-03-0717:13

                You have to generate random bytes with sufficient entropy to avoid collisions and you have to have a consistent way to serialize it to a string. There's already a standard for this, it's called UUID.

              • By hamburglar 2026-03-0722:19

                It’s really not that complicated a problem. Don’t worry, you’ll certainly be able to solve all the problems yourself as you encounter them. What you end up with will be functionally equivalent to a proper UUID and will only have cost you man-months of pain, but then you will be able to truly understand the benefit of not spending your effort on easy problems that someone solved before you.

              • By zadikian 2026-03-0718:13

                It's not a huge problem. Uuid adds convenience over reinventing that wheel everywhere. And some of those wheels would use the wrong random or hash or encoding.

                (Downvote wasn't me)

    • By bootsmann 2026-03-079:454 reply

      Really? Doesn’t v4 make the inserts into the B-tree pretty messy, locality-wise? I was taught to use v7 because it allows writes to be a lot faster due to memory-efficient paging by the kernel (something you lose with v4, because the page of a subsequent write is entirely random).

      • By sintax 2026-03-0710:441 reply

        https://www.thenile.dev/blog/uuidv7#why-uuidv7 has some details: " UUID versions that are not time ordered, such as UUIDv4 (described in Section 5.4), have poor database-index locality. This means that new values created in succession are not close to each other in the index; thus, they require inserts to be performed at random locations. The resulting negative performance effects on the common structures used for this (B-tree and its variants) can be dramatic. ".

        Also mentioned on HN https://news.ycombinator.com/item?id=45323008

        • By ownagefool 2026-03-0711:52

          In more practical terms:-

          1. Users - your users table may not benefit from being ordered by a created_at (or uuidv7) index, because whether or not you need to query that data is tied to the user's activity rather than when they first on-boarded.

          2. Orders - the majority of your queries are recent-orders or historical-reporting type queries, which should benefit from a created_at (or uuidv7) index.

          Obviously the argument is then that you're leaking data in the key, but my personal take is that this is overstated. You might not want to tell people how old a User is, but you're pretty much always going to tell them how old an Order is.

      • By da_chicken 2026-03-0713:37

        It's memory and disk paging both.

        There's also a hot spot problem with databases. That's the performance problem with autoincrement integers. If you are always writing to the same page on disk, then every write has to lock the same page.

        Uuidv7 is a trade off between a messy b-tree (page splits) and a write page hot spot (latch contention). It's always on the right side of the b-tree, but it's spread out more to avoid hot spots.

        That still doesn't mean you should always use v7. It does reversibly encode a timestamp, and it could be used to determine the rate that ids are generated (analogous to the German tank problem). If the uuidv7 is monotonic, then it's worse for this issue.

      • By out_of_protocol 2026-03-0710:321 reply

        v7 exposes creation date, and maybe you don't want that. So, depends on use-case

        • By 1f60c 2026-03-0712:331 reply

          I think I read something once about using v7 internally and exposing v4 in your API.

          • By talkin 2026-03-0716:14

            Or even an autoincrement int primary key internally. Depending on your scale and env etc, but still fits enough use cases.

      • By matja 2026-03-0710:01

        In distributed databases I've worked with, there's usually something like a B-tree per key range, but there can be thousands of key ranges distributed over all the nodes in the cluster in parallel, each handling modifications in a LSM. The goal there is to distribute the storage and processing over all nodes equally, and that's why predictable/clustered IDs fail to do so well. That's different to the Postgres/MySQL scenario where you have one large B-tree per index.

    • By pclmulqdq 2026-03-0712:081 reply

      I believe current official guidance if you want a lot of random data is to use v8, the "user-defined" UUID. The use of v4 is strictly less flexible here.

      • By 8organicbits 2026-03-0713:061 reply

        No, UUIDv8 offers 122 bits for vendor specific or experimental use cases. If you fill those bits randomly, you get the same amount of randomness as a v4. The spec is explicit that it does not replace v4 for random data use case.

        > To be clear, UUIDv8 is not a replacement for UUIDv4 (Section 5.4) where all 122 extra bits are filled with random data.

        https://www.rfc-editor.org/rfc/rfc9562.html#section-5.8-2

        • By pclmulqdq 2026-03-0716:341 reply

          Yes, vendor-specific data can be 100% random.

          • By 8organicbits 2026-03-0717:51

            It can be, but you should prefer UUIDv4 if you do that. One problem is that UUIDv8 does not promise uniqueness.

            > UUIDv8's uniqueness will be implementation specific and MUST NOT be assumed.

            Here's a spec compliant UUIDv8 implementation I made that doesn't produce unique IDs: https://github.com/robalexdev/uuidv8-xkcd-221

            So, given a spec-compliant UUIDv4 you can assume it is unique, but you'd need out-of-band information to make the same assumption about a UUIDv8.

            I wrote much more in a blog post: https://alexsci.com/blog/uuid-oops/

    • By lijok 2026-03-0715:41

      Have you considered using two uuids for more randomness

    • By arccy 2026-03-0710:09

      [flagged]

  • By vzaliva 2026-03-075:305 reply

    A slow day in Go-news land? :)

    It is heartwarming to see such a mundane small tech bit making the front page of HN when elsewhere it is debated whether programming as a profession is dead or, more broadly, whether AI will be enslaving humanity in the next decade. :)

    • By serial_dev 2026-03-076:433 reply

      It’s nice to have a break from AI FUD. It reminds me of a time when I could browse HN without getting anxiety immediately, because nowadays you can’t open a comment section without finding a comment about how you ngmi.

      • By JimDabell 2026-03-079:26

        Well fortunately you’re here to take what was a discussion completely unrelated to AI and drag it back around to AI again.

        If you’re tired of talking about AI, why did you post this?

      • By 0x696C6961 2026-03-076:495 reply

        Man... I spent the last 6 months writing code using voice chat with multiple concurrent Claude code agents using an orchestration system because I felt like that was the new required skill set.

        In the past few weeks I've started opening neovim again and just writing code. It's still 50/50 with a Claude code instance, but fuck I don't feel a big productivity difference.

        • By cwbriscoe 2026-03-076:572 reply

          I just write my own code and then ask AI to find any issues and correct them if I feel it is good advice. What AI is amazing at is writing most of my test cases. Saves me a lot of time.

          • By LtWorf 2026-03-078:593 reply

            I've seen tests doing:

            a = 1

            assert a == 1

            // many lines here where a is never used

            assert a == 1

            Yes AI test cases are awesome until you read what it's doing.

            • By ownagefool 2026-03-079:04

              To be fair, many human tests I've read do similar.

              Especially when folks are trying to push %-based test metrics and have types (and thus their tests assert types where the types can't really be wrong).

              I use AI to write tests. For many of them, the e2e tests fell into the pointless niche, but I was able to scope my API tests well enough to get a very high hit rate.

              The value of said API tests isn't unlimited. If I had to hand-roll them, I'm not sure I would have written as many, but they test a multitude of 400, 401, 402, 403, and 404s, and the tests themselves have absolutely caught issues such as a validator not mounting correctly, or the wrong error status code due to check ordering.

            • By alecbz 2026-03-0719:24

              It's good at writing/updating tedious test cases and fixtures when you're directing it more closely. But yes, it's not as great at coming up with what to test in the first place.

            • By gzread 2026-03-0712:101 reply

              I write assert(a==1) right before the line where a is assumed to be 1 (to skip a division by a) even if I know it's 1. Especially if I know it's 1!

              • By zahlman 2026-03-107:30

                The assertion here is not about implementation logic. GP presumably has in mind unit tests, specifically in a framework where the test logic is implemented with such assertions. (For the Python ecosystem, pytest is pretty much standard, and works that way.)

          • By porridgeraisin 2026-03-077:141 reply

            Yep. Especially for tests with mock data covering all sorts of extreme edge cases.

            • By koakuma-chan 2026-03-077:492 reply

              Don't use AI for that, it doesn't know what your real data looks like.

              • By porridgeraisin 2026-03-078:24

                The majority of data in typical message-passing plumbing code is a combination of opaque IDs, nominal strings, a few enums, and floats. It's mostly OK for these cases, I have found. Esp. in typed languages.

              • By UqWBcuFx6NV4r 2026-03-0713:19

                lol. okay. neither do you.

        • By tossandthrow 2026-03-078:52

          There has always been a difference between modality and substance.

          This is the same thing as picking a new smart programming language or package, or insisting that Dvorak layout is the only real way forward.

          Personally, I try to keep my distance from the modality discussion and get intimate with the substance.

        • By p0w3n3d 2026-03-079:09

          > voice chat ... required skill set

          But we're still required to go to the office, and talking to a computer in an open-plan office is highly unwelcome

        • By gzread 2026-03-0712:081 reply

          Right. If AI actually made you more productive, there would be more good software around, and we wouldn't have the METR study showing it makes you 20% slower.

          AI delivers the feeling of productivity and the ability to make endless PoCs. For some tasks it's actually good, of course, but writing high quality software by itself isn't one.

          • By UqWBcuFx6NV4r 2026-03-0713:171 reply

            Ah, yes. LLM-assisted development. That thing that is not at all changing, that thing that different people aren’t doing differently, and that thing that some people definitely aren’t way better at than others. I swear that some supposedly “smart” people on this website throw their ability to think critically out the window when they want to weigh in on the AI culture war. B-but the study!

            I can say with certainty that:

            1. LLM-assisted development has gotten significantly, materially better in the past 12 months.

            2. I would be incredibly skeptical of any study that’s been designed, executed, analysed, written about, published, and talked about here, within that period of time.

            This is the equivalent of a news headline starting with “science says…”.

            • By xerox13ster 2026-03-0716:55

              Nobody is interested in your piece of anecdata and asserting that something has gotten better without doing any studies on it, is the exact opposite of critical thinking.

              You are displaying the exact same thing that you were complaining about.

        • By stavros 2026-03-079:231 reply

          Really? For the past two weeks I've been writing code with AI and feel a massive productivity difference. I ended up with 22k LOC, which is probably around as much as I'd have written manually for the featureset at hand, except it would have taken me months.

          • By 0x696C6961 2026-03-0713:371 reply

            My work involves fixing/adding stuff in legacy systems. Most of the solutions AI comes up with are horrible. I've reverted back to putting problems on my whiteboard and just letting it percolate. I still let AI write most of the code once I know what I want. But I've stopped delegating any decision making to it.

            • By stavros 2026-03-0713:381 reply

              Ah, yeah, I can see that. It's not as good with legacy systems, I've found.

              • By richard_todd 2026-03-0720:55

                Well at least for what I do, success depends on having lots of unit tests to lean on, regardless of whether it is new or existing code. AI plus a hallucination-free feedback loop has been a huge productivity boost for me, personally. Plus it’s an incentive to make lots of good tests (which AI is also good at)

      • By MrBuddyCasino 2026-03-078:44

        A lot of people's business model is to capitalize on LLM anxiety to sell their PUA-tier courses.

    • By VLM 2026-03-0715:10

      It's a small tech bit, but a big architecture / management decision.

      Basically, who runs golang?

      The perfectionists are correct that UUIDs are awful, and if there's a pile of standards that all have small problems, the best thing you can do is make a totally new standard to add to the already-too-long list.

      The in-the-trenches system software devs want this BAD. Check out https://en.wikipedia.org/wiki/Universally_unique_identifier#... They want a library that flawlessly interops with everything on that list, ideally. Something you can trust, that will not deprecate a function you need in live code, and that just works. I admit a certain affinity to this perspective.

      The cryptobros want to wait; there is some temporary turmoil in UUID land. Not like "drama", but things are in flux, and it would be horrible for golang to be stuck permanently supporting some interim thing that officially gets dropped (or worse, under scrutiny turns out to have a security hole, yet for reverse compatibility with older/present golang would need permanent-ish support). Can't we just wait until 2027 or so? This is not the ideal time to set UUID policy in concrete. Just wait a couple more months, or a year or two? https://datatracker.ietf.org/doc/html/rfc9562

      I think I covered the three fighting groups pretty accurately and at least semi-fairly; I did make fun of the perfectionists a little, but cut me a break, everyone makes fun of those guys.

      So, yeah, a "small technical bit", but it's actually a super huge architectural / leadership / management decision.

      I hope they get it correct. I love golang and have a side thing with tinygo. If you're doing something with microcontrollers that doesn't use networking and you're not locked into a framework/RTOS, just use tinygo, it's SO cool. It's just fun. I wish tinygo had any (or decent) networking. Why would I need Zephyr if I have goroutines? Hmm.

      I've been around the block a few times with UUID-alike situations, and the worst thing they could decide is to swing to an extreme. They'll probably be OK; this is not golang's first time around the block either.

      It'll probably be OK. I hope.

    • By sourcegrift 2026-03-0720:49

      I'm seeing deep technical stuff after months, so I'm happy!

    • By YesThatTom2 2026-03-0712:241 reply

      Here we see Go haters in their natural habitat, the HN comment section.

      Watch as they stand at the watering hole, bored and listless. A sad look on their faces, knowing that now that Go has generics, all their joy has left their life. Like the dog that caught his tail, they are confused.

      One looks at his friends as if to say, "Now what?"

      Suddenly there is a noise.

      All heads turn as they see the HN post about UUIDs.

      One of the members pounces on it. "Why debate this when the entire industry is collapsing?"

      No reply. Silence.

      His peers give a half-hearted smile, as if to say, "Thanks for trying" but the truth is apparent. The joy of hating on programming languages is nil when AI is the only thing looking at code any more.

      The Go hater returns to the waterhole. Defeated.

      • By nightfly 2026-03-0712:40

        I think you're massively misreading the tone of the comment you're replying to

  • By KingOfCoders 2026-03-077:451 reply

    One thing I love about Go: no fancy latest-hype features until the language collapses or every upgrade becomes a nightmare; it just adds useful stuff and gets out of the way.

    • By grey-area 2026-03-0713:301 reply

      I know, I recently upgraded and skipped several releases without any issues with some large codebases.

      The compatibility guarantee is a massive win; it's exciting to have a boring language to build on that doesn't change much but just gradually gets better.

      • By knorker 2026-03-0716:442 reply

        Really? My experience is that, of C, C++, Go, Python, and Rust, Go BY FAR breaks code most often (except for the Python 2->3 change).

        Sure, most of that is not the compiler or standard library, but dependencies. But I'm not talking random opensource library (I can't blame the core for that), but things like protobuf breaking EVERY TIME. Or x/net, x/crypto, or whatever.

        But also yes, from random dependencies. It seems that language-culturally, Go authors are fine with breaking changes. Whereas I don't see that with people making Rust crates. And multiple times I've dug out C++ projects that I have not touched in 25 years, and they just work.

        • By grey-area 2026-03-0717:441 reply

          The stdlib has been very very stable since the first release - I still use some code from Go 1.0 days which has not evolved much.

          The x/ packages are more unstable yes, that's why they're outside stdlib, though I haven't personally noticed any breakage and have never been bitten by this. What breakage did you see?

          I think protobuf is notorious for breaking (but more from user changes). I don't use it I'm afraid so have no opinion on that, though it has gone through some major revisions so perhaps that's what you mean?

          I don't tend to use much third party code apart from the standard library and some x libraries (most libraries are internal to the org), I'm sure if you do have a lot of external dependencies you might have a different experience.

          • By knorker 2026-03-0720:271 reply

            Well, for C++ the backwards compatibility is even better. Unless you're using `gets()` or `auto_ptr`, old C++ code either just continues to compile perfectly, or was always broken.

            Sure, the Go standard library is in some sense bigger, so it's nice of them to not break that. But short of a Python2->3 or Perl5->6 migration, isn't that just table stakes for a language?

            The only good thing about Go is that its standard library has enough coverage to do a reasonable number of things. The only good thing. But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.

            > though [protobuf] has gone through some major revisions so perhaps that's what you mean?

            No, it seems it's broken way more often than that, requiring manual changes.

            • By grey-area 2026-03-0721:361 reply

              > But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.

              This is not my experience with my own or third party code. I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade, and perhaps one caused by changes to a third party library (sendgrid, who changed their API with breaking changes, not really a Go problem).

              A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?

              • By knorker 2026-03-0810:261 reply

                >> But any time you need to step outside of that

                "That" here refers to the standard library, so:

                > I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade

                I agree. But I'm saying it's a very low bar, since that's true for every language. But repeating myself I do acknowledge that Go in some senses has a bigger standard library. It's still just table stakes to not break stdlib.

                > A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?

                I don't want to dox myself by digging up examples. But it seems that maybe half the time dependabot or something encourages me to bump versions on a project that's otherwise "done", I have to spend time adjusting to non backwards compatible changes.

                This is not my experience at all in other languages. And you would expect it to be MORE common in languages where third party code is needed for many things that Go stdlib has built in, not less.

                I've made and maintained opensource code continuously since years started with "19", and aside from Java Applets, everything else just continues to work.

                > sendgrid, who changed their API with breaking changes, not really a Go problem

                To repeat: "It seems that language-culturally, Go authors are fine with breaking changes".

                • By grey-area 2026-03-0814:04

                  I disagree about culture, I’d say that’s the culture of js.

                  For Go I’d say it’s the opposite and you have obviously been unlucky in your choices which you don’t want to talk about.

                  But it is not a universal experience. That is the only third party package with breaking changes I have experienced.

        • By herewulf 2026-03-0720:451 reply

          Isn't the x for experimental, meaning breaking API changes are expected?

          • By knorker 2026-03-0810:41

            Sure.

            To repeat: "It seems that language-culturally, Go authors are fine with breaking changes". I just chose x as examples of near-stdlib, as opposed to appearing to complain about some library made by some random person with skill issues or who had a reasonable opinion that since almost nobody uses the library, it's OK to break compat. Protobuf is another. (not to mention the GCP libraries, that both break and move URLs, and/or get deprecated for a rewrite every Friday)

            The standard library not breaking is table stakes for a language, so I find it hard to give credit to Go specifically for table stakes.

            And it's not like Go standard library is not a bit messy. As any library would be in order to maintain compatibility. E.g. net.Dialer has Timeout (and Deadline), but it also has DialContext, introduced later.

            If the Go standard library had managed to maintain table stakes compatibility without collecting cruft, that'd be more impressive. But as those are contradictory requirements in practice, we shouldn't expect that of any language.
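            As a small illustration of that kind of accumulated overlap, a sketch that bounds one dial twice, once per API generation (the dial target is a TEST-NET address, so the dial always fails):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// dialBounded shows the two overlapping ways to bound a dial that have
// accumulated in net.Dialer: the older Timeout field and the newer
// DialContext (added in Go 1.7). Whichever limit is tighter wins.
func dialBounded(addr string) error {
	d := &net.Dialer{Timeout: 50 * time.Millisecond} // pre-context API
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	conn, err := d.DialContext(ctx, "tcp", addr) // context-based API
	if err == nil {
		conn.Close()
	}
	return err
}

func main() {
	// 192.0.2.1 is TEST-NET-1 (RFC 5737), so this dial cannot succeed;
	// it fails by timeout or an unreachable-network error.
	fmt.Println(dialBounded("192.0.2.1:80"))
}
```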

HackerNews