Understanding the Go Compiler: The Linker

2026-02-08 17:46 · internals-for-interns.com

In the previous post, we watched the compiler transform optimized SSA into machine code bytes and package them into object files. Each .o file contains the compiled code for one package—complete with machine instructions, symbol definitions, and relocations marking addresses that need fixing.

But your program isn’t just one package. Even a simple “hello world” imports fmt, which imports io, os, reflect, and dozens of other packages. Each package is compiled separately into its own object file. None of these files can run on their own.

This is where the linker comes in. The linker’s job is to take all these separate object files and combine them into a single executable that your operating system can run.

Let me show you what the linker does and how it does it.

At a high level, the linker performs four main tasks:

1. Symbol Resolution: Your code calls fmt.Println, but that function is defined in a different object file. The linker finds all these cross-file references and connects them.

2. Relocation: Remember those placeholder addresses in the machine code? The linker patches them with actual addresses now that it knows where everything will live in memory.

3. Dead Code Elimination: If you import a package but only use one function, the linker removes all the unused functions. This keeps your binary small.

4. Layout and Executable Generation: The linker decides where in memory each piece of code and data will live, then writes out an executable in the format your OS expects (ELF on Linux, Mach-O on macOS, PE on Windows).

Let’s walk through each of these steps, starting with how the linker figures out what symbols exist and where they live.

Symbol Resolution

Every object file contains symbols—names that identify functions, global variables, and other program elements. Some symbols are defined in a file (the actual code or data lives there), while others are just referenced (the code uses them, but they live somewhere else).

Let me show you what I mean:

// main.go
package main

import "fmt"

func main() {
 fmt.Println("Hello")
}

When compiled, your main.o contains main.main—that’s your function, complete with machine code. But it also references fmt.Println, and that code isn’t here. It’s just a name pointing somewhere else.

Note: In practice, fmt.Println gets inlined by the compiler, so there’s no actual cross-package reference in this case. But the concept holds for functions that don’t get inlined.

Over in fmt.o, you’ll find the actual fmt.Println implementation. But that file references io.Writer, os.Stdout, and dozens more symbols from other packages.

Each package defines some symbols and references others. The linker needs to match all these references with their definitions. To do that, it first needs to build a complete picture of what exists.
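
To make that matching concrete, here is a toy model of the bookkeeping. The types and names (`objFile`, `resolve`) are invented for illustration and are not the real cmd/link data structures:

```go
package main

import "fmt"

// A toy model of symbol resolution: each object file defines some
// symbols and references others. objFile and resolve are illustrative,
// not the real types from src/cmd/link/internal/loader.
type objFile struct {
	name       string
	defines    []string
	references []string
}

// resolve returns every reference that has no definition in any file.
func resolve(objs []objFile) []string {
	defined := map[string]bool{}
	for _, o := range objs {
		for _, d := range o.defines {
			defined[d] = true
		}
	}
	var undefined []string
	for _, o := range objs {
		for _, r := range o.references {
			if !defined[r] {
				undefined = append(undefined, r)
			}
		}
	}
	return undefined
}

func main() {
	objs := []objFile{
		{name: "main.o", defines: []string{"main.main"}, references: []string{"fmt.Println"}},
		{name: "fmt.o", defines: []string{"fmt.Println"}, references: []string{"os.Stdout"}},
	}
	fmt.Println(resolve(objs)) // [os.Stdout] in this toy example
}
```

A real linker reports any leftover references as "undefined symbol" errors.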

Before the linker can do anything useful, it needs to know about every symbol in your program. That’s the job of the Loader (src/cmd/link/internal/loader/).

The Loader reads object files and builds a unified index of all symbols. It starts with your main package, reads that object file, and discovers its imports. Your code uses fmt, so now fmt needs to be loaded. And fmt imports io, os, reflect, and others. The Loader keeps following imports until it has found every package your program depends on. The runtime package always gets loaded too, since every Go program needs it.

As it reads each file, the Loader records every symbol and connects references to definitions. When your code calls a function from another package, the object file just says “I need this symbol.” The Loader looks it up and records where it points. Most symbols are identified by name, but some—like string literals—are content-addressable, identified by a hash of their contents. If two packages both use "Hello", they produce the same hash and share a single copy in the final binary.
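
The content-addressing idea fits in a few lines. The hash choice and helper name here are illustrative (the real linker uses its own hashing scheme), but the principle is the same: identical bytes hash to the same key, so only one copy survives:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// uniqueBlobs is a simplified model of content-addressable symbols:
// deduplicate data blobs by the hash of their contents.
func uniqueBlobs(blobs [][]byte) int {
	seen := map[[32]byte]bool{}
	for _, b := range blobs {
		seen[sha256.Sum256(b)] = true
	}
	return len(seen)
}

func main() {
	blobs := [][]byte{
		[]byte("Hello"), // string literal from package a
		[]byte("Hello"), // same literal from package b, deduplicated
		[]byte("World"),
	}
	fmt.Println(uniqueBlobs(blobs)) // 2
}
```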

The index itself is straightforward. Each symbol gets a unique integer ID. The Loader maintains a few key data structures: a mapping from symbol ID to its location (which object file, which local index within that file), lookup tables to go from a name like fmt.Println to its ID, and space for attributes like “is this symbol reachable?” that get filled in later. The actual code and data bytes stay in the object files—the Loader just records where to find them.

By the end, the Loader has a complete picture: every symbol indexed, every reference resolved. You can find the loading logic in src/cmd/link/internal/loader/loader.go.

But having everything indexed doesn’t mean we need everything. Time to trim the fat.

Dead Code Elimination

The Loader indexed every symbol from every package, but you probably don’t use all of them. If you import fmt just to call Println, you don’t need the dozens of other functions in that package.

The linker solves this with dead code elimination. Starting from main.main, it traces through every function call and every global variable access, setting that “is this symbol reachable?” attribute we mentioned earlier. When it’s done, anything not marked gets dropped. If you imported a package with fifty functions but only called one, the other forty-nine disappear.
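
A minimal sketch of that reachability walk. The call graph here is hand-written for illustration; the real linker derives edges from relocations and handles subtleties like methods reachable through interfaces:

```go
package main

import "fmt"

// reachable marks every symbol transitively callable from entry,
// using a simple worklist traversal of the call graph.
func reachable(calls map[string][]string, entry string) map[string]bool {
	marked := map[string]bool{}
	work := []string{entry}
	for len(work) > 0 {
		sym := work[len(work)-1]
		work = work[:len(work)-1]
		if marked[sym] {
			continue
		}
		marked[sym] = true
		work = append(work, calls[sym]...)
	}
	return marked
}

func main() {
	calls := map[string][]string{
		"main.main":   {"fmt.Println"},
		"fmt.Println": {"fmt.Fprintln"},
		"fmt.Sprintf": {"fmt.Fprintf"}, // never called, so it gets dropped
	}
	live := reachable(calls, "main.main")
	fmt.Println(live["fmt.Println"], live["fmt.Sprintf"]) // true false
}
```

Anything not in the marked set never makes it into the binary.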

This is why Go binaries stay reasonably small despite static linking. You can find this logic in src/cmd/link/internal/ld/deadcode.go.

With symbols resolved and dead code eliminated, the linker knows exactly what needs to go into the final binary. But there’s a problem: the machine code still has placeholder addresses for symbols that live in other packages.

Relocation

When the compiler generated machine code for a package, it knew about symbols within that package but not about symbols defined elsewhere. Every call to a function in another package, every reference to a variable from an imported module—those are just placeholders saying “fill this in later.” The linker’s job now is to figure out where all these cross-package symbols actually go, and then patch those placeholders with real addresses.

This creates a chicken-and-egg situation: you can’t fill in the addresses until you know where everything is, but you need to lay out all the code and data first to know where everything is. The linker solves this in two passes: first assign addresses to everything, then go back and patch the code.

Address Assignment

The linker organizes memory into sections based on what each symbol contains and how it will be used:

(Figure: memory sections layout)

The linker processes symbols one by one, placing each at the next available address in its section. Functions get aligned to appropriate boundaries (typically 16 or 32 bytes depending on the architecture) for cache efficiency. Read-only data gets grouped together so it can be protected from modification. The .bss section is special—it takes no space in the file since everything there is just zeros, but the OS allocates the memory when the program loads. By the end of this pass, every symbol has a concrete address.
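
The placement loop can be sketched like this. The base address, sizes, and alignment are made up for illustration; the real linker also handles per-symbol alignment and section boundaries:

```go
package main

import "fmt"

// sym is a symbol with a size in bytes; its address is decided here.
type sym struct {
	name string
	size uint64
}

// alignUp rounds addr up to the next multiple of align (a power of two).
func alignUp(addr, align uint64) uint64 {
	return (addr + align - 1) &^ (align - 1)
}

// assign places each symbol at the next aligned address in a section.
func assign(base, align uint64, syms []sym) map[string]uint64 {
	addrs := map[string]uint64{}
	addr := base
	for _, s := range syms {
		addr = alignUp(addr, align)
		addrs[s.name] = addr
		addr += s.size
	}
	return addrs
}

func main() {
	addrs := assign(0x401000, 16, []sym{
		{"main.main", 20},
		{"greeter.Hello", 40},
	})
	fmt.Printf("%#x %#x\n", addrs["main.main"], addrs["greeter.Hello"])
	// main.main at 0x401000; greeter.Hello at 0x401020 (20 rounds up to 32)
}
```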

Now that everything has an address, it’s time to fix up all those placeholders.

Patching Relocations

Each placeholder has an associated relocation record saying what symbol’s address belongs there. The linker goes through every relocation, looks up the target’s address, and patches it in. For function calls, the CPU expects a relative offset (“jump forward 500 bytes”), so the linker computes the distance between the call site and the target. For global variable references, it writes the absolute address directly. When this pass finishes, the machine code is complete—every placeholder replaced with a real address.
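
For an x86-64 CALL, the patch amounts to writing a little-endian 32-bit offset right after the opcode. A sketch, assuming a 5-byte E8 call instruction (a simplification of the real relocation code, which handles many relocation kinds):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// patchCall fills in the rel32 operand of a 5-byte x86-64 CALL: the
// offset is measured from the end of the instruction to the target.
func patchCall(code []byte, callOffset int, callAddr, targetAddr uint64) {
	rel := int32(int64(targetAddr) - int64(callAddr+5))
	binary.LittleEndian.PutUint32(code[callOffset+1:], uint32(rel))
}

func main() {
	// e8 00 00 00 00: CALL with a placeholder offset
	code := []byte{0xe8, 0x00, 0x00, 0x00, 0x00}
	// Addresses borrowed from the example later in the post.
	patchCall(code, 0, 0x491b2a, 0x491ac0)
	fmt.Printf("% x\n", code) // e8 91 ff ff ff (a small backward jump)
}
```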

The linker now has fully-linked machine code. All that’s left is packaging it into a file the operating system can actually run.

Generating the Executable

Finally, the linker organizes everything into sections, groups them into segments, and writes the executable file. Let’s look at how this organization works.

Sections

The linker groups symbols into sections based on what they are and how they’ll be used:

  • .text holds executable code—your functions, marked read-execute
  • .rodata holds read-only data—string literals, constants, type descriptors
  • .data holds initialized global variables—read-write
  • .bss holds zero-initialized globals—read-write, but takes no space in the file
  • .noptrdata and .noptrbss hold data the garbage collector can ignore (no pointers)

Go also generates special sections for runtime metadata. The .gopclntab section contains the PC-line table—the mapping from program counter values to source file and line numbers that makes stack traces work and enables reflection.

But sections are the linker’s internal organization. The operating system thinks in terms of segments.

Segments

Sections get grouped into segments for loading. While sections are the linker’s view of the data, segments are the OS loader’s view. The OS doesn’t care about individual sections; it maps entire segments into memory with the right permissions.

A typical Go executable has a text segment (code + read-only data, mapped read-execute) and a data segment (writable data + BSS, mapped read-write). On some platforms there’s also a separate read-only data segment between them for .rodata.

The segment layout matters for security. Modern systems use W^X (write xor execute)—memory can be writable or executable, but not both. By separating code and data into different segments with different permissions, the linker enables this protection.

With segments defined, the linker writes everything to disk in a format the OS understands.

File Format and Loading

Different operating systems use different executable formats—Linux uses ELF, macOS uses Mach-O, Windows uses PE. Despite the differences, they all contain:

  • A header identifying the file format and architecture
  • Program headers (or equivalent) describing segments to load
  • Section headers describing the contents for debuggers and tools
  • The actual code and data bytes
  • Optionally, debug information (DWARF format)

One interesting detail: the header specifies an entry point—where the OS starts executing—and it’s not your main function. It’s Go runtime startup code like _rt0_amd64_linux, which sets up the stack, initializes the memory allocator, starts the garbage collector, and launches the scheduler before finally calling your main.main.

You can find the output code in src/cmd/link/internal/ld/elf.go and similar files for other formats. If you want to explore the final structure of a Go binary in more detail, check out my talk Deep dive into a Go binary from GopherCon UK.

Everything we’ve discussed so far assumes the default case: a standalone executable with everything bundled in. But the linker can produce other kinds of output too.

Go prefers static linking—bundling everything into one self-contained binary. The Go runtime, the standard library, all your dependencies: they’re all compiled in. No external dependencies means you can copy the binary to another machine and it just works.

When you use cgo, Go has to dynamically link against system libraries like libc. The linker adds a .dynamic section with symbol tables, library names, and relocation entries. It also specifies an interpreter—the path to the dynamic linker (/lib64/ld-linux-x86-64.so.2 on Linux). When you run the program, the kernel loads the dynamic linker first, which resolves external symbols and loads shared libraries before jumping to your code.

With -buildmode flags, the linker can produce other output types: C-compatible static libraries (c-archive), shared libraries (c-shared), or Go plugins (plugin). Each mode changes what gets exported, how the runtime initializes, and what file format gets written.

Now that we’ve seen all the pieces, let’s watch them work together on a concrete example.

Walking Through a Complete Example

Let’s trace a simple program with two packages through the entire linking process.

main.go:

package main

import "example/greeter"

func main() {
 greeter.Hello()
}

greeter/greeter.go:

package greeter

import "fmt"

//go:noinline
func Hello() {
 fmt.Println("Hello")
}

Note: The //go:noinline directive prevents the compiler from inlining Hello into main.main. Without it, the compiler would inline the function and there would be no cross-package call for the linker to resolve.

Let’s follow this program through each phase of linking.

After Compilation

The compiler produces separate object files. main.o contains main.main and has a reference to example/greeter.Hello—it calls that function but doesn’t have the code. There’s a relocation marking where the call address needs to be filled in.

greeter.o contains example/greeter.Hello, which in turn references fmt.Fprintln (that’s what fmt.Println calls internally). And fmt.a (the archive for the fmt package) has the actual implementation, along with references to io.Writer, os.Stdout, and more.

The linker starts by loading all these pieces and figuring out what’s what.

Loading and Resolving

The linker loads all these files and builds a symbol table. Note that symbol names include the full module path:

Symbol Table:
  main.main              → defined in main.o
  example/greeter.Hello  → defined in greeter.o
  fmt.Fprintln           → defined in fmt.a
  (plus hundreds more from runtime and std library)

Every reference can be matched to a definition. If something were missing, the linker would stop here with an undefined symbol error.

Next, the linker figures out what’s actually used.

Dead Code Elimination

Starting from main.main, the linker traces through all the calls:

main.main → calls example/greeter.Hello
example/greeter.Hello → calls fmt.Fprintln
fmt.Fprintln → calls io.Writer methods, uses os.Stdout
...

Everything in this chain is marked reachable. Anything not in the chain—functions from packages you imported but never actually used—gets dropped.

With the set of reachable symbols determined, the linker assigns each one an address.

Assigning Addresses

Now the linker lays out all the reachable symbols in memory. Here’s what it looks like for our example (addresses from an actual build):

Text section (starting at 0x401000):
  0x46f1e0: _rt0_amd64_linux (entry point)
  0x439040: runtime.main
  0x491b20: main.main
  0x491ac0: example/greeter.Hello
  0x48cac0: fmt.Fprintln
  ...

Data section (starting at 0x554000):
  0x55e148: os.Stdout
  ...

Now the linker can patch all the placeholder addresses in the machine code.

Patching Relocations

With addresses assigned, the linker goes back and fills in all the placeholders.

In main.main, there’s a call to example/greeter.Hello. We can see it in the disassembly:

TEXT main.main(SB)
  0x491b20  CMPQ SP, 0x10(R14)
  0x491b24  JBE 0x491b31
  0x491b26  PUSHQ BP
  0x491b27  MOVQ SP, BP
  0x491b2a  CALL example/greeter.Hello(SB)  ← patched with offset to 0x491ac0
  0x491b2f  POPQ BP
  0x491b30  RET

The CALL instruction at 0x491b2a contains a relative offset that jumps to example/greeter.Hello at 0x491ac0. Same thing for the call from greeter.Hello to fmt.Fprintln—the linker computes the offset and patches it in.

Now all the jumps and calls point to the right places.

All that’s left is writing the final file.

Writing the Executable

Finally, the linker writes everything out. On Linux, we can inspect the result with readelf (on macOS, use otool -h):

$ readelf -h ./example
ELF Header:
 Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
 Class: ELF64
 Data: 2's complement, little endian
 Type: EXEC (Executable file)
 Machine: Advanced Micro Devices X86-64
 Entry point address: 0x46f1e0
 Number of program headers: 6
 Number of section headers: 25
 ...

There it is—a complete, standalone executable. The entry point 0x46f1e0 is _rt0_amd64_linux, the runtime startup code that will eventually call our main.main.

If you want to see this happening on your own code, there are some useful commands to explore.

Try It Yourself

If you want to peek behind the curtain, there are a few commands that let you see what the linker is doing.

To watch the linker work, pass -v through ldflags:

$ go build -ldflags="-v" .
# example
build mode: exe, symbol table: on, DWARF: on
HEADER = -H5 -T0x401000 -R0x1000
107437 symbols, 20441 reachable
 48122 package symbols, 39987 hashed symbols, 14790 non-package symbols, 4538 external symbols
112153 liveness data

You’ll see how many symbols were loaded, how many are reachable after dead code elimination, and other build information.

Once you have a binary, you can inspect its symbol table with nm:

go tool nm ./example | less

This dumps every symbol in the executable along with its address. It’s a lot of output—even our simple program has over 2000 symbols from the runtime.

To see how the sections are laid out in memory, use your platform’s binary inspection tool:

readelf -S ./example # Linux
otool -l ./example # macOS

And if you want to see the entire build process, including the exact link command:

go clean && go build -x .

The go clean ensures you get the full output—without it, cached builds might skip steps.

This prints every command the go tool runs. You’ll see the compiler invocations, then the linker invocation with all its flags. It’s a good way to understand what’s happening under go build.

Let’s wrap up what we’ve learned.

The linker is the final step in the compilation process. It takes separate object files and combines them into a single executable:

  • Symbol Resolution: The Loader builds a global index of every symbol in your program, following imports recursively and connecting references to definitions. Content-addressable symbols let identical data (like string literals) be shared across packages.

  • Dead Code Elimination: Starting from main.main, the linker traces reachability and drops everything that isn’t used. This is why Go binaries stay reasonably small despite static linking.

  • Relocation: The linker assigns each symbol a concrete address, organizing them into sections (.text, .rodata, .data, .bss), then patches all the placeholder addresses in the machine code.

  • Executable Generation: Sections get grouped into segments with appropriate permissions (W^X), and the linker writes everything out in the OS-specific format (ELF, Mach-O, PE). The entry point isn’t your main—it’s runtime startup code that initializes the Go runtime before calling your code.

Go’s linker also handles different build modes—from the default statically-linked executable to C archives, shared libraries, and plugins.

If you want to dive deeper into the linker, explore src/cmd/link/internal/ld/. The code is well documented, and seeing how a real production linker works is fascinating.

And with that, we’ve completed our journey through the Go compiler! From source code through scanning, parsing, type checking, IR optimization, SSA transformation, code generation, and finally linking—your Go program is now a standalone executable ready to run.

But the story doesn’t end here. That executable contains the Go runtime: the scheduler that manages goroutines, the garbage collector that reclaims memory, the memory allocator, and all the machinery that makes Go’s concurrency model work. In the next series, we’ll explore how the runtime brings your program to life. Stay tuned!



Comments

  • By jjcm 2026-02-14 9:12 (9 replies)

    This is entirely tangential to the article, but I’ve been coding in golang now going on 5 years.

    For four of those years, I was a reluctant user. In the last year I’ve grown to love golang for backend web work.

    I find it to be one of the most bulletproof languages for agentic coding. I have two main hypotheses as to why:

    - very solid corpus of well-written code for training data. Compare this to vanilla js or php - I find agents do a very poor job with both of these due to what I suspect is poorly written code that it’s been trained on.

    - extremely self documenting, due to structs giving agents really solid context on what the shape of the data is

    In any file an agent is making edits in, it has all the context it needs in the file, and it has training data that shows how to edit it with great best practices.

    My main gripe with go used to be that it was overly verbose, but now I actually find that to be a benefit as it greatly helps agents. Would recommend trying it out for your next project if you haven’t given it a spin.

    • By JetSetIlly 2026-02-14 12:01 (1 reply)

      Interesting. I've only dipped my toe in the AI waters but my initial experience with a Go project wasn't good.

      I tried out the latest Claude model last weekend. As a test I asked it to identify areas for performance improvement in one of my projects. One of the areas looked significant and truth be told, was an area I expected to see in the list.

      I asked it to implement the fix. It was a dozen or so lines and I could see straightaway that it had introduced a race condition. I tested it and sure enough, there was a race condition.

      I told it about the problem and it suggested a further fix that didn't solve the race condition at all. In fact, the second fix only tried to hide the problem.

      I don't doubt you can use these tools well, but it's far too easy to use them poorly. There are no guard rails. I also believe that they are marketed without any care that they can be used poorly.

      Whether Go is a better language for agentic programming or not, I don't know. But it may be to do with what the language is being used for. My example was a desktop GUI application and there'll be far fewer examples of those types of application written in Go.

      • By wild_egg 2026-02-14 13:11 (6 replies)

        You need to be telling it to create reproduction test cases first and iterate until it's truly solved. There's no need for you to manually be testing that sort of thing.

        The key to success with agents is tight, correct feedback loops so they can validate their own work. Go has great tooling for debugging race conditions. Tell it to leverage those properly and it shouldn't have any problems solving it unless you steer it off course.

        • By epolanski 2026-02-14 14:23 (1 reply)

          +1 half the time I see such posts the answer is "harness".

          Put the LLM in a situation where it can test and reason about its results.

          • By JetSetIlly 2026-02-14 14:29

            I do have a test harness. That's how I could show that the code suggested was poor.

            If you mean, put the LLM in the test harness. Sure, I accept that that's the best way to use the tools. The problem is that there's nothing requiring me or anyone else to do that.

        • By Someone 2026-02-14 15:51 (1 reply)

          If that’s what you have to do, that makes LLMs look more like advanced fuzzers that take textual descriptions as input (“find code that segfaults calling x from multiple threads”, followed by “find changes that make the tests succeed again”) than like something truly intelligent. Or, maybe, we should see them as diligent juniors who never get tired.

          • By wild_egg 2026-02-14 18:01

            I don't see any problems with either of those framings.

            It really doesn't matter at all whether these things are "truly intelligent". They give me functioning code that meets my requirements. If standard fuzzers or search algorithms could do the same, I would use those too.

        • By JetSetIlly 2026-02-14 14:20 (2 replies)

          I accept what you say about the best way to use these agents. But my worry is that there is nothing that requires people to use them in that way. I was deliberately vague and general in my test. I don't think how Claude responded under those conditions was good at all.

          I guess I just don't see what the point of these tools are. If I was to guide the tool in the way you describe, I don't see how that's better than just thinking about and writing the code myself.

          I'm prepared to be shown differently of course, but I remain highly sceptical.

          • By wild_egg 2026-02-14 18:41 (1 reply)

            Just want to say upfront: this mindset is completely baffling to me.

            Someone gives you a hammer. You've never seen one before. They tell you it's a great new tool with so many ways to use it. So you hook a bag on both ends and use it to carry your groceries home.

            You hear lots of people are using their own hammers to make furniture and fix things around the home.

            Your response is "I accept what you say about the best way to use these hammers. But my worry is that there is nothing that requires people to use them in that way."

            These things are not intelligent. They're just tools. If you don't use a guide with your band saw, you aren't going to get straight cuts. If you want straight cuts from your AI, you need the right structure around it to keep it on track.

            Incidentally, those structures are also the sorts of things that greatly benefit human programmers.

            • By JetSetIlly 2026-02-14 19:21

              "These things are not intelligent. They're just tools."

              Correct. But they are being marketed as being intelligent and can easily convince a casual observer that they are through the confidence of their responses. I think that's a problem. I think AI companies are encouraging people to use these tools irresponsibly. I think the tools should be improved so they can't be misused.

              "Incidentally, those structures are also the sorts of things that greatly benefit human programmers."

              Correct. And that's why I have testing in place and why I used it to show that the race condition had been introduced.

          • By strawhatguy 2026-02-14 15:03 (1 reply)

            Okay. If you’re being vague, you get vague results.

            Golang and Claude have worked well for me, on existing production codebases, because I tell it precisely what I want and it does it.

            I’ve never found generic “find performance issues” just by reading the code helpful.

            Write specifications, give it freedom to implement, and it can surprise you.

            Hell once it thought of how to backfill existing data with the change I was making, completely unasked. And I’m like that’s awesome

            • By JetSetIlly 2026-02-14 15:20

              "Okay. If you’re being vague, you get vague results."

              No. I was vague and got a concrete suggestion.

              I have no issue with people using Claude in an optimal way. The problem is that it's too easy to use in a poor way.

              My example was to test my own curiosity about whether these tools live up to the claims that they'll be replacing programmers. On the evidence I've seen I don't believe they will and I don't see how Go is any different to any other language in that regard.

              IMO, for tools like Claude to be truly useful, they need to understand their own limitations and refuse to work unless the conditions are correct. As you say, it works best when you tell it precisely what you want. So why doesn't Claude recognise when you're not being precise and refuse to work until you are?

              To reiterate, I think coding assistants are great when used in the optimal way.

        • By kitd 2026-02-14 19:36

          TDD and the coding agent: a match made in heaven.

          It is Valentine's Day after all.

        • By treyd 2026-02-14 13:25

          If only there was a way to prevent race conditions by design as part of the language's type system, and in a way that provides rich and detailed error messages that allow coding agents to troubleshoot issues directly (without having to be prompted to write/run tests that just check for race conditions).

    • By reactordev 2026-02-14 12:02

      Go’s design philosophy actually aligns with AI’s current limitations very well.

      AI has trouble with deep complexity, go is simple by design. With usually only one or two correct paths instruction wise. Architecturally you can design your src however but there’s a pretty well established standard.

    • By epolanski 2026-02-14 14:22

      I don't believe the "corpus" argument that much.

      I have been extending the Elm language with Effect semantics (à la ZIO/Rio/Effect-ts) for a new language called Eelm (extended-Elm or effectful-elm), and both Haskell (the language that the Elm compiler is written in) and Eelm (the target language, now with some new fancy capabilities) shouldn't have a particularly relevant corpus of code.

      Yet, my experiments show that Opus 4.6 is terrific at understanding and authoring both Haskell and Eelm.

      Why? I think it stems from the properties of these languages themselves: no mutability makes them easier to reason about, fully statically typed, excellent compiler and diagnostics. On top of that the syntax is rather small.

    • By jespino 2026-02-14 12:50

      Two things make it work so well with agents. Go is a language focused on simplicity, and gofmt plus the Go coding style mean that almost all Go code looks familiar, because everyone writes code in a very consistent style. Those two things make the experience pleasant and the work easier for the LLM.

    • By hippo22 2026-02-14 16:42

      I have had good experience with Go, but I've also had good results with TypeScript. Compile-time checks are very important to getting good results. I don't think the simplicity of the language matters as much as the LLM being generally aware of the language via training data and being able to validate the output via compilation.

    • By oncallthrow 2026-02-14 11:17 (1 reply)

      Yeah in my experience Claude is significantly better at writing go than other languages I’ve tried (Python, typescript)

      • By 9rx 2026-02-14 16:00

        Same goes for humans. There are some wild exceptions, but most Go projects look like they were written by the same person.

    • By tejinderss 2026-02-14 10:56 (2 replies)

      I wonder how is the experience writing Rust or Zig with LLMs. I suspect zig might not have enough training data and rust might struggle with compile times and extra context required for borrow checker.

      • By jwxz 2026-02-14 12:35

        I found Opus 4.6 to be good at Zig.

        I got it to write me an rsync like CLI for copying files to/from an Android device using MTP, all in a single ~45 min sitting. It works incredibly well. OpenMTP was the only other free option on macOS. After being frustrated by it, I decided to try out Opus 4.6 and was pleasantly surprised.

        I later discovered that I could plug in a USB-C hard drive directly into the phone, but the program was nonetheless very useful.

      • By embedding-shape 2026-02-14 11:40

        > I wonder how is the experience writing Rust or Zig with LLMs

        I've had no issues with Rust, mostly (99% of the time) using codex with gpt-5.2 xhigh and does as well as any other language. Not sure why you think compile times would be an issue, the LLM doesn't really care if it takes 1 minute or 1 hour to compile, it's more of a "your hardware + project" issue than about the LLMs. Also haven't found it to struggle with borrow checker, if it screw up it sees the compilation errors, fixes it, just like with any other languages I've tried to use with LLMs.

    • By dizhn 2026-02-14 11:08

      I'm having similarly good results with go and agents. Another good language for it is flutter/dart in my experience.

    • By IhateAI_2 2026-02-14 9:32

      [dead]

  • By KingOfCoders 2026-02-14 14:10 (1 reply)

    Perfectly happy with Go, my "Go should do X" / "Go should have Y" days are over.

    But if I could have a little wish, "cargo check" would be it.

    • By 12345hn6789 2026-02-14 19:44

      Enums is mine.

      Going on year 4 working at $DAY_JOB and just last week we had a case where enums and also union types would have made things simpler.

  • By Surac 2026-02-14 8:50 (5 replies)

    I can see no difference to an ordinary linker. Anyone care to explain it to me?

    • By jespino 2026-02-14 12:46 (1 reply)

      Yes, it is not especially different from other linkers. It has some tasks building the final binary, including special sections in the binary, and it is more aware of the specifics of the Go language. But there is nothing extremely different from other linkers. The whole point of the series is to explain a real compiler, but in general, most of the parts of the go compiler are very widely used in other languages, like ssa, ast, escape analysis, inlining...

      • By froh 2026-02-14 18:20 (1 reply)

        when does golang create the final dynamic dispatch tables? isn't that the one thing that in golang needs real compute at final link time, beyond what a C linker would do? and where C++ has all information at compile time, while golang can only create the dispatch tables at link time?

        • By jespino 2026-02-14 18:45 (1 reply)

          Yes, there is some information that is written by the linker in the final data section of the binary, the itab, that is the interface table for the dynamic dispatching. AFAIK, it is done there because you need to know other packages structs and interfaces to have the whole picture and build that table, and that happens using the build cache.

          • By froh 2026-02-14 19:59

            yes, the interface tables! that was the word I didn't remember. and that is some computation going on there not "just" merging sections, and, in a normal static linker, wiring exports to imports, and not pulling in unneeded definitions (dead code elimination).

            the interface table computation is a golang speciality, a fascinating one.

            and the implementation of interface magic is disturbingly not mentioned in the article.

    • By gregwebs 2026-02-14 12:12 (1 reply)

      The difference is that Go has its own linker rather than using a system linker. Another article could explain the benefits of tighter integration and the drawbacks of this approach. Having its own toolchain I assume is part of what enables the easy cross compilation of Go.

      • By jrockway 2026-02-14 17:39

        You can actually make go spit out .o files and link it with your favorite linker. Bazel does this, if you ask it to.

        I played a lot with experimental linkers when I was trying to get build time down for our (well, $JOB-1's) large Go binary, but they didn't help that much. The toolchain that comes with Go is quite good.

    • By jenoer 2026-02-14 8:55

      What is there to explain? The author did not claim there is a difference in the article.

    • By pjmlp 2026-02-14 10:01

      Why should it be one?

    • By cloudhead 2026-02-14 9:22 (1 reply)

      The title is misleading

      • By jespino 2026-02-14 12:43

        Misleading in what way? This is the linker part of a series of posts about understanding the go compiler. I think there is not much room to be misleading.

HackerNews