How do you capture WHY engineering decisions were made, not just what?

2026-03-13 19:48

We recently onboarded a senior engineer: a strong hire, 8 years of experience. He spent 3 weeks playing code archaeologist just to understand WHY our codebase looks the way it does.

Not what the code does. That was fast. But the reasoning behind decisions:

- Why Redis over an in-memory cache?
- Why GraphQL for this one service but REST everywhere else?
- Why that strange exception in the auth flow for enterprise users?

Answers were buried in closed PRs with no descriptions, 18-month-old Slack threads, and the heads of two engineers who left last year.

We tried ADRs. Lasted 6 weeks. Nobody maintained them. We tried PR description templates. Ignored within a month. We have a Notion architecture doc. Last updated 14 months ago.

Every solution requires someone to manually write something. Nobody does.

Curious how teams at HN actually handle this:

1. Do you have a system that actually works long-term?
2. Has anyone automated any part of this?
3. Or is everyone quietly suffering through this on every new hire?


Comments

  • By lowenbjer 2026-03-13 20:32 (1 reply)

    My take after running engineering teams at multiple companies: documentation survives when it lives next to the code. File-level header comments explaining each component's purpose and role in the architecture. A good README tying it all together. If you compartmentalize architecture into folders, a README per folder. This works for humans, LLMs, and GitHub search alike.
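
    As a concrete illustration of a file-level header comment (the module, names, and limit value are all invented for the example), the idea is that the file answers "why does this exist and where does it sit" before anyone reads the code:

```python
# rate_limiter.py -- hypothetical example of a file-level header comment.
#
# Purpose: per-tenant request throttling for the public API.
# Role in the architecture: sits between the API gateway and the service
# handlers; every inbound request passes through check() first.
# See also: the README in this folder for how throttling fits the pipeline.

TENANT_LIMIT = 100  # requests per minute, per tenant (example value)


def check(tenant_id: str, current_count: int) -> bool:
    """Return True if the tenant is still under its per-minute limit."""
    return current_count < TENANT_LIMIT
```

    The header costs a few lines per file but travels with the code through every refactor, which is the point the parent comment is making.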

    ADRs, Notion docs, and Confluence pages die because they're separate from the code. Out of sight, out of mind.

    If you want to be really disciplined about it, set up an LLM-as-judge git hook that runs on each PR. It checks whether code changes are consistent with the existing documentation and blocks the merge if docs need updating. That way the enforcement is automated and you only need a little human discipline, not a lot.
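
    A minimal sketch of the mechanical half of such a hook, assuming the README-per-folder convention above. It only finds changed code files whose folder README was not also touched; the LLM-as-judge call that would then review the flagged files is deliberately omitted, since that part depends on whichever model API you wire in:

```python
"""Pre-merge doc-consistency check (sketch, not a full hook)."""
import subprocess
from pathlib import PurePosixPath


def changed_files(base: str = "origin/main") -> list[str]:
    """List files modified on this branch relative to `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]


def missing_doc_updates(paths: list[str]) -> list[str]:
    """Changed non-doc files whose sibling README.md was not also changed.

    These are the files you would hand to the LLM judge (omitted here)
    to decide whether the docs genuinely need updating.
    """
    changed = set(paths)
    flagged = []
    for path in paths:
        if path.endswith(".md"):
            continue
        readme = str(PurePosixPath(path).parent / "README.md")
        if readme not in changed:
            flagged.append(path)
    return flagged
```

    Running this in CI and blocking the merge on a non-empty flagged list gives you the enforcement loop with only the judge step left to plug in.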

    There's no way to avoid some discipline though. But the less friction you add, the more likely it sticks.

    • By zain__t 2026-03-13 20:37

      The git hook idea for enforcing doc updates is really interesting. Has that actually worked long-term for your team, or does it eventually get bypassed?

  • By vova_hn2 2026-03-13 21:15

    I suppose you are trying to "warm up" the audience before announcing your product, which is... fine, I guess.

    I also had an idea for a solution to this problem a long time ago.

    I wanted to build something that would let you record a meeting (at the company where I worked back then, such things were mostly discussed in person), transcribe it, and link parts of the conversation to relevant tickets, pull requests, and git commits.

    Back then the tech wasn't ready yet, but now it actually looks relatively easy to do.

    For now, I try to leave such breadcrumbs manually, whenever I can. For example, if the reason why a part of the code exists seems non-obvious to me, I will write an explanation in a comment/docstring and leave a link to a ticket or a ticket comment that provides additional context.
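
    For example, a breadcrumb like that (the function, behavior, and ticket ID below are all hypothetical) might look like:

```python
def refresh_enterprise_session(user):
    """Force a session re-validation for enterprise users on login.

    Non-obvious: enterprise SSO tokens can be revoked upstream without
    any notification to us, so we re-validate on every login instead of
    trusting the cached session. Context: ticket AUTH-1432 (hypothetical
    ID) and the incident writeup linked from it.
    """
    ...  # actual re-validation logic would go here
```

    The docstring records the "why" next to the code, and the ticket link carries the fuller discussion for anyone who needs it.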

  • By pxue 2026-03-13 23:00

    We use a service called Briefhq.

    It's an MCP/CLI you can hook up to Claude Code and Slack, and it integrates with GitHub.

    The harness lets you record decisions as contextual info you can pull up whenever you start a planning session.

    It also makes sure your decisions don't conflict with each other.

    I find myself talking through a decision I made months ago with it, updating it with any new decisions, and it just figures out how to merge everything.

    No extra workflow outside of this.

HackerNews