We should revisit literate programming in the agent era

2026-03-08 19:58 · silly.business

Literate programming is the idea that code should be intermingled with prose such that an uninformed reader could read a code base as a narrative, and come away with an understanding of how it works and what it does.

Although I have long been intrigued by this idea, and have found uses for it in a couple of different cases, I have found that in practice literate programming turns into a chore of maintaining two parallel narratives: the code itself, and the prose. This has obviously limited its adoption.

In practice, literate programming has most commonly been found in Jupyter notebooks in the data science community, where explanations live alongside calculations and their results in a web browser.

Frequent readers of this blog will be aware that Emacs Org Mode supports polyglot literate programming through its org-babel package, allowing execution of arbitrary languages with results captured back into the document, but this has remained a niche pattern for nerds like me.

Even for someone as enthusiastic about this pattern as I am, it becomes cumbersome to use Org as the source of truth for larger software projects, as the source code essentially becomes a compiled output, and after every edit in the Org file, the code must be re-extracted and placed into its destination ("tangled", in Org Mode parlance). Obviously this can be automated, but it's easy to get into annoying situations where you or your agent has edited the real source and it gets overwritten on the next tangle.
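For readers who haven't used it, here is a minimal sketch of what tangling looks like (the file and function names are made up): a source block carries a `:tangle` header argument naming its destination, and running `org-babel-tangle` (`C-c C-v t` in Emacs) extracts every such block into its target file.

```org
* Deployment helper
This function normalizes a path before we hand it to rsync.

#+begin_src sh :tangle scripts/deploy.sh
normalize_path() {
  realpath --relative-to="$PWD" "$1"
}
#+end_src
```

After tangling, scripts/deploy.sh contains the shell function; edit the Org file and retangle, and the script is regenerated.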

That said, I have had enough success using literate programming to keep track of my personal configuration that I have never been able to fully give up on the idea, even before the advent of LLMs.

For example: before coding agents, I had been adapting a pattern of using Org Mode for manual testing and note-taking. Instead of working on the command line, I would write the commands into my editor and execute them in place, editing each step until it was correct, so that when I was done I had a document explaining exactly the steps that were taken, with no extra note-taking. Combining the act of writing the note with running the test gives you the notes for free once the test is complete.
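A single step in this style might look like the following (the command and its recorded output are illustrative): with point in the block, `C-c C-c` executes it and Org inserts the output underneath.

```org
* Step 1: check that the service is up
We expect the health endpoint to return HTTP 200.

#+begin_src sh :results output
curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health
#+end_src

#+RESULTS:
: 200
```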

This is even more exciting now that we have coding agents. Claude and Kimi and friends all have a great grasp of Org Mode syntax; it's a forgiving markup language, and they are quite good at those. All the documentation is available online and was probably in the training data, and while a big downside of Org Mode is just how much syntax there is, that's no problem at all for a language model.

Now when I want to test a feature, I ask the clanker to write me a runbook in Org. Then I can review it – the prose captures the model's understanding of the intent behind each step, and the code blocks are interactively executable once I am done reviewing, either one at a time or the whole file like a script. The results are stored in the document, under the code, like in a Jupyter notebook.

I can edit the prose and ask the model to update the code, or edit the code and have the model carry the new meaning back into the text. Or ask the agent to change both simultaneously. The problem of maintaining two parallel narratives disappears.

The agent is told to handle tangling, and the problem of extraction goes away. The agent can be instructed with an AGENTS.md file to treat the Org Mode file as the source of truth, to always explain in prose what is going on, and to tangle before execution. The agent is very good at all of these things, and it never gets tired of re-explaining something in prose after a tweak to the code.
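As a sketch of what those instructions can look like (the wording and file names here are my own, not any standard), an AGENTS.md entry along these lines is enough:

```markdown
## Literate workflow

- `notes.org` is the source of truth; never edit the tangled output directly.
- Every source block must be preceded by prose explaining its intent.
- After editing any source block, tangle the file before executing anything.
- When the code changes, update the surrounding prose to match, and vice versa.
```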

The fundamental extra labor of literate programming, which I believe is why it is not widely practiced, is eliminated by the agent, and the workflow plays to what large language models are best at: translation and summarization.

As a benefit, the code base can now be exported into many formats for comfortable reading. This is especially important if the primary role of engineers is shifting from writing to reading.

I don't have data to support this, but I also suspect that literate programming will improve the quality of generated code, because the prose explaining the intent of each code block will appear in context alongside the code itself.

I have not personally had the opportunity to try this pattern yet on a larger, more serious codebase. So far, I have only been using this workflow for testing and for documenting manual processes, but I am thrilled by its application there.

I also recognize that the Org format is a limiting factor, due to its tight integration with Emacs. However, I have long believed that Org should escape Emacs. I would promote something like Markdown instead, but Markdown lacks the ability to include metadata. As usual in my posts about Emacs, though, it's not Emacs's specific implementation of the idea that matters most, even if in this case Org's implementation of literate programming genuinely excites me.

It is the idea itself that is exciting to me, not the tool.

With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?

I think that's a compelling question.



Comments

  • By palata 2026-03-08 23:45 | 1 reply

    I am not convinced.

    - Natural languages are ambiguous. That's the reason why we created programming languages. So the documentation around the code is generally ambiguous as well. Worse: it's not being executed, so it can get out of date (sometimes in subtle ways).

    - LLMs are trained on tons of source code, which is arguably a smaller space than natural languages. My experience is that LLMs are really good at e.g. translating code between two programming languages. But translating my prompts to code is not working as well, because my prompts are in natural languages, and hence ambiguous.

    - I wonder if it is a question of "natural languages vs programming languages" or "bad code vs good code". I could totally imagine that documenting bad code helps the LLMs (and the humans) understand the intent, while documenting good code actually adds ambiguity.

    What I learned is that we write code for humans to read. Good code is code that clearly expresses the intent. If there is a need to comment the code all over the place, to me it means that the code is maybe not as good as it should be :-).

    Of course there is an argument to make that the quality of code is generally getting worse every year, and therefore there is more and more a need for documentation around it because it's getting hard to understand what the hell the author wanted to do.

    • By bottd 2026-03-09 0:00 | 6 replies

      > If there is a need to comment the code all over the place, to me it means that the code is maybe not as good as it should be :-)

      If good code was enough on its own we would read the source instead of documentation. I believe part of good software is good documentation. The prose of literate source is aimed at documentation, not line-level comments about implementation.

      • By wvenable 2026-03-09 5:15 | 2 replies

        > If good code was enough on its own we would read the source instead of documentation.

        That's 100% how I work -- reading the source. If the code is confusing, the code needs to be fixed.

        • By kalaksi 2026-03-09 7:25 | 2 replies

          Confusing code is one thing, but projects with more complex requirements or edge cases benefit from additional comments and documentation. Not everything is easily inferred from code or can be easily found in a large codebase. You can also describe e.g. chosen tradeoffs.

          • By habinero 2026-03-09 7:46 | 2 replies

            There's no way around just learning the codebase. I have never seen code documentation that was complete or correct, let alone both.

            • By ninalanyon 2026-03-09 11:33 | 1 reply

              I have written code that was correct and necessarily written the way it was, only to have it repeatedly altered by well-meaning colleagues who thought it looked wrong, inefficient, or unidiomatic. Eventually I had to fill it with warning comments and write a substantial essay explaining why it had to be the way it was.

              Code tells you what is happening but it doesn't always do it so that it is easy to understand and it almost never tells you why something is the way it is.

              • By palata 2026-03-09 11:43 | 1 reply

                Difficult to say without an example, but "code isn't enough" is just one possible conclusion in this case. Another one could be that the code is not actually as good as expected, and another one is that the colleagues may need to... do something about it.

                An obvious example I have is CMake. I have seen so many people complaining about CMake being incomprehensible, refactoring it to make it terrible, even wrapping it in Makefiles (and then wrapping that in Dockerfiles). But the problem wasn't the original CMakeLists or a lack of comments in it. The problem was that those developers had absolutely no clue about how CMake works, and felt like they should spend a few hours modifying it instead of spending a few hours understanding it.

                However, I do agree that sometimes there is a need for a comment because something is genuinely tricky. But that is rare enough that I call it "a comment" and not "literate programming".

                • By tonyedgecombe 2026-03-09 13:06 | 1 reply

                  I always think the biggest mistake is using CMake in the first place. I’ve never come across a project as convoluted and poorly documented as it.

                  • By palata 2026-03-09 14:43

                    What do you mean by "poorly documented"? I have been using it for 20 years, I have yet to find something that is not documented.

                    As for convoluted, I don't find it harder than the other build systems I use.

                    Really the problem I have with CMake is the amount of terribly-written CMakeLists. The norm seems to be to not know the basics of CMake but to still write a mess and then complain about CMake. If people wrote C the way they write CMake, we wouldn't blame the language.

            • By actionfromafar 2026-03-09 11:12 | 2 replies

              But the documentation can really help in telling why we are doing things. That also seeps into naming things like classes. If that were not so, we'd just name everything Class1, Class2, Method1, Method2 and so on.

              • By palata 2026-03-09 11:44 | 1 reply

                My point is that if your code is well written, it is self-documenting. Obviously Class1 and var2 are not self-documenting.

                • By dinfinity 2026-03-09 20:13 | 1 reply

                  The code is what it does. The comments should contain what it's supposed to do.

                  Even if you give them equal roles, self-documenting code versus commented code is like having data on one disk versus having data in a RAID array.

                  Remember: Redundancy is a feature. Mismatches are information. Consider this:

                      // Calculate the sum of one and one
                      sum = 1 + 2;

                  You don't have to know anything else to see that something is wrong here. It could be that the comment is outdated, which has no direct effects and is easily solved. It could be that this is a bug in the code. In any case it is information and a great starting point for looking into a possible problem (with a simple git blame). Again, without needing any context, knowledge of the project or external documentation.

                  My take on developers arguing for self-documenting code is that they are undisciplined or do not use their tools well. The arguments against copious inline comments are "but people don't update them" and "I can see less of the code".

                  • By palata 2026-03-10 11:44 | 1 reply

                    > Redundancy is a feature. Mismatches are information. Consider this:

                    Respectfully, if someone wrote code like this, I wouldn't want to work with them. I mean next step is "I copy paste code instead of writing functions, and in the comment above I mention all the other copies, so that it's easy to check that they are all doing the same thing redundantly".

                    > The arguments against copious inline comments are "but people don't update them" and "I can see less of the code".

                    Well no, that's not my argument. I have been navigating code for 20 years and in good codebases, comments are rare and describe something "surprising". Good code is hardly surprising.

                    My problem with "literate programming" (which means "add a lot of comments in the implementation details") is that I find it hard to trust developers who genuinely cannot understand unsurprising code without comments. I am fine with a junior needing more time to learn, but after a few years if a developer cannot do it, it concerns me.

                    • By dinfinity 2026-03-10 11:58 | 1 reply

                      You did not engage with my main arguments. You should still do so.

                      1. Redundancy: "The code is what it does. The comments should contain what it's supposed to do. [...] You don't have to know anything else to see that something is wrong here." and specifically the concrete trivial (but effective) example.

                      2. "My take on developers arguing for self-documenting code is that they are undisciplined or do not use their tools well. The arguments against copious inline comments are "but people don't update them" and "I can see less of the code"."

                      > Respectfully, if someone wrote code like this, I wouldn't want to work with them. I mean next step is "I copy paste code [...]

                      This is a nonsensical slippery slope fallacy. In no way does that behavior follow from placing many comments in code. It also says nothing about the clearly demonstrated value of redundancy.

                      > I have been navigating code for 20 years and in good codebases, comments are rare and describe something "surprising".

                      Your definition of good here is circular. No argument on why they are good codebases. Did you measure how easy they were to maintain? How easy it was to onboard new developers? How many bugs it contained? Note also that correlation != causation: it might very well be that the good codebases you encountered were solo-projects by highly capable motivated developers and the comment-rich ones were complicated multi-developer projects with lots of developer churn.

                      > My problem with "literate programming" [...] is that I find it hard to trust developers who genuinely cannot understand unsurprising code without comments.

                      This is gatekeeping code by making it less understandable and essentially an admission that code with comments is easier to understand. I see the logic of this, but it is solving a problem in the wrong place. Developer competence should not be ascertained by intentionally making the code worse.

                      • By palata 2026-03-10 12:54

                        You talk as if you had scientific proof that literate programming is objectively better, and I was the weirdo contradicting it without bringing any scientific proof.

                        Fact is, you don't have any proof at all, you just have your intuition and experience. And I have mine.

                        > It also says nothing about the clearly demonstrated value of redundancy.

                        Clearly demonstrated, as in your example of "Calculate the sum of one and one"? I wouldn't call that a clear demonstration.

                        > This is gatekeeping code by making it less understandable

                        I don't feel like I am making it less understandable. My opinion is that a professional worker should have the required level of competence (otherwise they are not a professional in that field). In software engineering, we feed code to a compiler, and we trust that the compiler makes sure that the machine executes the code we write. The role of the software engineer is to understand that code.

                        Literate programming essentially says "I am incapable of writing code that is understandable, ever, so I always need to explain it in a natural language". Or "I am incapable of reading code, so I need it explained in a natural language". My experience is that good code is readable by competent software engineers without explaining everything. But not only that: code is more readable when it is more concise and not littered with comments.

                        > and essentially an admission that code with comments is easier to understand.

                        I disagree again. Code with comments is easier to understand for the people who cannot understand it without the comments. Now the question is, again: are those people competent to handle code professionally? Because if they don't understand the code without comments, many times they will just have to trust the comments. If they used the comments to actually understand the code, pretty quickly they would be competent enough to not require the comments. Which means that at the point where they need them, they are not yet professionals, but rather apprentices.

              • By samplifier 2026-03-09 11:23 | 1 reply

                    def reallyDumbIdeaByManagerWorkaroundMethodToGetCoverageToNinetyPercent(self):
                        """Dont worry, this is a clear description of the method."""
                        return False

                • By TuxSH 2026-03-09 11:48

                  You exaggerate, but in this situation, I think putting a link to a Jira ticket or Slack convo (or whatever) as comment is best

          • By seba_dos1 2026-03-09 13:45

            Exactly, that's why a good project will use comments sparingly and have them only where they matter to actually meaningfully augment the code. The rest is noise.

        • By dkersten 2026-03-09 11:40 | 1 reply

          Code alone can never describe intent or rationale.

          • By ithkuil 2026-03-09 11:59

            Indeed, you need both!

            But documentation should not go too deep in the "how" otherwise it risks telling a lie after a while as the code changes but the documentation lags.

      • By WillAdams 2026-03-09 1:19 | 2 replies

        https://diataxis.fr/

        (originally developed at: https://docs.divio.com/documentation-system/) --- divides documentation along two axes:

        - Action (Practical) vs. Cognition (Theoretical)

        - Acquisition (Studying) vs. Application (Working)

        which for my current project has resulted in:

        - readme.md --- (Overview) Explanation (understanding-oriented)

        - Templates (small source snippets) --- Tutorials (learning-oriented)

        - Literate Source (pdf) --- How-to Guides (problem-oriented)

        - Index (of the above pdf) --- Reference (information-oriented)

        • By zenoprax 2026-03-09 5:32

          I've been trying to implement this as closely as possible from scratch in an existing FOSS project:

          https://github.com/super-productivity/super-productivity/wik...

          Even with a well-described framework it is still hard to maintain proper boundaries and there is always a temptation to mix things together.

        • By ramses0 2026-03-09 13:03

              README => AGENTS.md
              HOWTO => SKILLS.md
              INFO => Plan/Arch/Guide
              REFERENCE => JavaDoc-ish
          
          I'm very near the idea that "LLM's are randomized compilers" and the human prompts should be 1000% more treated with care. Don't (necessarily) git commit the whole megabytes of token-blathering from the LLM, but keeping the human prompts:

          "Hey, we're going to work on Feature X... now some test cases... I've done more testing and Z is not covered... ok, now we'll extend to cover Case Y..."

          Let me hover over the 50-100 character commit message and then see the raw discussion (source) that led to the AI-generated (compiled) code. Allow AI.next to review the discussion/response/diff/tests and see if it can expose any flaws with the benefit of hindsight!

      • By AdieuToLogic 2026-03-09 4:07 | 2 replies

        > If good code was enough on its own we would read the source instead of documentation.

        An axiom I have long held regarding documenting code is:

          Code answers what it does, how it does it, when it is used, 
          and who uses it.  What it cannot answer is why it exists.  
          Comments accomplish this.

        • By eru 2026-03-09 4:27 | 2 replies

          An important addendum: code can sometimes, with a bit of extra thinking of part of the reader, answer the 'why' question. But it's even harder for code to answer the 'why not' question. Ie what were other approaches that we tried and that didn't work? Or what business requirements preclude these other approaches.

          • By AdieuToLogic 2026-03-09 5:06

            > But it's even harder for code to answer the 'why not' question.

            Great point. Well-placed documentation as to why an approach was not taken can be quite valuable.

            For example, documenting that domain events are persisted in the same DB transaction as changes to corresponding entities and then picked up by a different workflow instead of being sent immediately after a commit.

          • By 1718627440 2026-03-09 5:02 | 2 replies

            I don't think this is enough to completely obsolete comments, but a good chunk of that information can be encoded in a VCS. It encodes all past approaches and also contains the reasoning and the why-nots in annotations. You can also query this per line of your project.

            • By eru 2026-03-09 5:58 | 2 replies

              Git history is incredibly important, yes, but also limited.

              Practically, it only encodes information that made it into `main`, not what an author just mulled over in their head or just had a brief prototype for, or ran an unrelated toy simulation over.

              • By necovek 2026-03-09 7:15 | 2 replies

                In fairness to GP, they said VCS, not Git, even if they are somewhat synonymous today. Other VCSes did support graph histories.

                Still, "3rd dimension" code reasoning (backwards in time) has never been merged well with code editing.

                • By 1718627440 2026-03-10 11:25

                  > In fairness to GP, they said VCS, not Git

                  I did say VCS, but I also don't know what Git is missing in this regard.

                  > Other VCSes did support graph histories.

                  How does Git not?

                  > Still, "3rd dimension" code reasoning (backwards in time) has never been merged well with code editing.

                  Maybe it's not perfect, but Git seems to do that just fine for my taste. What is missing there?

                • By eru 2026-03-09 11:30 | 2 replies

                  > Other VCSes did support graph histories.

                  Yes, git ain't the only one, but apart from interface differences, they are pretty much equivalent in what they allow you to record in the history, I think?

                  Part of the problem here is that we use git for two only weakly correlated purposes:

                  - A history of the code

                  - Make nice and reviewable proposals for code changes ('Pull Request')

                  For the former, you want to be honest. For the latter, you want to present a polished 'lie'.

                  • By 1718627440 2026-03-10 11:26

                    > - A history of the code

                    Which is a causal history, not an editing log. So I don't perceive these to be actually different.

                  • By necovek 2026-03-09 16:31 | 1 reply

                    Not really. Launchpad.net does not have any public branches I could share atm as an example, but Bazaar (now Breezy) allowed having a nested "merge commit": your trunk would have "flattened" merge commits ("Merge branch foo"), and under it you could easily get to each individual commit by a developer ("Prototype", "Add test"...). It would really be shown as a tree, and the smartness was even richer.

                    This was made possible by using a DAG for commit storage and referencing, instead of relying on file contents and series of commits per reference. Merge behaviour was much smarter in case of diverging tip or criss-cross merges. But this ultimately was harder and slower to implement, and developers did not value this enough and they instead accepted the Git trade-offs.

                    So you seamlessly did both with a different VCS without splitting those up: in a sense, computers and software worried about that for us.

                    • By eru 2026-03-09 23:28 | 1 reply

                      I am not quite sure what you are describing here. Git's underlying commit graph is a DAG.

                      You can use different, custom merge-drivers (or whatever it's called) for Git to get the behaviour you describe here.

                      • By necovek 2026-03-10 6:04 | 1 reply

                        Certainly, but merges are treated differently by default, and getting to this sort of output would require "custom" tooling for things like "git log".

                        Whereas bzr just did the expected thing.

                        • By 1718627440 2026-03-10 11:45 | 1 reply

                          You can select whether you want the diff to the first or the second parent, which is the difference between collapsing and expanding merges. You can also completely collapse merges by showing first-parent-history.

                          Or I do not understand what you mean with "the expected thing".

                          • By eru 2026-03-11 1:41

                            Yes, `git log --first-parent` has been a godsend for coping with our team's messy non-cleaned up history.

              • By 1718627440 2026-03-09 6:25 | 2 replies

                If you throw away commit messages, that is on you, it is not a limitation of Git. If I am cleaning up before merging, I'm maybe rephrasing things, but I am not throwing that information away. I regularly push branches under 'draft/...' or 'fail/...' to the central project repository.

                • By eru 2026-03-09 11:31 | 2 replies

                  Sure, but you are still supposed to clean things up to make the life of the reviewer easier.

                  There's an inherent tension between honest history and a polished 'lie' to make the reviewer's life easier.

                  • By 1718627440 2026-03-10 11:09 | 1 reply

                    The WIP commits I initially recorded also didn't necessarily exist as such in my file system and often don't really work completely, so I don't know why the commit after a rebase is any more a lie than the commit before the rebase.

                    • By eru 2026-03-11 1:42

                      It's a 'lie' in the sense that you are optimising for telling a convenient and easy to understand story for the reviewer where each commit works atomically.

                  • By seba_dos1 2026-03-09 13:52

                    The "honest" historical record of when I decided to use "git commit" while working on something is 100% useless for anyone but me (for me it's 90% useless).

                    git tracks revisions, not history of file changes.

                • By kalaksi 2026-03-09 7:31 | 1 reply

                  Sounds easier (for everybody) to just use comments.

                  • By 1718627440 2026-03-10 11:11

                    You put past failed implementations in comments? That sounds like a nightmare. I'd rather only include a short description in the comment that can then link to the older implementation if necessary.

            • By crazygringo 2026-03-09 14:41 | 2 replies

              But why would you ever put that into your VCS as opposed to code comments?

              The VCS history has to be actively pulled up and reading through it is a slog, and history becomes exceptionally difficult to retrace in certain kinds of refactoring.

              In contrast, code comments are exactly what you need and no more, you can't accidentally miss them, and you don't have to do extra work to find them.

              I have never understood the idea of relying on code history instead of code comments. It seems like it's all downsides, zero upsides.

              • By 1718627440 2026-03-10 11:22

                Because comments are a bad fit to encode the evolution of code. We implemented systems to do that for a reason.

                > The VCS history has to be actively pulled up and reading through it is a slog

                Yes, but it also allows me to query history e.g. by function, which gets me to an understanding much faster than wading through the current state and trying to piece information together from the status quo and comments.

                > history becomes exceptionally difficult to retrace in certain kinds of refactoring.

                True, but these refactorings also make it more difficult to understand other properties of code that still refers to the architecture pre-refactoring.

                > I have never understood the idea of relying on code history instead of code comments. It seems like it's all downsides, zero upsides.

                Comments are inherently linear to the code, that is sometimes what you need, for complex behaviour, you rather want to comment things along another dimension, and that is what a VCS provides.

                What I write is this:

                    /* This used to do X, but this causes Y and Z 
                       and also conflicts with the FOO introduced 
                       in 5d066d46a5541673d7059705ccaec8f086415102.
                       Therefore it does now do BAR, 
                       see c7124e6c1b247b5ec713c7fb8c53d1251f31a6af */

              • By eru 2026-03-09 23:29

                Both have their place. While I mostly agree with you, there's a clear example where git history is better: delete old or dead or unused code, rather than comment it out.

        • By necovek 2026-03-09 7:13 | 1 reply

          Good naming and good tests can get you 90% of the way to "why" too.

          • By palata 2026-03-09 11:47

            Agreed. Tests are documentation too. Tests are the "contract": "my code solves those issues. If you have to modify my tests, you have a different understanding than I had and should make sure it is what you want".

      • By necovek 2026-03-09 7:11

        Having "grown up" on free software, I've always been quick to jump into code when documentation was dubious or lacking: there is only one canonical source of truth, and you need to be good at reading it.

        Though I'd note two kinds of documentation: docs for how software is built (seldom needed if you have good source code), and docs for how it is operated. When it comes to the former, I jump into code even sooner, as documentation rarely answers my questions.

        Still, I do believe that literate programming is the best of both worlds, and I frequently lament the dead practice of doing "doctests" with Python (though I guess Jupyter notebooks are in a similar vein).
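        For anyone who has not seen the pattern: a doctest embeds a runnable example directly in the docstring, and `doctest.testmod()` (or `python -m doctest file.py`) checks that the code still produces the documented output. A trivial, self-contained sketch (the function is made up for illustration):

```python
def slugify(title):
    """Lowercase a title and join its words with hyphens.

    The example below is executable documentation:

    >>> slugify("Literate Programming")
    'literate-programming'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    # Runs every >>> example in this module's docstrings.
    import doctest
    doctest.testmod()
```

        The prose (docstring), the example, and its verification live in one place: literate programming in miniature.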

        Usually, the automated tests are the best documentation you can have!

      • By Verdex 2026-03-0914:191 reply

        I do read the code instead of the documentation, whenever that is an option.

        Interesting factoid. The number of times I've found the code to describe what the software does more accurately than the documentation: many.

        The number of times I've found the documentation to describe what the software does more accurately than the code: never.

        • By crazygringo 2026-03-0914:361 reply

          You seem to misunderstand the purpose of documentation.

          It's not to be more accurate than the code itself. That would be absurd, and is by definition impossible, of course.

          It's to save you time and clarify why's. Hopefully, reading the documentation is about 100x faster than reading the code. And explains what things are for, as opposed to just what they are.

          • By Verdex 2026-03-0914:391 reply

            Clearly.

            Crazy thing.

            Number of times reading the source saved time and clarified why: many.

            Number of times reading the documentation saved time and clarified why: never.

            Perhaps I've just been unlucky?

            EDIT:

            The hilarious part to me is that everyone can talk past each other all day (reading the documentation) or we can show each other examples of good/bad documentation or good/bad code (reading the code) and understand immediately.

            • By crazygringo 2026-03-0914:471 reply

              > Number of times reading the documentation saved time and clarified why: never.

              OK, so let's use an example... if you need to e.g. make a quick plot with Matplotlib. You just... what? Block off a couple weeks and read the source code start to finish? Or maybe reduce it to just a couple days, if you're trying to locate and understand the code just for the one type of plot you're trying to create? And the several function calls you need to set it up and display it in the end?

              Instead of looking at the docs and figuring out how to do it in 5 or 10 min?

              Because I am genuinely baffled here.

              • By palata 2026-03-0922:14

                Literate programming is not about documenting the public API, it's about documenting the implementation details, right? Otherwise no need for a new name, it's just "API documentation".

                > if you need to e.g. make a quick plot with Matplotlib. You just... what?

                Read the API documentation.

                Now if you need to fix a bug in Matplotlib, or contribute a feature to it, then you read the code.

      • By habinero 2026-03-097:443 reply

        > If good code was enough on its own we would read the source instead of documentation.

        Uh. We do. We, in fact, do this very thing. Lots of comments in code is a code smell. Yes, really.

        If I see lots of comments in code, I'm gonna go looking for the intern who just put up their first PR.

        > I believe part of good software is good documentation

        It is not. Docs tell you how to use the software. If you need to know what it does, you read the code.

        • By crazygringo 2026-03-0914:441 reply

          > Lots of comments in code is a code smell. Yes, really.

          No, not really. It's actually a sign of devs who are helping future devs who will maintain and extend the code, so they can understand it faster. It's professionalism and respect.

          > If I see lots of comments in code, I'm gonna go looking for the intern who just put up their first PR.

          And I'm going to find them to say good job, keep it up! You're saving us time and money in the future.

          • By palata 2026-03-0922:161 reply

            > It's professionalism and respect.

            If someone gives me code full of superfluous comments, I don't consider it professional. It sounds like an intern who felt the need to comment everything because every single line seemed very complex to them.

            • By crazygringo 2026-03-0923:551 reply

              Nobody said anything about "superfluous" comments.

              I'm assuming "lots of comments" means lots of meaningful comments. As complex code often requires. Nobody's talking about `i++; // increment i` here.

              • By palata 2026-03-1012:041 reply

                > I'm assuming "lots of comments" means lots of meaningful comments.

                That's not what literate programming is. Literate programming says that you explain everything in a natural language.

                IMO, good code is largely unsurprising. I don't need comments for unsurprising code. I need comments for surprising code, but that is the exception, not the rule. Literate programming says that it is the rule, and I disagree.

                • By crazygringo 2026-03-1013:521 reply

                  > Literate programming says that you explain everything in a natural language.

                  At a high level. Not line-by-line comments.

                  > IMO, good code is largely unsurprising. I don't need comments for unsurprising code.

                  I've never heard anything like that, and could not disagree more. Twenty different considerations might go into a single line of code. Often, one of them is something non-obvious. So you comment that thing. The idea that "good" code avoids anything non-obvious, that those are "exceptions", is frankly bizarre to me. Unless the code you write is 99% boilerplate or something.

                  • By palata 2026-03-1014:57

                    > So you comment that thing. The idea that "good" code avoids anything non-obvious, that those are "exceptions", is frankly bizarre to me.

                    What I find interesting from the comments here is that there are obviously different perspectives on that. Granted, I cannot say that my way is better. Just as you cannot say that your way is better.

                    But I am annoyed when I have to deal with code following your standards, and I assume you are annoyed when you have to deal with code following mine :-).

                    Or maybe, I imagine that people who defend literate programming mean more comments than I think is reasonable, and people who disagree with me (like you) imagine that I mean fewer comments than you think is reasonable. And maybe in reality, given actual code samples, we would totally agree :-).

                    Communication is hard.

        • By ninalanyon 2026-03-0911:371 reply

          > If you need to know what it does, you read the code.

          True.

          But If you need to know why it does what its does, you read the comments. And often you need that knowledge if you are about to modify it.

          • By palata 2026-03-0911:534 reply

            Do you have an example of such knowledge that you need to get from the comments? I have been programming for 20 years, and I genuinely don't see that much code that is so complex that it needs comments.

            Not that it doesn't exist; sometimes it's needed. But so rarely that I call it "comments", and not a whole discipline in itself that is apparently called "literate programming". Literate programming sounds like "you need to comment pretty much everything because code is generally hard to understand". I disagree with that. Most code is trivial, though you may need to learn about the domain.

            • By ninalanyon 2026-03-0914:31

              I've never properly tried literate programming; it's overkill for hobby projects and not practical for a team unless everyone agrees.

              Examples of code that needs comments in my career tend to come from projects that model the behaviour of electrical machines. The longest running such project was a large object oriented model (one of the few places where OOP really makes sense). The calculations were extremely time consuming and there were places where we were operating with small differences between large numbers.

              As team members came and went and as the project matured the team changed from one composed of electrical engineers, physicists, and mathematicians who knew the domain inside out to one where the bulk of the programmers were young computer science graduates who generally had no physical science background at all.

              This meant that they often had no idea what the various parts of the program were doing, and had no intuition that would make them stop and think or ask a question before fixing a bug in what seemed the most efficient way.

              The problem in this case is that sometimes you have to sacrifice runtime speed for correctness and numerical stability. You can't always re-order operations to reduce the number of assignments say and expect to get the same answers.

              Of course you can write unit and functional tests to catch some such errors but my experience says that tests need even better comments than the code that is being tested.
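              A toy illustration (not from the project described, just the general hazard) of why re-ordering floating-point operations can change the answer when small differences between large numbers are involved:

```python
# With IEEE-754 doubles, addition is not associative: a small term
# can be lost entirely when it is absorbed into a large one first.
big = 2.0 ** 53   # at this magnitude, adjacent doubles are 2.0 apart
small = 1.0

# Mathematically both expressions equal 1.0.
a = (big + small) - big   # `small` is rounded away: result is 0.0
b = (big - big) + small   # reordered: result is 1.0

print(a, b)  # 0.0 1.0
```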

            • By EraYaN 2026-03-0913:271 reply

              Because the why can be completely unrelated to the code (odd business requirements etc). The code can be known to be non-optimal but it is still the correct way, because the embedded system used in product XYZ has some dumb chip in it that needs it done this weird way, etc. Or the CEO loves this way of doing things and fires everyone who touches it. So many possibilities; most technical projects have a huge amount of politics and weird legacy behaviour that someone depends on (including internal stuff: private methods are not guaranteed to not be used by a client, for example). And comments can guard against it, both for the dev and the reviewer. Hell, we currently have clients depending on the exact internal layout of some PDF reports - and not even the rendered layout, but the actual definitions.

              • By palata 2026-03-0914:48

                Again, if it's a comment saying "we need this hack because the hardware doesn't support anything", I don't call it "literate programming".

                Literate programming seems to be the idea that you should write prose next to the code, because code "is difficult to understand". I disagree with that. Most good code is simple to understand (doesn't mean it's easy to write good code).

                And the comments here prove my point, I believe: whenever I ask for examples where a comment is needed, the answer is something very rare and specific (e.g. a hardware limitation). The answer to that is comments where those rare and specific situations arise. Not a whole concept of "literate programming".

            • By Izkata 2026-03-1017:42

              > Literate programming sounds like "you need to comment pretty much everything because code is generally hard to understand".

              You and I read code. Came so naturally for me that I didn't realize others don't. But over the years and with some weird chats I've realized that for a lot of developers it's more like "deciphering code", like they're slowly translating a human language they only vaguely know - and it never even crossed their mind that it was possible to learn a programming language to the point you could just read it.

            • By tonyedgecombe 2026-03-0913:20

              Most of my comments related to the outside world not behaving quite as you would expect.

              Usually something like the spec says this but the actual behaviour is something else.

        • By bottd 2026-03-0912:061 reply

          Not for everything. For code you own, yes, this is often the case. For the majority of the layers below, you still rely on documentation. Take the project you mention going straight to source for: did you follow this thread all the way down through each compiler involved in building the project? Of course not.

          • By palata 2026-03-0914:50

            My understanding is that "literate programming" doesn't say "you should document the public API". It says "you should document the implementation details, because code is hard to understand".

            My opinion is that if whoever is interested in reading the implementation details cannot understand it, either the code is bad or they need to improve themselves. Most of the time at least. But I hear a lot of "I am very smart, so if I don't understand it without any effort, it means it's too complicated".

    • By pdntspa 2026-03-096:202 reply

      > because my prompts are in natural languages, and hence ambiguous.

      Legalese developed specifically because natural language was too ambiguous. A similar level of specificity for prompting works wonders

      One of the issues with specifying directions to the computer with code is that you are very narrowly describing how something can be done. But sometimes I don't always know the best 'how', I just know what I know. With natural language prompting the AI can tap into its training knowledge and come up with better ways of doing things. It still needs lots of steering (usually) but a lot of times you can end up with a superior result.

      • By vnorilo 2026-03-097:011 reply

        Yes. LLMs are search engines into the (latent) space of source code. Stuff you put into the context window is the "query". I've had some good results by minimizing the conversational aspect and thinking in terms of shaping the context: asking the LLM to analyze relevant files, not because I want the analysis, but because I want a good reading in the context. LLMs will work hard to stay in that "landscape", even with vague prompts. Often better than with weirdly specific or conflicting instructions.

        • By ptx 2026-03-0910:261 reply

          But search engines are not a good interface when you already know what you want and need to specify it exactly.

          See for example the new Windows start menu compared to the old-school Run dialog – if I directly run "notepad", I always get Notepad; but if I search for "notepad" then, after quite a bit of chugging and loading and layout shifting, I might get Notepad, or I might get something from Bing, or something entirely different at different times.

          • By vnorilo 2026-03-0911:55

            Indeed, which is not all that different from LLM code generation, to be honest.

      • By Copyrightest 2026-03-0911:05

        [dead]

    • By awesome_dude 2026-03-091:293 reply

      > Natural languages are ambiguous. That's the reason why we created programming languages. So the documentation around the code is generally ambiguous as well. Worse: it's not being executed, so it can get out of date (sometimes in subtle ways).

      I loathe this take.

      I have rocked up to codebases where there were specific rules banning comments because of this attitude.

      Yes, comments can lie; yes, there are no guards ensuring they stay in lockstep with the code they document. But not having them is a thousand times worse - I can always see WHAT code is doing, that's never the problem; the problem is WHY it was done in this manner.

      I put comments like "This code runs in O(n) because there are only a handful of items ever going to be searched - update it when there are enough items to justify an O(log2 n) search"

      That tells future developers that the author (me) KNOWS it's not the most efficient code possible, but it IS when you take into account things unknown by the person reading it
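      Spelled out as code, the kind of comment being described might look like this (hypothetical names; a sketch, not anyone's real codebase):

```python
def find_plan(plans, plan_id):
    # Deliberately a linear O(n) scan: `plans` holds the handful of
    # subscription tiers we offer (currently 4), so anything fancier
    # (sorting + binary search, a dict index) costs more to maintain
    # than it saves. Revisit if tiers ever grow past a few dozen.
    for plan in plans:
        if plan["id"] == plan_id:
            return plan
    return None
```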

      Edit: Tribal knowledge is the worst type of knowledge: it's assumed that everyone knows it and passes it along when new people onboard, but the reality (for me) has always been that the people doing the onboarding have had fragments, or incorrect assumptions about what was being conveyed to them, and just like the children's game of "telephone", the passing of the knowledge always ends in disaster.

      • By AdieuToLogic 2026-03-094:131 reply

        > Yes comments can lie ...

        Comments only lie if they are allowed to become lies.

        Just like a method name can lie. Or a class name. Or ...

        • By bonesss 2026-03-096:061 reply

          Right.

          The compiler ensures that the code is valid, and what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method? … some dude living in that module all day every day exercising monk-like discipline? That is unwanted for a few reasons, notably the routine failures of such efforts over time.

          Module names and namespaces and function names can lie. But they are also corrected wholesale and en-masse when first fixed, those lies are made apparent when using them. If right_pad() is updated so it’s actually left_pad() it gets caught as an error source during implementation or as an independent naming issue in working code. If that misrepresentation is the source of an emergent error it will be visible and unavoidable in debugging if it’s in code, and the subsequent correction will be validated by the compiler (and therefore amenable to automated testing).

          Lies in comments don’t reduce the potential for lies in code, but keeping inline comments minimal and focused on exceptional circumstances can meaningfully reduce the number of aggregate lies in a codebase.

          • By deathanatos 2026-03-096:181 reply

            > what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method?

            And for that matter, what ensures it is even correct the first time it is written?

            (I think this is probably the far more common problem when I'm looking at a bug, newly discovered: the logic was broken on day 1, hasn't changed since; the comment, when there is one, is as wrong as the day it was written.)

            • By awesome_dude 2026-03-097:20

              But, you've still got an idea of why things were done the way they were - radio silence is....

              Go ask Steve, he wrote it, oh, he left about 3 years ago... does anyone know what he was thinking?

      • By larusso 2026-03-094:581 reply

        I don’t disagree here. I personally like to put the why into commit messages though. It’s my longtime fight to make people write better commit messages. Most devs I see describe what they did. And in most cases that is visible from the change-set. One has to be careful here as similar to line documentation etc everything changes with size. But I prefer if the why isn’t sprinkled between source. But I’m not dogmatic about it. It really depends.

        • By awesome_dude 2026-03-095:25

          https://conventionalcommits.org/en/v1.0.0/

          I <3 great commit messages, but I am leaning more heavily toward good comments at the same level the dev is reading - right there in the code - rather than telling them to look at git blame and find the appropriate commit message (keeping in mind that there might have been changes to the line(s) of code, and commits might intertwine, making it a mission to find the commit holding the right message).

          edit: I forgot to add - commit messages are great, assuming the people merging the PR into main aren't squashing the commits (a lot of people do this because of a lack of understanding of our friend rebase)

      • By palata 2026-03-0912:001 reply

        IMHO, you shouldn't have to justify yourself ("yeah yeah, this is not optimal, I know it because I am not an idiot"). Just write your code in O(n) if that's good enough now. Later, a developer may see that it needs to be optimised, and they should assume that the previous developer was not an idiot and that it was fine with O(n), but now it's not anymore.

        Or do you think that your example comment brings knowledge other than "I want you to know that I know that it is not optimal, but it is fine, so don't judge me"?

        • By awesome_dude 2026-03-0919:011 reply

          A little bit of "Don't judge me" and a little bit of "I nearly fell into a trap here, and started writing O(log n) search, but realised that it was a waste of time and effort (and would actually slow things down) - so to save you from that trap here's a note"

          • By palata 2026-03-0922:191 reply

            The risk with that is that something not being obvious to you does not necessarily mean it's not obvious to others.

            Over the years, I have seen many, many juniors wrapping simple CLI invocations in a script because they just learned about them and thought they weren't obvious.

            - clone_git_repo.sh

            - run_docker_container.sh

            I do agree that something actually tricky should be commented. But that's exceedingly rare.

            • By awesome_dude 2026-03-102:451 reply

              I mean, the whole point of explicit being superior to implicit is because what's obvious to some isn't necessarily obvious to everyone.

              Someone following me could look at it and go "well, duh", and that's not going to hurt anyone; but if I didn't put that comment and someone refactored it, then we'd have someone redoing and then undoing work, for no good reason.

              There's that meme where people are told to update the number of hours wasted because people try to refactor some code and have to undo it because it doesn't work.

              • By palata 2026-03-1012:021 reply

                Do you write a comment before every for loop to explain how a for loop works? Do you write a comment above that to remind the reader that the next few lines are written in, say, Go, just like in the rest of the file? Do you write a comment explaining that the text appearing on the screen is actually digital and will disappear when you turn off the computer?

                Obviously you don't, because you assume that the person reading that code has some level of knowledge. You don't say "well, it may not be obvious to everybody, so I need to explain everything".

                I guess where we differ is that to me, a professional software developer should be able to understand good code. If they aren't, they are a junior who needs practice. But I am for designing tools for the professionals, not for the apprentices. The goal of an apprentice is to become a professional, not to remain an apprentice forever.

                • By awesome_dude 2026-03-1020:051 reply

                  > Do you write a comment before every for loop to explain how a for loop works?

                  Thank you for missing the point.

                  It's not about the WHAT, it's about the WHY.

                  For loops are obvious. O(n) being intentional instead of 'lazy' isn't obvious without context. That's what comments preserve - the decision rationale, not the syntax explanation.

                  A professional developer can read code. But they can't read the mind of the author who made a non obvious tradeoff. That's what comments preserve.

                  > I guess where we differ is that to me, a professional software developer should be able to understand good code. If they aren't, they are a junior who needs practice. But I am for designing tools for the professionals, not for the apprentices. The goal of an apprentice is to become a professional, not to remain an apprentice forever.

                  If you are going to make personal attacks, you should know that I work with actual professionals, and they understand that future maintainers, myself included, cannot read their mind on why they chose the path they did.

                  • By palata 2026-03-1023:301 reply

                    > It's not about the WHAT, it's about the WHY.

                    And my point is that I don't care what it is about, I care about whether or not it is useful. I disagree with the literate programming idea that it's always useful to explain why you wrote the code the way you did, and your one example (justifying the O(n)) actually proves to me that I really don't care about your explanation in this particular case. So obviously your one example that I don't find useful won't convince me that all WHY comments are useful.

                    > O(n) being intentional instead of 'lazy' isn't obvious without context.

                    What does such a comment tell me?

                    - That you chose the O(n): it's the "please don't judge me, I know what I am doing" part. It's superfluous, because by default I assume that you know what you are doing.

                    - That you tried to do better and failed. If I believe that we don't need better than O(n), I don't care. If I believe that we need better than O(n), I will reason about doing it myself (no matter what you wrote).

                    - ... I can't see anything else.

                    Now sometimes, of course, there is real knowledge that needs to go into a comment. Like "This is a workaround due to a bug in version 1.4.2 of this proprietary dependency". But that's an exception. I can also totally imagine that some files implement something really tricky and deserve a lot of comments. But in my experience reading and contributing to a lot of open source code from many different projects, most code is not like that. The concept of "literate programming" doesn't say "be pragmatic about comments, use them when it matters", it says "comment the code because it always helps".

                    > If you are going to make personal attacks

                    I am not making personal attacks, I genuinely believe that you are perfectly able to read and understand code that does not follow the "literate programming" paradigm. And if you are not, I still don't see that as a personal attack: with experience you will definitely get there.

                    > cannot read their mind on why they chose the path they did.

                    I just want to repeat it here: it does not matter at the implementation detail level. You may want to document the architecture (including technology choices) of course, but that's not what literate programming is about. You probably want to document the public API (because using an API generally does not require reading the code, and the implementation may be proprietary), but again that's not what literate programming is about. But the implementation details? Unless it's surprising (e.g. a necessary workaround), I don't care about why it was written the way it was, I just care about understanding what it does such that I can reason about it.

                    • By awesome_dude 2026-03-111:541 reply

                      You make a lot of comments for someone that thinks it should be obvious and there's no need for comments.

                      • By palata 2026-03-1110:501 reply

                        Again you prove my point: natural languages are ambiguous and communication is hard.

                        And maybe also that you don't seem to distinguish between natural languages and programming languages: I have not been commenting code here. If you can't tell the difference, maybe it explains why you want to mix them.

                        • By awesome_dude 2026-03-1114:28

                          And silence doesn't achieve the goal at all, as you continually prove.

    • By baq 2026-03-096:581 reply

      Docs and code work together as mutually error correcting codes. You can’t have the benefits of error detection and correction without redundant information.

      • By ghywertelling 2026-03-097:49

        > With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?

        I think this is true. Your point supports it. If either the explanation/intention or the code changes, the other can be brought into sync. Beautiful post. I always hated the fact that research papers don't read like novels, e.g. "OK, we tried this, which was unsuccessful, but then we found another adjacent approach and it helped."

        Computer Scientist Explains One Concept in 5 Levels of Difficulty | WIRED

        https://www.youtube.com/watch?v=fOGdb1CTu5c

        Computer scientist Amit Sahai, PhD, is asked to explain the concept of zero-knowledge proofs to 5 different people; a child, a teen, a college student, a grad student, and an expert. Using a variety of techniques, Amit breaks down what zero-knowledge proofs are and why it's so exciting in the world of cryptography.

    • By casey2 2026-03-095:484 reply

      Programming languages are natural and ambiguous too: what does READ mean? You have to look it up to see the types. The power comes from the fact that it's auditable, but also that you don't need to audit it every time you want to write some code. You think you write good code? Try to prove it after the compiler gets through with it.

      Natural languages are richer in ideas. It may be harder to get working code going from a purely natural description than from code to code, but you don't gain much from just translating code. One is limited only by your imagination; the other already exists - you could just call it as a routine.

      You only have a SENSE for good code because it's a natural language with conventions and shared meaning. If the goal of programming is to learn to communicate better as humans then we should be fighting ambiguity not running from it. 100 years from now nobody is going to understand that your conventions were actually "good code".

      • By musicale 2026-03-096:081 reply

        > Programming languages are natural and ambiguous too

        Programming languages work because they are artificial (small, constrained, often based on algebraic and arithmetic expressions, boolean logic, etc.) and have generally well-defined semantics. This is what enables reliable compilers and interpreters to be constructed.

        • By mexicocitinluez 2026-03-0911:59

          Exactly. Programming is the art of removing ambiguity and making it formal. And it's why the timelines between getting an EXACT plan of what I need to implement vs hazy requirements are so out of whack.

      • By palata 2026-03-0912:05

        > Programming languages are natural and ambiguous too, what does READ mean?

        "READ" is part of the "documentation in natural language". The compiler ignores it entirely, it's not part of the programming language per se. It is pure documentation for the developers, and it is ambiguous.

        But the part that the compiler actually reads is non-ambiguous. It cannot deal with ambiguity, fundamentally. It cannot infer from the context that you wrote a line of code that is actually ironic, and it should therefore execute the opposite.

      • By mexicocitinluez 2026-03-0911:58

        > Programming languages are natural and ambiguous too, what does READ mean?

        Not nearly in the same sense actual language is ambiguous.

        And ambiguity in programming is usually a bad thing, whereas in language it can usually be intended.

        Good code, whatever that means, can read like a book. Event-driven architectures are a good example, because the context of how something came to be is right in the event name itself.

      • By LEDThereBeLight 2026-03-0915:57

        What is good code now is only good code because of the bad programming languages we’ve had to accept for the last hundred years because we’re tied to incremental improvements. We’re tied to static brittle types. But look at natural systems - they all use dynamic “languages.” When you get a cut, your flesh doesn’t throw an exception because it’s connected to the wrong “thing.” Maybe AI will redefine what good code means, because it’s better able to handle ambiguity.

    • By psychoslave 2026-03-0920:591 reply

      >Natural languages are ambiguous. That's the reason why we created programming languages.

      Programming languages can be ambiguous too. The thing with formal languages is more that they impose a stricter and narrower interpretive freedom as a convention wherever they're used. If anything, they are a subset of the human expression space. Sometimes they are the best tool for the job. Sometimes a metaphor is more apt. Sometimes you need some humour. Sometimes you'd better stay in ambiguity to play the game at its finest.

      • By palata 2026-03-10 11:48 (1 reply)

        Programming languages are non-ambiguous, in the sense that there is no doubt what will be executed. It's deterministic. If the program crashes, you can't say "no but this line was a joke, you should have ignored it". Your code was wrong, period.

    • By hosh 2026-03-08 23:54

      I don’t have my LLMs generate literate programming. I do ask it to talk about tradeoffs.

      I have full examples of something that is heavily commented and explained, including links to any schemas or docs. I have gotten good results when I ask an LLM to use that as a template, telling it that not everything in there needs to be used; it cuts down on hallucinations by quite a bit.

    • By k32k 2026-03-09 1:42 (1 reply)

      "But translating my prompts to code is not working as well, because my prompts are in natural languages, and hence ambiguous."

      Not only that, but there's something very annoying and deeply dissatisfying about typing a bunch of text into a thing when you have no control over how it produces an output, nor can the output be reproduced even if the input is identical.

      Agreed, natural language is very ambiguous, and it's becoming more ambiguous by the day ("what exactly does 'vibe' mean?").

      People spoke in a particular way, say 60 years ago, that left very little room for interpretation of what they meant. The same cannot be said today.

      • By caseyohara 2026-03-09 2:32 (1 reply)

        > People spoke in a particular way, say 60 years ago, that left very little room for interpretation of what they meant. The same cannot be said today.

        Surely you don’t mean everyone in the 1960s spoke directly, free of metaphor or euphemism or nuance or doublespeak or dog whistle or any other kind of ambiguity? Then why are there people who dedicate their entire life to interpreting religious texts and the Constitution?

        • By k32k 2026-03-09 2:57 (2 replies)

          Compared with today, on average, they did.

          There's a generation of people that 'typ lyk dis'.

          So yes.

          • By jyounker 2026-03-09 5:54

            Your point is less persuasive than you intended. You complain about linguistic ambiguity, but then you show an example of sensible spelling reform.

          • By ChrisGreenHeur 2026-03-09 7:46

            that example is regarding syntax, and is actually no worse than any other

    • By gwbas1c 2026-03-09 12:22

      > That's the reason why we created programming languages.

      No, we created programming languages because when computers were invented:

      1: They (computers) were incapable of understanding natural language.

      2: Programming languages are easier to use than assembly or writing out machine code by hand.

      LLMs are a quite recent invention, and require significantly more computing power than early computers had.

    • By alkonaut 2026-03-09 9:42

      Maybe if we had a really terse and unambiguous form of English? Whenever there is ambiguity, we insert parentheses and operators to really make clear what we mean. We can enclose sentences in brackets to make the scope of a logical condition explicit, and so on. Oh wait

  • By CharlieDigital 2026-03-09 12:27 (4 replies)

    The easiest thing to do is to have the LLM leave its own comments.

    This has several benefits because the LLM is going to encounter its own comments when it passes this code again.

        > - Apply comments to code in all code paths and use idiomatic C# XML comments
        > - <summary> be brief, concise, to the point
        > - <remarks> add details and explain "why"; document reasoning and chain of thought, related files, business context, key decisions.
        > - <params> constraints and additional notes on usage
        > - inline comments in code sparingly where it helps clarify behavior
    
    (I have something similar for JSDoc for JS and TS)

    Several things I've observed:

    1. The LLM is very good at then updating these comments when it passes it again in the future.

    2. Because the LLM is updating this, I can deduce by proxy that it is therefore reading this. It becomes a "free" way to embed the past reasoning into the code. Now when it reads it again, it picks up the original chain-of-thought and basically gets "long term memory" that is just-in-time and in-context with the code it is working on. Whatever original constraints were in the plan or the prompt -- which may be long gone or otherwise out of date -- are now there next to the actual call site.

    3. When I'm reviewing the PR, I can now see what the LLM is "thinking" and understand its reasoning to see if it aligns with what I wanted from this code path. If it interprets something incorrectly, it shows up in the `<remarks>`. Through the LLM's own changes to the comments, I can see in future passes if it correctly understood the objective of the change or if it made incorrect assumptions.
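
    As a sketch of what this style of prompt might yield in the JSDoc/TS variant the comment mentions (the function, names, and business details below are hypothetical, not from the thread):

```typescript
/**
 * Calculates the late fee for an overdue invoice.
 *
 * @remarks
 * Why: finance flagged uncapped fees as a compliance risk, so the fee is
 * capped at 25% of the outstanding balance. A flat per-day amount was
 * rejected during planning; a percentage keeps small invoices from
 * accruing outsized fees. (Business context like this came from the
 * original prompt and would otherwise be lost when the session ends.)
 *
 * @param balanceCents - Outstanding balance in cents; must be >= 0.
 * @param daysLate - Whole days past the due date; must be >= 0.
 * @returns Fee in cents, capped at 25% of the balance.
 */
function lateFee(balanceCents: number, daysLate: number): number {
  const uncapped = Math.round(balanceCents * 0.001 * daysLate); // 0.1% per day
  return Math.min(uncapped, Math.round(balanceCents * 0.25)); // compliance cap
}
```

    The point is less the specific wording than that the "why" travels with the call site, where a future agent pass will re-read it.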

    • By zozbot234 2026-03-09 15:18 (1 reply)

      In my experience, LLM-added comments are too silly and verbose. It's going to pollute its own context with nonsense and its already limited ability to make sense of things will collapse. LLMs have plenty of random knowledge which is occasionally helpful, but they're nowhere near the standard of proper literacy of even an ordinary skilled coder, let alone Dr. Knuth who defined literate programming in the first place.

      • By CharlieDigital 2026-03-09 15:47 (2 replies)

        The output of an LLM is a reflection of the input and instructions. If you have silly and verbose comments, then consider improving your prompt.

        • By astrange 2026-03-09 18:48 (1 reply)

          Almost nothing in a Claude Code session has to do with "your prompt", it works for an hour afterwards and mostly talks to itself. I've noticed if you give it small corrections it will leave nonsensical comments referring to your small correction as if it's something everyone knows.

          • By CharlieDigital 2026-03-09 19:02 (1 reply)

            It has everything to do with your prompt, and that's why Claude Code has a plan mode: the quality of your planning, prompting, and inputs significantly affects the output.

            Your assertion, then, is that even a 1-sentence prompt is as good as a 5-section markdown spec with detailed coding style guidance and a feature-by-feature specification. This is simply not true; the detailed spec and guidance will always outperform the 1-sentence prompt.

            • By astrange 2026-03-09 23:49

              No, I use plan mode and have several rounds of conversation with it, but lately I've been doing tasks where it does tons of independent research and finds complicated conclusions in an existing old codebase. I don't really feel like either of those count as "a prompt".

              The plan mode is useful because if you do corrections during development mode it does that silly thing where it leaves comments referring to your corrections.

        • By just6979 2026-03-11 14:14

          "then consider improving all your training data and reinforcement feedback"

          Fixed that for you.

          The input is sooo much more than your prompt, that's kind of the point.

    • By solarkraft 2026-03-09 12:44 (3 replies)

      How do you deal with the comments sometimes being relatively noisy for humans? I tend to be annoyed by comments overly referring to a past correction prompt and not really making sense by themselves, but then again this IS probably the highest value information because these are exactly the things the LLM will stumble on again.

      • By CharlieDigital 2026-03-09 12:56

            > How do you deal with the comments sometimes being relatively noisy for humans?
        
        To an extent, that is a function of tweaking the prompt to get the desired level of detail and signal-to-noise ratio from the LLM, e.g. constraining the word count it can use for comments.

        We have a small team of approvers reviewing every PR. Since we can't see the original prompt and flow of interactions with the agent, this approach lets us see that by proxy when reviewing the PR, so it is immensely useful.

        Even for things like enum values, for example. Why is this enum here? What is its use case? Is it needed? Having the reasoning dumped out allows us to understand what the LLM is "thinking".

        (Of course, the biggest benefit is still that the LLM sees the reasoning from an earlier session again when reading the code weeks or months later).

      • By stingraycharles 2026-03-09 14:23

        Inline comments in function body: for humans.

        Function docs: for AI, with clear trigger (“use when X or Y”) and usage examples.
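
        A minimal sketch of that split (hypothetical helper, not from the comment): the doc comment carries the trigger and a usage example for the agent, while the inline comment explains a detail to a human reader.

```typescript
/**
 * Normalizes a user-supplied email address for lookup and storage.
 * Use when comparing emails for equality or keying a map by email;
 * do NOT use when echoing the address back to the user.
 *
 * @example
 * normalizeEmail("  Ada.Lovelace@Example.COM ") // => "ada.lovelace@example.com"
 */
function normalizeEmail(raw: string): string {
  // For humans: trim first so stray whitespace doesn't survive lowercasing.
  return raw.trim().toLowerCase();
}
```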

      • By JamesSwift 2026-03-09 14:35

        I really hate its tendency to leave those comments as well. I seem to have coached it out with some claude.md instructions but they still happen on occasion.

    • By ulrikrasmussen 2026-03-09 14:49 (1 reply)

      Interesting observation. After a human is done writing code, they still have a memory of why they made the choices they made. With an LLM, the context window is severely limited compared to a brain, so this information is usually thrown away when the feature is done, and so you cannot go back and ask the LLM why something is the way it is.

      • By CharlieDigital 2026-03-09 15:08

        Yup; in the moment, you can just have the LLM dump its reasoning into the comments (we use idiomatic `<remarks></remarks>` for C# and JSDoc `@remarks`).

        Future agents see the past reasoning as it `greps` through code. Good especially for non-obvious context like business and domain-level decisions that were in the prompt, but may not show in the code.

        I can't prove this, but I'm also guessing that this improves the LLM's output: since it writes the comment first and then the code, it is effectively writing a mini-spec right before it outputs the tokens for the function (would make an interesting research paper).

    • By 3371 2026-03-10 2:28

      Somehow this made me think I should enforce a rule that agents should sign their comments so they're identifiable at first glance.

  • By rustybolt 2026-03-08 21:29 (7 replies)

    I have noticed a trend recently: some practices (writing a decent README or architecture doc, being precise and unambiguous with language, providing context, literate programming) that were meant to help humans were not broadly adopted, with the argument that it's too much effort. But when it's done to help an LLM instead of a human, a lot of people suddenly seem much more motivated to put in the effort.

    • By zdragnar 2026-03-08 21:36 (3 replies)

      In my years of programming, I find that humans rarely give documentation more than a cursory glance up until they have specific questions. Then they ask another person if one is available rather than read for the answer.

      The biggest problem is that humans don't need the documentation until they do. I recall one project that extensively used docblock style comments. You could open any file in the project and find at least one error, either in the natural language or the annotations.

      If the LLM actually uses the documentation in every task it performs- or if it isn't capable of adequate output without it- then that's a far better motivation to document than we actually ever had for day to day work.

      • By 1718627440 2026-03-09 5:11

        I think this really depends on culture. If you target OS APIs or the libc, the documentation is stellar. You have several standards and then conceptual documentation and information about particular methods all with historic and current and implementation notes, then there is also an interactive hypertext system. I solve 80% of my questions with just looking at the official documentation, which is also installed on my computer. For the remaining I often try to use the WWW, but these are often so specific, that it is more successful to just read the code.

        Once I step out of that ecosystem, I wonder how people even cope with the lack of good documentation.

      • By suzzer99 2026-03-09 1:18

        The other problem is that documentation is always out of date, and one wrong answer can waste more time than 10 "I don't knows".

      • By ijk 2026-03-09 0:02

        I have discovered that the measure of good documentation is not whether your team writes documentation, but is instead determined by whether they read it.

    • By ptak_dev 2026-03-09 10:11

      This is the pattern I keep noticing too. A lot of "good engineering hygiene" that got dismissed as overhead is now paying dividends specifically because agents can consume it.

      Detailed commit messages: ignored by most humans, but an agent doing a git log to understand context reads every one. Architecture decision records: nobody updates them, but an agent asked to make a change that touches a core assumption will get it wrong without them.

      The irony is that the practices that make code legible to agents are the same ones that make it legible to a new engineer joining the team. We just didn't have a strong enough forcing function before.

    • By hinkley 2026-03-08 22:09 (1 reply)

      Paraphrasing an observation I stole many years ago:

      A bunch of us thought learning to talk to computers would get us out of learning to talk to humans, and so spent 4 of the most important years of emotional growth engaging in that, only to graduate and discover we were even farther behind everyone else in that area.

      • By analog31 2026-03-09 4:41 (2 replies)

        This raises an interesting point. I've speculated that if someone has a hard time expressing themselves to other humans verbally or in writing, they're also going to have a hard time writing human-readable code. The two things are rooted in the same basic abilities. Writing documentation or comments in the code at least gives someone two slim chances at understanding them, instead of just one.

        I have the opposite problem. Granted, I'm not a software developer, but only use code as a problem solving tool. But once again, adding comments to my code gives me two slim chances of understanding it later, instead of one.

        • By 1718627440 2026-03-09 5:15 (1 reply)

          > I've speculated that if someone has a hard time expressing themselves to other humans verbally or in writing

          I don't think they actually have problems expressing themselves; code is also just a language with a very formal grammar, and if you use that approach to structure your prose, it's also understandable. The struggle is more to mentally encode non-technical domain knowledge, like office politics or emotions.

          • By analog31 2026-03-09 14:40 (1 reply)

            That's true. But people have had formal language for millennia, so why don't we use it?

            Here's my hunch: formal specification is so inefficient that cynics suspect it of being a form of obstructionism, while pragmatic people realize that they can solve a problem themselves quicker than they can specify their requirements.

            • By 1718627440 2026-03-10 11:55

              > But people have had formal language for millennia, so why don't we use it?

              In case you don't refer to the mathematical notion of formal, then we use formal language all the time. Every subject has its formal terms, contracts are all written in a formal way, specifications use formal language. Anything that really matters or is read by a large audience is written in formal language.

        • By hinkley 2026-03-09 6:58

          I think there’s some of that, but it’s also probably a thing where people who make good tutors/mentors tend to write clearer code as well, and the Venn diagram for that is a bit complicated.

          Concise code is going to be difficult if you can’t distill a concept. And that’s more than just verbal intelligence. Though I’m not sure how you’d manage it with low verbal intelligence.

    • By jpollock 2026-03-08 22:09 (1 reply)

      Documentation rots a lot more quickly than the code - it doesn't need to be correct for the code to work. You are usually better off ignoring the comments (even more so the design document) and going straight to the code.

      • By hinkley 2026-03-08 22:13 (1 reply)

        I maintain that if this is the case on your project, you’re either grossly misappropriating the time and energy of new and junior devs, or you have gone too long since hiring a new dev and your project is stagnating because of it.

        New eyes don’t have the curse of knowledge. They don’t filter out the bullshit bits. And one of the advantages of creating reusable modules is you get more new eyes on your code regularly.

        This may also be a place where AI can help. Some of the review tools are already calling us out on making the code not match the documentation.

        • By habinero 2026-03-09 7:55

          No, they're 100% correct. This has been my experience at every place I've worked at in SV, from startup to FAANG.

          You write the code so you can scan it easily, and you build tools to help, and you ask for help when you need it, but you still gotta build that mental map out.

    • By cmrdporcupine 2026-03-09 0:14 (1 reply)

      I've had LLMs proactively fix my inline documentation. Rather pleasant surprise: "I noticed the comment is out of date and does not reflect the actual implementation" even asking me if it should fix it.

      • By jimbokun 2026-03-09 3:13

        I find LLMs more diligent about keeping documentation up to date than any human developer, including myself.

    • By jimbokun 2026-03-09 3:11

      Well maybe if those people were managing one or more programmers and not writing the code themselves, they would have worked similarly.

    • By what 2026-03-09 4:25

      The difference is that they’re using the LLM to write those readmes and architecture and whatever else documents. They’re not putting any effort in.

HackerNews