Try CodeCompanion if you're using neovim. I have a keybind set up that takes the highlighted region, prepends some context which says roughly "if you see a TODO comment, do it; if you see a WTF comment, try to explain it", and presents an inline diff to accept/reject edits. It's great for tactical LLM use on small sections of code.
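For the curious, a minimal sketch of that kind of keybind (the prompt wording, the `<leader>cc` mapping, and the exact `:CodeCompanion` invocation are my assumptions; check the plugin docs for your version):

```lua
-- Visual-mode mapping: pressing ":" in visual mode auto-inserts the '<,'>
-- range, so this sends the highlighted region to CodeCompanion's inline
-- assistant along with a canned instruction. Illustrative only; adjust
-- the prompt and command form to your setup.
vim.keymap.set(
  "v",
  "<leader>cc",
  ":CodeCompanion If you see a TODO comment, do it; "
    .. "if you see a WTF comment, explain it.<CR>",
  { desc = "CodeCompanion: act on TODO/WTF comments in selection" }
)
```

The accept/reject inline diff is CodeCompanion's own behavior for inline edits; the mapping just wires the selection and the standing instructions together.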
For strategic use on any larger codebase though, it's more productive to use something like plan mode in Claude Code.
Considering LLMs are models of language, investing in the clarity of the written word pays off in spades.
I don't know whether "literate programming" per se is required. Good names, docstrings, type signatures, strategic comments re: "why", a good README, and thoughtfully-designed abstractions are enough to establish a solid pattern.
Going full "literate programming" may not be necessary. I'd maybe reframe it as a focus on communication. Notebooks, examples, scripts and such can go a long way toward reinforcing the patterns.
Ultimately that's what it's about: establishing patterns for both your human readers and your LLMs to follow.
Wow, the world is getting much faster at exploiting CVEs
> 67.2% of exploited CVEs in 2026 are zero-days, up from 16.1% in 2018
But the exploit rate (the percentage of all published CVEs that are actually exploited in the wild) has dropped from a high of 2.11% in 2021 to 0.64% in 2026. Meaning we're either getting worse at exploitation (not likely) or reporting more obscure, pragmatically not-really-an-issue findings that can't be replicated IRL.
So we're in a weird situation:
The vast majority of CVEs (99.4%) will never see the light of day as an actual attack. Lots of noise, and getting noisier.
But those that do will happen with increasing speed! So there are increased consequences for missing the signal.
Don't worry about it. But don't blink.
The entire zeitgeist of software technology revolves around the assumption that making things efficient, easy, and quick is inherently good. Most people who are "sitting in front of rectangles, moving tiny rectangles" sometimes have grandiose notions of their work's importance: we're making X work better for the good of Y to enable Z. Abstract shit like that.
No man, you're just making X easier. If the world needs more X, fine. If not, whoops.
The detachment from reality makes it all too easy to deceive yourself into thinking "hey this actually helps people".
I agree. I think this is the LLM superpower: making quick prototypes that allow us to speak concretely about technical tradeoffs.
My comment was pointed at people who use AI specifically with the goal of making anything easier and faster. Doesn't matter what it is. "Faster and easier is better," as though doing more of the same shit were a primary goal in itself.
If you're using AI to explore better technical decisions, you're doing it right! AI can be a catalyst for engineering and science. But not if we treat it like a mere productivity tool. The quality of the thing enabled by the AI very much matters.