https://stillpointlab.com
> We all know that the industry has taken a step back in terms of code quality by at least a decade. Hardly anyone tests anymore.
I see pseudo-scientific claims from both sides of this debate, but this is a bit too far for me personally. "We all know" sounds like Eternal September [1] kind of reasoning. I've been in the industry about as long as the article author, and I think he might be looking at the past through rose-tinted glasses. Every aging generation looks down at the new cohort as if it didn't go through the same growing pains.
But in defense of this polemic, and laying my cards on the table as an AI maximalist and massive proponent of AI coding, I've been wondering the same. I see articles all the time about people writing this and that software using these new tools, and it is so often the case that they never actually share what they built. I mean, I can understand if someone is heads-down cranking out amazing software using 10 Claude Code instances and raking in that cash. But not seeing even one open source project that embraces this and demonstrates it is a bit suspicious.
I mean, where is: "I rewrote Redis from scratch using Claude Code and here is the repo"?
I want to consider the higher-level claims in the article. In between the historical context helpfully provided by the article there is also some speculation about Merkaba, Platonic solids, Flower of Life and other sacred geometry.
There is a premise hidden in those speculations that there is some strong connection between the structure of the universe itself and the structures humans find pleasing when listening to music. And I detect a suggestion that studying the output of our most genius musicians might reveal some kind of hidden information about the universe, specifically related to some kind of "spirituality".
This was a sentiment shared, in some sense, by the deists of the enlightenment. They rejected the scriptures and instead believed that studying the physical universe might reveal the "mind of God".
If we are looking for correspondences between these things, why limit ourselves to Euclidean geometry? Modern physics leans on Riemannian geometry, symmetry, and topology. A wide array of experiments suggests the topology of the universe is far more complicated than the old geometric ideas allow. Most physicists talk about Lie groups, fiber bundles, etc.
If you take "as above, so below" seriously and you want to find connections between cosmology and music, I believe you have to use modern mathematical tools. I think we need to expand beyond geometry and embrace topology. Can we think of the chromatic scale tones as a Group? What operators would we need? etc.
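The "chromatic scale as a group" idea is in fact standard in musical set theory: the twelve pitch classes form the cyclic group Z/12 under transposition, and adding inversion yields the dihedral T/I group of order 24. A minimal sketch (my own illustration, not from the comment):

```python
# Twelve pitch classes as Z/12, with the standard T_n / I_n operators
# from musical set theory (my illustration; names are my own choice).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(pc: int, n: int) -> int:
    """T_n: shift a pitch class up by n semitones, wrapping mod 12."""
    return (pc + n) % 12

def invert(pc: int, n: int = 0) -> int:
    """I_n: reflect a pitch class around C (pc 0), then transpose by n."""
    return (n - pc) % 12

# Transposition alone gives the cyclic group Z/12; together with
# inversion it generates the dihedral group of order 24.
c_major = [0, 4, 7]  # C E G
print([NOTES[transpose(p, 7)] for p in c_major])  # up a fifth: ['G', 'B', 'D']
print([NOTES[invert(p)] for p in c_major])        # inversion:  ['C', 'G#', 'F']
```

Inverting a C major triad around C yields an F minor triad, which is exactly the kind of structural symmetry (major/minor duality) the group lens makes visible.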
It's interesting to try to get into the head of a guy like Coltrane and his mathematical approach, but perhaps we could be pushing new boundaries based on new understanding.
According to the spec, yes a grammar checker would be subject to disclosure:
> ai-modified Indicates AI was used to assist with or modify content primarily created by humans. The source material was not AI-generated. Examples include AI-based grammar checking, style suggestions, or generating highlights or summaries of human-written text.
My experience is: AI written prompts are overly long and overly specific. I prefer to write the instructions myself and then direct the LLM to ask clarifying questions or provide an implementation plan. Depending on the size of change I go 1-3 rounds of clarifications until Claude indicates it is ready and provides a plan that I can review.
I do this in a task_description.md file and I include the clarifications in their own section (the files follow a task.template.md format).
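For illustration only (the comment doesn't show the actual template), a hypothetical task.template.md for this workflow might look like:

```markdown
# Task: <short title>

## Goal
One or two sentences on what the change should accomplish.

## Specification
The instructions I write myself, before involving the LLM.

## Clarifications
Q/A rounds with Claude, appended verbatim (typically 1-3 rounds).

## Implementation plan
Claude's proposed plan, reviewed before any code is written.
```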
I'm still calibrating myself on the size of task that I can get Claude Code to do before I have to intervene.
I call this problem the "goldilocks" problem. The task has to be large enough that it outweighs the time necessary to write out a sufficiently detailed specification AND to review and fix the output. It has to be small enough that Claude doesn't get overwhelmed.
The issue with this is, writing a "sufficiently detailed specification" is task dependent. Sometimes a single sentence is enough, other times a paragraph or two, sometimes a couple of pages is necessary. And the "review and fix" phase again is totally dependent and completely unknown. I can usually estimate the spec time but the review and fix phase is a dice roll dependent on the output of the agent.
And the "overwhelming" metric is again not clear. Sometimes Claude Code can crush significant tasks in one shot. Other times it can get stuck or lost. I haven't yet developed an intuition for how to differentiate these cases.
What I can say is that this is an entirely new skill. It isn't like architecting large systems for human development. It isn't like programming. It is its own thing.
This project is an enhanced reader for Y Combinator's Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can write me here: gabrielepicco.github.io