Claude Code feels like magic because it is iterative

2025-06-17 09:53 · omarabid.com


It takes these very simple-minded instructions - 'Go fetch a number, add it to this number, put the result there, perceive if it's greater than this other number' - but executes them at a rate of, let's say, 1,000,000 per second. At 1,000,000 per second, the results appear to be magic. — Steve Jobs

Claude Code feels like magic because it is iterative. Any problem can be solved by search: iterate through the space of candidate solutions until you find one that works.

Here, let me illustrate:

intelligence = heuristic * attempt

If your attempts are purely random, you need roughly the size of the search space to find a solution. A heuristic cuts that down significantly. That is essentially what an LLM is.
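The intuition can be sketched in a few lines of code. This is a toy illustration, not anything from Claude Code itself: a blind search samples the space at random, while a heuristic orders candidates so the answer surfaces early. The heuristic here (distance to the target) is a perfect oracle for the sake of the example; a real LLM's "heuristic" is far noisier, which is exactly why extra attempts still matter.

```python
import random

def random_search(target, space, rng):
    """Blind iteration: sample candidates uniformly until one works."""
    attempts = 0
    while True:
        attempts += 1
        if rng.choice(space) == target:
            return attempts

def heuristic_search(target, space, score):
    """Try candidates in order of a heuristic score; a good score
    surfaces the target after very few attempts."""
    for attempts, candidate in enumerate(sorted(space, key=score), start=1):
        if candidate == target:
            return attempts

space = list(range(10_000))
target = 4242

blind = random_search(target, space, random.Random(0))
# Toy heuristic: distance to the target (a perfect oracle, purely illustrative).
guided = heuristic_search(target, space, lambda c: abs(c - target))

print(blind, guided)  # blind needs on the order of the space size; guided finds it immediately
```

With a weaker, noisier heuristic the guided search would need more attempts — which is the second factor in the formula, and what an agent loop supplies.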

Claude Code uses the same models available through the API or the web interface, yet users feel a boost in intelligence. The model didn't get smarter; but because Claude Code can make several attempts on its own, its effective intelligence increases for the end user.

As LLM performance plateaus, additional intelligence can be derived from the second factor: more attempts. In this regard, AI tools have value of their own.

I have been using Claude Code for the last week or so. I initially dismissed it because I thought a chat window where I manually go back and forth was enough. But there is something to be gained from speed and autonomy.

New Era?

I've used LLMs extensively but remained skeptical of their practical value. Claude Code changed that perspective through one concrete test: updating dependencies on a project with compilation and extensive tests. The tool iterated back and forth dozens of times over 30-40 minutes. I intervened occasionally, but mostly watched it work.

Consider the implications of scale. What if Claude Code operated autonomously with massive parallel compute? Could it compress that 40-minute task into 10 minutes? 5 minutes? 1 minute?

If 1 minute proves feasible, could anyone go back to the old way of updating dependencies? And what other tasks could be automated today with current LLM performance?



Comments

  • By bgwalter · 2025-06-17 11:45 · 1 reply

    > What other tasks could be automated today with the current LLMs performance?

    CEO speeches and pro-LLM blogs come to mind.

    Again, there is a vague focus on "updating dependencies" where allegedly some time was saved. Take that to the extreme and we don't need any new software. Freeze Linux and Windows, do only security updates and fire everyone. Because the ultimate goal of LLM shills or self-hating programmers appears to be to eliminate all redundant work.

    Be careful what you wish for. They won't reward you for shilling or automating, they'll just fire you.

    • By msgodel · 2025-06-17 11:51 · 1 reply

      The primary use seems to be satisfying administrative demands that were never productive anyway.

      • By Eddy_Viscosity2 · 2025-06-17 12:03 · 1 reply

        This. They've been pushing these at my workplace and the only thing I can think to use them for is having the LLMs generate empty long-winded corporate-speak emails that I can send to managers when they ask for things that seem best answered by an empty long-winded corporate-speak email. Like "How are you using all these AI tools we are forcing on you without asking if you needed or wanted them?"

        • By hammyhavoc · 2025-06-21 09:00

          And so how exactly are you using them? ;- )

  • By GardenLetter27 · 2025-06-17 11:13 · 1 reply

    This feels a bit too optimistic; in practice it often gets stuck going down a rabbit hole (and burns up your requests / tokens doing it!).

    Like even when I tested it on a clean assessment (albeit with Cursor in this case) - https://jamesmcm.github.io/blog/claude-data-engineer/ - it did very well in agent mode, but the questions it got wrong were worrying because they're the sort of things that a human might not notice either.

    That said I do think you could get a lot more accuracy between the agent checking and running its own answers, and then also sending its diff to a very strong LLM like o3 or Gemini Pro 2.5 to review it - it's just a bit expensive to do that atm.

    The main issue on real projects is that just having enough context to even approach problems, and build and run tests is very difficult when you have 100k+ lines of code and it takes 15 minutes to clean build and run tests. And it feels like we're still years away from having all of the above, plus a large enough context window that this is a non-issue, for a reasonable price.

    • By cyanydeez · 2025-06-18 12:09

      Like, it's a nerd slot machine: it shows you small wins, gets you almost-big wins, and seduces you into thinking "just one more perfect prompt and surely I'll hit the jackpot."

  • By vital_beach · 2025-06-18 13:00 · 2 replies

    I really enjoyed Claude Code. I was using it on some side projects for about a month with API credits, and I signed up for the Max subscription shortly after it started working with Code. Overnight, my account was banned, and I have no idea why.

    It sucks getting banned from such a cool and helpful tool :(

    • By tbcj · 2025-06-19 12:21

      I had two accounts banned - one for Claude and one for the API. I tried to appeal both, asking for more information. The response from Anthropic was non-specific, saying only that usage policies were violated. One account had only been minimally used; the other had never been used. The accounts used email addresses on a domain I control - e.g., anthropic-claude@domain.xyz. I think that might have something to do with it.

      I have a new account now using a Google account and it hasn’t been banned.

    • By bn-l · 2025-06-18 13:26 · 1 reply

      Did the program need to kill child processes a lot?

      • By vital_beach · 2025-06-18 13:40

        nope, just running and stopping dev servers. It may have done a pkill once or twice if something was hanging?

        Either way, using it with API credits was fine for a little over a month, so I don't know if it was that. I got auto-banned only a few hours after paying for Max and re-authing the client to use the subscription. My actual usage didn't change.

HackerNews