Claude Code is all you need

2025-08-11 14:03 · dwyer.co.za

These are only a few of the things I've built with Claude Code since I started using it. Most of them are experimental, and I've read reports of it not doing as well on massive real-world code bases, but from what I've seen I'd be surprised if it weren't still useful in those contexts given enough guidance. I'm still surprised by how much better a tool it is when given a lot of context and input. Here are a few other toy projects I've had it spit out - all things that I've wanted to build for months or years but never found the time for. Now you can do stuff like this in a few minutes or hours instead of days or weeks.

Building a HackerNews comment ranker plugin

I've often been annoyed by comments on HackerNews that are not at all about the article they're commenting on. "Bitcoin adopts a new FlibbityGippity Protocol and can now handle 2.3 transactions per day" and someone will comment that all crypto projects are scams or something. Note that I don't care about the quality of the comment, or whether or not I agree with it; I just wanted a visual way to skip over the 'noise' comments that aren't actually about the article at all.

I tried to build this before but got distracted by more important stuff, so I figured I'd start over with Claude Code.

HackerNews comment ranking plugin demo

It took a few tries before it could actually display the badges correctly within HN's (pretty simple) HTML structure, but after a few rounds of 'no try again' or 'add more debugging so I can paste the errors to you', it created almost exactly what I had envisioned.

I was surprised by how good it looked (much better than my normal hacky frontends), and the details it had added unprompted (like the really nice settings page, even with a nod to the HN orange theme).

HackerNews plugin settings page

The actual ranking (which I'm using OpenAI for, not Anthropic) is not that good. It could probably be improved with a better prompt and some more examples of what I think is a '1' comment or a '5' comment, but it works and looks at least directionally accurate so far.
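For the curious, the core of the ranking is just one model call per comment. Here's an illustrative sketch of the shape of it, not the plugin's actual code (the function names, prompt wording, and fallback score are all made up for this example), with the API call itself omitted:

```python
import re

def build_relevance_prompt(article_title: str, comment: str) -> str:
    """Ask the model for topical relevance only, not comment quality."""
    return (
        "Rate how relevant this HackerNews comment is to the article "
        f"titled {article_title!r}, on a scale of 1 (off-topic noise) "
        "to 5 (directly about the article). Reply with a single digit.\n\n"
        f"Comment: {comment}"
    )

def parse_score(reply: str) -> int:
    """Pull the first digit 1-5 out of the model's reply; default to a neutral 3."""
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else 3
```

The badge the plugin draws is then just a colour mapped from that score, which is why a better prompt (with worked examples of a '1' and a '5') is the obvious lever for improving it.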

Building Poster Maker - A Minimal Canva Replacement

AI is getting good at graphic design, and I knew people who were using it to generate basic posters. They liked that the AI could choose good background images, and generally make things look nice with well-sized fonts etc, but they were frustrated that the AI was still only 80% good at generating images of text, and often had spelling errors or other artifacts.

I was going to tell them to use Canva or Slides.new or another alternative. I tried them out so I could do a quick tutorial on how to use them and realised they were all kinda bad. Either enshittified to death, or lacking the basic AI features, or too complicated for non-technical people to use.

LFG!

This was the project that felt a bit more like engineering and less like vibe coding than the others. I knew what I wanted: a really simple interface to combine images and text and get an A4 PDF out. I'd tried to build something like this before and looked at different PDF creation libraries, HTML→PDF flows, and seen that it's not the easiest problem to solve.

Last time I solved a similar problem (in 2018) I ended up hacking in Google Docs to create A4 PDFs, but that was more of a templating problem and Google Docs isn't great for layout stuff.

So I built posters.dwyer.co.za. It lets you generate the background image with AI (I used Claude Code to build everything, but I told it to use GPT for image generation, as that's what I'd used before and I think it's better? I honestly don't know whether Anthropic even has an image generation API, and it seemed easier to just use what I knew).

Poster Maker interface

This project took a few hours of back and forth. I was really impressed by some of Claude's UI knowledge (it one-shotted the font selection when I told it what I wanted), but I also saw its limitations in other aspects: it kept overlaying elements in a very user-unfriendly way, the sidebar would hide and show and move everything around, and it clearly has no idea what it's like to be a human using something like this.

But after telling it exactly where to put elements and what they should do, I got more or less exactly what I had envisioned. I was surprised at how well the PDF export worked after the sixth or seventh attempt produced blank or cut-off files. It now seems really good at giving me a PDF that is exactly like the preview version, which to anyone not in tech sounds like a really basic piece of functionality, and which anyone who has actually tried it knows is like the XKCD bird problem:
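Part of why previews and PDFs drift apart is unit mismatch: CSS thinks in pixels, PDF in physical page sizes. One way to keep them in sync is to derive both from the same fixed A4 dimensions - this is a sketch of that general approach, not the app's actual code:

```python
# A4 is 210 x 297 mm; CSS assumes 96 px per inch (1 inch = 25.4 mm).
MM_PER_INCH = 25.4

def mm_to_px(mm: float, dpi: int = 96) -> int:
    """Convert a physical length to whole pixels at a given DPI."""
    return round(mm / MM_PER_INCH * dpi)

# Render the on-screen preview and the exported page at these same
# dimensions so the PDF matches the preview exactly.
A4_PX = (mm_to_px(210), mm_to_px(297))
```

The same function with a higher DPI gives you print-resolution dimensions for the export without changing the layout.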

XKCD: Tasks

XKCD #1425: Why seemingly simple tasks can be surprisingly hard for computers

Doing admin with Claude Code

This isn't really a project I built, but I'm using Claude Code more and more to do non-coding related tasks. I needed to upload bank statements for my accountant, but my (shitty) South African banks don't name the files well. I can download each month from the web app, but it calls them all "Unknown (5)" or whatever with no extension so it's a pain to go and name them correctly.

I asked Claude to rename all the files and I could go do something else while it churned away, reading the files and figuring out the correct names.

I then took it a step further and told it to merge them all into a single CSV file (which also involved extracting random header tabs off the badly formatted XLSX files that my bank provides), and classifying all expenses into broad and specific categories. I told it a few things like the roles of specific people in the team and I think it one-shotted that too. I'm not going to fire my bookkeepers yet, but if I were a bookkeeper I'd definitely make sure to be upskilling with AI tooling right now.
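The merge-and-classify step is mostly mechanical, which is probably why Claude could one-shot it. A rough sketch of the shape of such a script (the column names, categories, and keywords here are all illustrative, not what my statements actually contain):

```python
import csv
from pathlib import Path

# Illustrative keyword -> (broad, specific) category map.
CATEGORIES = {
    "salary": ("staff", "payroll"),
    "aws": ("software", "cloud hosting"),
    "uber": ("travel", "ground transport"),
}

def classify(description: str) -> tuple[str, str]:
    """Match a transaction description against the keyword map."""
    desc = description.lower()
    for keyword, cats in CATEGORIES.items():
        if keyword in desc:
            return cats
    return ("uncategorised", "uncategorised")

def merge_statements(folder: Path, out_path: Path) -> None:
    """Concatenate per-month statement CSVs into one classified CSV."""
    with out_path.open("w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["date", "description", "amount", "broad", "specific"])
        for statement in sorted(folder.glob("*.csv")):
            with statement.open(newline="") as f:
                for date, description, amount in csv.reader(f):
                    writer.writerow([date, description, amount, *classify(description)])
```

The real version also had to strip the junk header tabs off each XLSX export before any of this could run.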

Bank statements renaming tool

Using Claude Code as my Text Editor

I'm a die-hard vanilla vim user for all writing, coding, configuration and anything else that fits. I've tried nearly every IDE and text editor out there, and I was certainly happy to have a real IDE when I was pushing production Java for AWS, but vim is what I've always come back to.

Switching to Claude Code has opened a lot of new design possibilities. Before (did I mention I suck at front end coding), I was restricted to whatever output was produced by static site generators or pandoc templates. Now I can just tell Claude to write an article (like the one you're currently reading) and give it some pointers regarding how I want it to look, and it can generate any custom HTML and CSS and JavaScript I want on the fly.

This article being written

I wrote this entire article in the Claude Code interactive window. The TUI flash (which I've read is a problem with the underlying library that's hard to fix) is really annoying, but it's a really nice writing flow to type stream of consciousness stuff into an editor, mixing text I want in the article, and instructions to Claude, and having it fix up the typos, do the formatting, and build the UX on the fly.

Nearly every word, choice of phrase, and the overall structure is still manually written by me, a human. I'm still on the fence about whether I'm just stuck in the old way by preferring to hand-craft my words, or if models are generally not good at writing.

When I read answers to questions I've asked LLMs, or the long research-style reports they create, the writing style is pretty good and I've probably read more LLM-generated words than human-generated words in the last few months.

But whenever I try to get them to produce the output I want to produce, they fail hard unless I spend as much effort on the prompt as I would have on writing the output myself.

Simon Willison calls them 'word calculators' and this is still mainly how I think of them. Great at moving content around (if you want a summary of this now very long article, an LLM will probably do a great job) but pretty useless at generating new stuff.

Maybe us writers will be around for a while still - let's see, and lfg.


Read the original article

Comments

  • By epiccoleman 2025-08-11 14:30 | 8 replies

    I love this article just for the spirit of fun and experimentation on display. Setting up a VPS where Claude is just asked to go nuts - to the point where you're building a little script to keep Claude humming away - is a really fun idea.

    This sort of thing is a great demonstration of why I remain excited about AI in spite of all the hype and anti-hype. It's just fun to mess with these tools, to let them get friction out of your way. It's a revival of the feelings I had when I first started coding: "wow, I really can do anything if I can just figure out how."

    Great article, thanks for sharing!

    • By bkettle 2025-08-12 0:16 | 15 replies

      > “wow, I really can do _anything_ if I can just figure out how”

      Except this time it’s “if I can just figure out how and pay for the Claude API usage”.

      This is one of the sadder things about AI usage getting more standard that I haven’t seen discussed much: the barrier to entry is now monetary rather than just knowledge-based, which will make it _much_ harder for young people with no money to pick up.

      Yes, they can still write code the manual way, but if the norm is to use AI I suspect that beginner’s guides, tutorials, etc. will become less common.

      • By infecto 2025-08-12 12:52 | 3 replies

        There has generally always been some barrier. Computer access, internet access, books etc. If AI coding stays around, which looks like it will, it will just be the current generations barrier.

        I don’t think it is sad at all. There are barriers to all aspects of life, life is not fair and at least in our lifetimes will never be. The best anyone can do is to help those around them and not get caught up the slog of the bad things happening in the world.

        • By rurp 2025-08-12 15:54 | 2 replies

          Having been a poor person learning how to code, I'd say there's a huge difference between just needing a computer and needing that plus a subscription costing dozens of dollars per month.

          I don't know that there's much we can do about that potentially becoming the new normal in the future, but it bums me out.

          • By reidjs 2025-08-12 16:01 | 2 replies

            There are free and offline options, like Llama.cpp, but you will have to pay by giving up your privacy to Meta (or similar large companies)

            • By Revisional_Sin 2025-08-17 19:57

              How does using an offline model give up your privacy?

              Also, running models locally requires good hardware to get acceptable performance. It's still a large barrier to entry.

            • By rurp 2025-08-12 16:59

              Sure, for now, and maybe in the future. But it's possible that paid models will end up greatly outpacing free ones, and at some point the companies controlling them will stop burning billions of dollars per month and jack up prices.

          • By infecto 2025-08-13 10:19

            No it’s not much different. I grew up poor. It was a struggle to have internet access.

        • By xnorswap 2025-08-12 13:35 | 3 replies

          But traditional barriers have been easier to knock down with charity, because it's easier to raise charity money for capex than for opex.

          It was common to have charity drives to get computers into schools, for example, but it's much harder to see people donating money for tokens for poor people.

          Previous-generation equipment can be donated, and can still spark an interest in computing and programming. Whereas you literally now can't even use ChatGPT-4.

          • By conradev 2025-08-12 13:52

            Small models and processors are going to continue improving to the point that you’ll be able to vibe code locally on your phone at some point.

            When the iPhone came out, not everyone had a smartphone. Now 90% of the US has a smartphone, and many of these smartphones run generative local models.

          • By pc86 2025-08-12 14:08 | 1 reply

            "It's harder to convince other people to pay for this for me" is an insane criticism. Not every AI model needs a premium account, you can even run many excellent models locally if you don't want to pay for an internet connection.

            At some point you just have to accept that yes things are easier if you have a little bit of spending money for things. That's not "sad" it's a basic fact of life.

            • By xnorswap 2025-08-12 14:23 | 1 reply

              You have been mean with your interpretation of my statement.

              I am not saying, "It's harder to convince other people to pay for this for me".

              I am saying, "It is harder for me to pay for this for someone else".

              • By pc86 2025-08-12 14:32 | 1 reply

                It probably is harder to convince others to pay for opex than capex - maybe that's a good thing, maybe not.

                But it's certainly not any harder for you to donate this money to someone because they want to spend that money on tokens instead of a computer, for example.

                • By Zambyte 2025-08-12 15:57

                  I think you're still misinterpreting them. It's harder to donate tokens than it is to donate books or a computer. Not all donations are cash.

                  Though, in a sense, I am donating tokens. I run an LLM box at my apartment, and I let some people in my community have a free account through my hosted Open WebUI.

          • By infecto 2025-08-13 10:23

            This feels like picking a random thing against LLMs to complain about. These tools are not even necessary today I am not sure why they would be necessary tomorrow beyond efficiency. If that day does come though, you would have to assume open source models would also be coming a long way.

        • By michaelrpeskin 2025-08-12 17:36

          I attribute my barriers to entry as things that forced me to really learn. All my family could afford was a 386 with 16MB of ram when 486s where pretty common. I had to really hack to make things work. Working under constraints meant I was exploring limits and understanding efficiency.

          I still carry that in my day job which I _think_ helps me write better code - even in unconstrained systems, thinking in terms of memory and instruction efficiency can still help make better/faster code.

      • By mickael-kerjean 2025-08-12 1:00 | 2 replies

        Yep, I used to spend a lot of time learning PHP on a web server that was part of my internet subscription. If it hadn't been free, I would never have learned how to create websites and would never have gotten into programming; the trigger was that free web hosting with PHP, included in the internet connection my parents were already paying for.

        • By alwillis 2025-08-12 3:58 | 3 replies

          There are plenty of free models available, many of which rival their paid counterparts.

          A kid interested in trying stuff can use Qwen Coder for free [1].

          If the kid's school has Apple Silicon Macs (or iPads), this fall, each one of them will have Apple's 3 billion parameter Foundation Models available to them for free [2].

          Swift Playground [3] is a free download; Apple has an entire curriculum for schools. I would expect an upgrade to incorporate access to the on-board LLM.

          [1]: https://openrouter.ai/qwen/qwen3-coder:free

          [2]: https://developer.apple.com/videos/play/wwdc2025/286

          [3]: https://developer.apple.com/swift-playground/

          • By yoz-y 2025-08-12 8:11 | 2 replies

            I guess hardware being able to run a local model will eventually get cheap enough, but for a lot of people even buying an Apple device or something with a good enough GPU is prohibitive.

            • By PeterStuer 2025-08-12 10:34

              True, it will get cheap to run today's frontier models. But by that time, how much more advanced will the frontier models of that day be?

              It is a real question. It all depends on whether the AI future is linear or exponential.

            • By hadlock 2025-08-12 18:06

              I think we are already there. You can run a pretty OK LLM on a 4GB Raspberry Pi that will write most any simple 20-150 line bash script today, or a toy application in Python/Rust. Old laptops pulled out of the trash are probably capable of running smaller LLMs and can explain how functions work. They're no Claude Code, but you probably want a rough-around-the-edges LLM that can't do everything for you if you're planning on using it to learn to code.

          • By andai 2025-08-12 18:36

            Speaking of free models on OpenRouter, DeepSeek R1 0528 is also available for free.

            https://openrouter.ai/deepseek/deepseek-r1-0528:free

          • By socalgal2 2025-08-12 8:15

            Swift and swift playground might be a good introduction to programming, but it feels likely not to lead to as many opportunities as a more popular system. And I don’t just mean job opportunities.

        • By pc86 2025-08-12 14:04

          "Already being paid for by someone else" is very different than "free."

      • By rurp 2025-08-12 15:51

        Very true. One of the greatest aspects of the field is how accessible it is, and that is certainly going to get worse with LLM usage.

        I'd probably be toiling away in a less productive industry if I hadn't been able to easily download Python and start learning it for free.

      • By nostrademons 2025-08-12 2:34 | 3 replies

        They're not that expensive for anyone who has the tech skills to actually make good use of them. I've been playing around with Claude Code, using API credits rather than the monthly fee. It costs about $5 per one-hour session. If you're going to be doing this professionally it's worth springing for the $100/month membership to avoid hitting credit limits, but if you just want to try it out, you can do so without breaking the bank.

        A bigger question for me is "Does this actually increase my productivity?" The jury is still out on that. I've found that you really need to babysit the algorithm and apply your CS knowledge: be very clear about what you're going to tell it, don't let it make bad assumptions, and in many cases spell out the algorithm in detail. But it seems to be very good at looking up API details, writing the actual code, and debugging (if you guide it properly), all things that involve a non-trivial amount of tedium in everyday programming.

        • By eloisius 2025-08-12 2:49 | 2 replies

          12-year-old me wasn’t putting my tech skills to good use enough to pay $5 every time I sat down at the computer. I was making things though, and the internet was full of tutorials, chat rooms, and other people you could learn from. I think it would be sad if the same curious kid today was told “just pay $5 and ask Claude” when pestering someone in IRC about how to write a guestbook in Perl.

          • By nostrademons 2025-08-12 2:55 | 4 replies

            12-year-old me wasn't either, but he was noodling around on a computer that cost $2500 (more like $5500 in today's dollars). I think our parents loved us very much and must have had some means to afford the capital cost of a computer back then.

            I don't see my 7-year-old paying $5 for each hour he wants to program (and no way in hell would I give him my credit card), but I could easily envision paying $20/month for a Claude subscription and letting him use it. We pay more than that for Netflix & Disney+.

            • By kelnos 2025-08-12 12:06 | 3 replies

              > noodling around on a computer that cost $2500 (more like $5500 in today's dollars)

              Wow! 12-year-old me was noodling around on a computer that my dad brought home from work because it would have otherwise ended up in landfill. We had very little money for computers back then, and I was thrilled when my parents gave me a budget to buy parts to build my own from scratch when I was about to go off to college (I'd saved up a bit myself, but not nearly enough).

              I think your experience is pretty privileged, and not at all common.

              • By nostrademons 2025-08-12 20:42

                There's always more to the story than the Internet assumes.

                We were quite possibly less privileged than you were, if your dad brought a computer home from work. I grew up with a teacher and a househusband for parents; single-income, and that income made about 1/3 of what engineers or other computer professionals made. My kid had more passport stamps at age 2 than I did at age 18.

                It was $2500 because it was a Mac LC, and it was a Mac LC because that could take an Apple IIe card and run both Mac and Apple software, and that was important because my mom was a teacher and had a large library of educational software at school that she could take home. Recall that in those days, software was sold in retail stores (no Internet), and cost $50-100 for kiddie stuff, and like $400 for productivity and compilers. 25 titles and the cost of the computer paid for itself in free software; I think we used about that many.

                It's a matter of priorities. My parents always prioritized my education: they bought a computer, and whatever software didn't come from my mom's workplace, and any books I wanted, and paid for my college education in full. We didn't have a whole lot other than that: we didn't take a lot of vacations or plane trips (and a single plane trip would cost more than that $2500 in those days), ran our cars into the ground (my mom owned 2 cars over my entire childhood), wore hand-me-downs.

              • By matwood 2025-08-12 13:38

                Everyone has some level of privilege. I didn't get my first PC until I was a freshman in college. I had to spend part of my college loan buying one (~$3k IIRC). Up to that point I had only played with the Apple IIcs and the few Macs they had at my high school.

                Information on programming also wasn't as readily available as it is now. I used to go the book stores and use pencil and paper and copy out solutions since $50+ for a book was way more money than I could spend.

                Everything today is crazy inexpensive for the value.

              • By pc86 2025-08-12 14:15

                So what?

                That sounds dismissive, and maybe it is, but I'm being serious here. What is the point of coming here and saying "when I was 12, my parents had less money than yours did when you were 12?" Privilege is relative, "common" is relative, and constantly being dragged into oppression Olympics of who has or had things slightly worse is exhausting and not conducive to conversation.

                Better keep in mind that someone here almost certainly had it even worse than you when they were in elementary school, lest you go a few seconds without acknowledging your privileged upbringing, for some reason, in a conversation where it bears absolutely no relation to anything.

            • By jack_pp 2025-08-12 5:14

              12-year-old me was mostly procrastinating, but sometimes fired up Pascal, which required me to insert a floppy disk into my 486 Compaq machine for it to work. The machine was a donation from my aunt and could only run DOS.

              However, the ChatGPT or Gemini free tier is more than enough for a kid to figure out how Python works and build some simple software. While I have the Gemini subscription, I only got it because my family Drive storage was nearly full. I could've probably gotten by with ChatGPT free to just stop using Stack Overflow.

            • By piva00 2025-08-12 8:22 | 1 reply

              10-year-old me was programming on a salvaged 386 that my dad got from a friend after the company they worked at went bankrupt, and left the machine as trash. Instead of Christmas gifts I asked for programming books and pitched in some of the birthday money my grandparents would give me (about US$ 2 every birthday).

              Not everyone was privileged, some of us were just lucky.

            • By dingnuts 2025-08-12 4:14

              I'm extremely privileged and I had a quarter of what you did growing up. Your experience and your kids' is not typical.

          • By sgarland 2025-08-12 13:42

            12-year-old me had (or rather, my family had) a Celeron 333 MHz and a Pentium III 550 MHz, both from Gateway, because that was the sole awesome perk my dad got from working there: literally free computers, with a required number of years of employment to pay them off. In 2000, the P3 was still pretty hot shit. I dual-booted them with every Linux distro under the sun. Since we had dial-up, the only way I had those distros was from 4-H [0], which at the time in Nebraska had a partnership with University of Nebraska to do tech instruction; once a quarter, we’d drive down to a campus (usually UNL) and spend a weekend learning something (LAMP stack, hardware troubleshooting, etc.), and having a LAN party at night. Also we had free access to their (at the time) screamingly fast internet, so I would download distros and packages to try out later.

            My online upbringing was very much of the RTFM variety, and I am convinced that was and is a good method to learn. It’s not like the grumpy graybeards were cruel, they just didn’t want to waste their time answering the same “how do I…” questions from noobs. If you explained what you were experiencing, what you had read, and what you had tried, they were more than happy to help out. I don’t think that’s an unreasonable approach.

            [0]: https://4-h.org/

        • By hdjrudni 2025-08-12 3:17 | 2 replies

          I think you said it. $100/mo and you're not even sure if it'll increase your productivity. Why on earth would I pay that? Do I want to flush $100 down the toilet and waste several days of my life to find out?

          • By nostrademons 2025-08-12 3:51

            You don't have to pay $100 to find out, you can do that for ~$5-20 by directly buying API credits.

            I don't know for sure whether it's worth it yet. Further experimentation is needed, as well as giving it an honest shot and trying to learn the nuances of the tool. But the way I look at it - if this actually is a future career path, the net present value of its payoff is measured in the millions of dollars. It's worth spending ~$20 and a few nights of my time to figure that out, because the odds can be pretty damn low and still have the expected value pencil out. It's sorta like spending $200 on 1/4 of a Bitcoin in 2013 because I was curious about the technology - I fully expected it to be throwing money down the toilet, but it ended up being quite worth it. (I wish I'd had the same mindset when I could've bought into the Ethereum ICO at a penny or so an ETH.)

          • By handfuloflight 2025-08-12 3:55 | 1 reply

            Are you spending every $100 absolutely efficiently? How do you know?

            • By jrflowers 2025-08-12 8:38 | 1 reply

              Can I have a hundred dollars? I assure you giving it to me is efficient

              • By kyleee 2025-08-12 15:19

                And it’s even more efficient to give me $90

        • By barrell 2025-08-12 10:02

          I have the tech skills to use them. In my 30s and I could not spend $5 on a one hour coding session even if it 10xed my productivity. 1-2 hours would literally break the bank for me

      • By mark_l_watson 2025-08-12 3:39 | 1 reply

        yes indeed, who will pay? I run a lot through open models locally using LM Studio and Ollama, and it is nice to only be spending a tiny amount of extra money for electricity.

        I am retired and not wanting to spend a ton of money getting locked long term into using an expensive tool like Claude Code is a real thing. It is also more fun to sample different services. Don’t laugh but I am paying Ollama $20/month just to run gpt-oss-120b very fast on their (probably leased) hardware with good web search tooling. Is it worth $20/month? Perhaps not but I enjoy it.

        I also like cheap APIs: Gemini 2.5-flash, pro when needed, Kimi K2, open models on Groq, etc.

        The AI, meaning LLM, infrastructure picture is very blurred because of so many companies running at a loss - which I think should be illegal because long term I think it is misleading consumers.

        • By piva00 2025-08-12 8:15 | 1 reply

          > The AI, meaning LLM, infrastructure picture is very blurred because of so many companies running at a loss - which I think should be illegal because long term I think it is misleading consumers.

          In a sense it is illegal, even though the whole tech scene has been doing it for decades, price dumping is an illegal practice and I still don't understand why it has never been considered as such with tech.

          Most startups with VC investors work only through price dumping, most unicorns came to be from this bullshit practice...

          • By nl 2025-08-12 9:57 | 2 replies

            "Price dumping" isn't an economic term in common use.

            "Dumping" in international trade is somewhat similar but the reasons that is illegal are very different: https://en.m.wikipedia.org/wiki/Dumping_(pricing_policy)

            Pricing at a loss by VC funded companies is great for consumers. It rarely is at a loss though - they look at the lifetime value.

            Pricing at a loss by big tech could be viewed as anticompetitive. Personally I like that Gemini keeps OpenAI's prices lower, but one could argue it has stopped OpenAI's growth.

            • By piva00 2025-08-12 11:08 | 1 reply

              > Pricing at a loss by VC funded companies is great for consumers. It rarely is at a loss though - they look at the lifetime value.

              It's great for consumers only in the short term. The strategy of driving out competitors that are not as well funded has only one goal: to remove competition in the long term so prices can be raised at will, since most competitors won't survive.

              Edit: yes, technically dumping is a specific type of predatory pricing, so swap "price dumping" on my first comment to "predatory pricing" instead.

              • By nl 2025-08-12 13:01 | 2 replies

                It doesn't have one goal.

                In fact driving out competition is rarely the goal at all.

                Instead the goal is usually to reduce the barrier to people trying the thing - especially when it is a developer API which you hope developers will incorporate into their product.

                • By piva00 2025-08-12 14:09

                  > In fact driving out competition is rarely the goal at all.

                  Driving out competition is definitely a goal: the further ahead you can snowball, the more attractive an investment your company becomes, since your competition will be bleeding money. Attrition is definitely used as a tactic by VCs when a startup gets traction. Hell, it's one of the arguments they use to push further investment rounds: "this startup is very well capitalised and the competition has 1/10th of their funds; investing elsewhere is a losing proposition".

                  > Instead the goal is usually to reduce the barrier to people trying the thing - especially when it is a developer API which you hope developers will incorporate into their product.

                  I thought we were talking about unicorns such as Uber, AirBnb, etc., not some dev startup packaging APIs to serve other startups which is a whole other incestuous industry.

                • By guappa 2025-08-13 7:01

                  AI company founder and CTO defends those practices… yawn.

            • By Filligree 2025-08-12 10:25

              I wouldn’t assume Gemini is being run at a loss, though. And even if it weren’t, I’m not sure that would help OpenAI much.

              Google uses Google hardware, which costs them 1/10 what nvidia hardware costs everyone else.

      • By coldtea 2025-08-12 11:16 (3 replies)

        >the barrier to entry is now monetary rather than just knowledge-based, which will make it _much_ harder for young people with no money to pick up.

        Considering opportunity cost, a young person paying $20 or $100 per month for Claude API access is way cheaper than a young person spending a couple of years learning to code, and some months coding something the AI can spit out in 10 minutes.

        AI coding will still create generations where even programming graduates know fuck all about how to code, and are also bad at reasoning about the AI-produced code they depend on or thinking systematically (and there won't be any singularity to bail them out), but that's beside the point.

        • By sdenton4 2025-08-12 12:07

          Applying opportunity cost to students is a bit strange...

          People need to take time to get good at /something/. It's probably best to work with the systems we have and find the edge where things get hard, and then explore from there. It's partly about building knowledge, but also about gumption and getting some familiarity with how things work.

        • By typewithrhythm 2025-08-12 13:23

          But all the other students are doing the same, so the expectation will quickly become use of tools for potentially years.

          My introduction to programming was through my dad's outdated PC and an Arduino, and that put me on par with the best funded.

      • By palata 2025-08-12 11:57 (1 reply)

        > This is one of the sadder things about AI usage getting more standard that I haven’t seen discussed much—-the barrier to entry is now monetary

        Agreed. And on the one hand you have those who pay an AI to produce a lot of code, and on the other hand you have those who have to review that code. I already regularly review code that has "strange" issues, and when I say "why does it do this?" the answer is "the AI did it".

        Of course, one can pay for the AI and then review and refactor the code to make it good, but my experience is that most don't.

        • By guappa 2025-08-13 6:56

          At my work I'm just never reviewing code of certain team members and I let the team leader do it. Their hire, their problem.

          If the code becomes a mess and fixing anything becomes slower… well who cares I'm paid per hour not per task done.

      • By noelwelsh 2025-08-12 8:09

        I agree that access is a problem now, but I think it is one that hardware improvements will solve very quickly. We are a few generations of Strix Halo type hardware away from effortlessly running very good LLMs locally. (It's already possible, but the hardware is about $2000 and the LLMs you can run are good but not very good.) AFAIK AMD have not released the roadmap for Medusa Halo, but the rumours [1] are increased CPU and GPU performance, and increased bandwidth. Another iteration or two of this will make Strix Halo hardware more affordable, and the top-of-the-line models will be beasts for local LLMs.

        [1]: https://www.notebookcheck.net/Powerful-Zen-6-Medusa-Halo-iGP...

      • By block_dagger 2025-08-12 9:52 (2 replies)

        LLMs are quickly becoming cheaper. Soon they will be “cheap as free,” to quote Homestar Runner. Then programming will be solved, no need for meatbags. Enjoy the 2-5 years we have left in this profession.

        • By barrell 2025-08-12 10:00 (1 reply)

          You say that, but subscription prices keep going up. Token price goes down but token count goes up. Companies are burning billions to bring you the existing prices, and multiple hundreds per month is not enough to clear the bar to use these tools.

          I’m personally hoping for a future with free local LLMs, and I do hope the prices go down. I also recognize I can do things a little cheaper each year with the API.

          However, it is far from guaranteed which direction we’re heading in, and I don’t think we’re on track to get close to removing the monetary barrier anytime soon.

          • By frognumber 2025-08-12 10:10 (1 reply)

            My bill for LLMs is going up over time. The more capable, higher-context models dramatically increase my productivity.

            The spend prices most of the developing world out -- a programmer earning $10k per year can't pay for a $200/month Claude Max subscription.

            And it does better than $6k-$10k programmers in Africa, India, and Asia.

            It's the mainframe era all over again, where access to computing is gated by $$$.

            • By achenet 2025-08-12 12:17

              > The spend prices most of the developing world out -- a programmer earning $10k per year can't pay for a $200/month Claude Max subscription.

              No, but a programmer earning $10k per year can probably afford a $200 used ThinkPad, install Linux on it, build code that helps someone, rent a cheap server from a good cloud provider, advertise their new SaaS on HN, and have it start pulling in enough revenue to pay for a $200 Claude Max subscription.

              > It's the mainframe era all over again, where access to computing is gated by $$$.

              It's still the internet era, where access to $$$ is gated by computing skill :)

        • By achenet 2025-08-12 12:14

          Did you read the original article?

          LLM code still needs to be reviewed by actual thinking humans.

      • By miohtama 2025-08-12 4:56 (1 reply)

        One can create a free Google account and use Gemini for free.

        Or think of it this way: it's easy to get a base-level free LLM (Toyota), but one should not expect the top-shelf one for free (Porsche).

        • By ubercow13 2025-08-12 7:17 (1 reply)

          Previously most Porsche development tools were available to everyone though, such as GCC.

          • By ijidak 2025-08-12 7:32

            Software development cost hundreds of dollars in the '90s. My parents bought VB 6 for $600.

            Only in tech are we shocked when things cost money. I don't know that any other industry expects such a reality.

      • By mosselman 2025-08-12 13:12 (2 replies)

        Does anyone have a good recommendation for a Claude Code-like tool that uses locally hosted models?

      • By sbarre 2025-08-12 1:25 (1 reply)

        Eh, back in the day computers were expensive and not everyone could afford one (and I don't mean a library computer you can work on, but one of your own to code and hack on). The ubiquity of computing is not something that's been around forever.

        There have always been costs and barriers for the cutting edge.

        • By horacemorace 2025-08-12 1:58 (1 reply)

          The problem isn’t cost, it’s reproducibility and understanding. If you rely on a service you can’t fully understand to get something done, you’re beholden to the whims of its provider.

          • By sbarre 2025-08-12 11:40

            Sure but that's not what the person I was replying to was talking about, nor what I was talking about.

            Cost of access is absolutely a problem in tech.

            The problem can certainly be multi-faceted though.

      • By dirkc 2025-08-12 8:33

        Maybe local models can address this, but for me the issue is that relying on LLMs for coding introduces gatekeepers.

        > Uh oh. We're getting blocked again and I've heard Anthropic has a reputation for shutting down even paid accounts with very few or no warnings.

        I'm in the slack community where the author shared their experiment with the autonomous startup and what stuck out to me is that they stopped the experiment out of fear of being suspended.

        Something that is fun should not go hand-in-hand with fear of being cut off!

      • By Arisaka1 2025-08-12 4:58

        You made me realize exactly why I love skill-based video games and shun gacha games (especially those with PvP). You swiped to gain power over players who didn't. Yay?

        The knowledge check will also slowly transfer towards the borders of fast iteration and not necessarily knowledge depth. The end goal is to make a commodity out of the myth of the 10x dev, and take more leverage away from the devs.

      • By noobermin 2025-08-12 7:18 (1 reply)

        This is a pro for a lot of the people whom AI people are targeting: idiots with money.

        • By icemelt8 2025-08-12 7:55 (1 reply)

          Be careful: maybe the idiots will be the only ones left with money, and the smart people like you could be homeless.

          • By kubb 2025-08-12 8:39

            If the trend of the last four decades continues, this is the outcome.

    • By georgeburdell 2025-08-11 15:37 (18 replies)

      For me, I can’t get into using AI tools like Claude Code. The furthest I go is chat-style, where I’m mostly in control. I enjoy the actual process of crafting code myself. For similar reasons, I could never be a manager.

      Agents are a boon for extraverts and neurotypical people. If it gets to the point where the industry switches to agents, I’ll probably just find a new career

      • By fsloth 2025-08-11 15:59 (8 replies)

        I strongly disagree agents are for extroverts.

        I do agree it’s definitely a tool category with a unique set of features, and I am not surprised it’s off-putting to some. But its appeal is definitely clear to me as an introvert.

        For me, LLMs are just a computer interface you can program using natural language.

        I think I’m slightly ADD. I love coding _interesting_ things but boring tasks cause extreme discomfort.

        Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

        It’s a great time to be a software engineer!

        • By pron 2025-08-11 16:52 (3 replies)

          > For me, LLMs are just a computer interface you can program using natural language.

          I wish they were, but they're not that yet, because LLMs aren't very good at logical reasoning. So it's more like an attempt to program using natural language. Sometimes it does what you ask, sometimes not.

          I think "programming" implies that the machine will always do what you tell it, whatever the language, or reliably fail and say it can't be done because the "program" is contradictory, lacks sufficient detail, or doesn't have the necessary permissions/technical capabilities. If it only sometimes does what you ask, then it's not quite programming yet.

          > Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

          I wish that, too, were true, and maybe it will be someday soon. But if I need to manually review the agent's output, then it doesn't feel like offloading much aside from the typing. All the same concentration and thought are still required, even for the boring things. If I could at least trust the agent to tell me if it did a good job or is unsure that would have been helpful, but we're not even there yet.

          That's not to say the tools aren't useful, but they're not yet "programming in a natural language" and not yet able to "offload" stuff to.

          • By keeda 2025-08-11 18:10 (4 replies)

            > ... LLMs aren't very good at logical reasoning.

            I'm curious about what experiences led you to that conclusion. IME, LLMs are very good at the type of logical reasoning required for most programming tasks. E.g. I only have to say something like "find the entries with the lowest X and highest Y that have a common Z from these N lists / maps / tables / files / etc." and it spits out mostly correct code instantly. I then review it and for any involved logic, rely on tests (also AI-generated) for correctness, where I find myself reviewing and tweaking the test cases much more than the business logic.

            But then I do all that for all code anyway, including my own. So just starting off with a fully-fleshed out chunk of code, which typically looks like what I'd pictured in my head, is a huge load off my cognitive shoulders.

            • By pron 2025-08-11 19:42 (2 replies)

              The experience was that I once asked an LLM to write a simple function and it produced something very wrong that nothing with good reasoning abilities should ever do. Of course, a drunk or very tired human could have made the same mistake, but they would have at least told me that they were impaired and unsure of their work.

              I agree that most of the time it does most simple tasks mostly right, but that's not good enough to truly "offload" my mental effort. Again, I'm not saying it's not useful, but more than working with a junior developer it's like working with a junior developer who may or may not be drunk or tired and doesn't tell you.

              But mostly my point is that LLMs seem to do logical reasoning worse than other things they do better, such as generating prose or summarising a document. Of course, even then you can't trust them yet.

              > But then I do all that for all code anyway, including my own

              I don't, at least not constantly. I review other people's code only towards the very end of a project, and in between I trust that they tell me about any pertinent challenge or insight, precisely so that I can focus on other things unless they draw my attention to something I need to think about.

              I still think that working with a coding assistant is interesting and even exciting, but the experience of not being able to trust anything, for me at least, is unlike working with another person or with a tool and doesn't yet allow me to focus on other things. Maybe with more practice I could learn to work with something I can't trust at all.

              • By darkerside 2025-08-12 10:13

                > working with a junior developer who may or may not be drunk or tired and doesn't tell you.

                Bad news, friend.

                Overall though, I think you're right. It's a lot like working with people. The things you might be missing are that you can get better at this with practice, and that once you are multiplexing multiple Claudes, you can become hyper efficient. These are things I'm looking into now.

                Do I know these for a fact? Not yet. But, like any tool, I'm sure that the investment won't pay off right away.

              • By kenjackson 2025-08-11 23:00 (2 replies)

                What was the simple function?

                • By throwaway31131 2025-08-11 23:33

                  I’m not sure what their simple function was but I tried to use Claude to recreate C++ code to implement the algorithms in this paper as practice for me in LLM use and it didn’t go well. But I’ll be the first to admit that I’m probably holding it wrong.

                  https://users.cs.duke.edu/~reif/paper/chen/graph/graph.pdf

                • By pron 2025-08-11 23:56 (2 replies)

                  Can't remember, but it was something very basic - a 10/15-line routine that a first-year student would write in 3 minutes if they knew the relevant API. The reason I asked the model in the first place is because I didn't know the API. If memory serves, the model inverted an if or a loop condition.

                  • By p1esk 2025-08-12 1:05 (2 replies)

                    Did you use one of the latest frontier reasoning models? If not, how is your experience relevant?

                    • By totallykvothe 2025-08-12 4:36 (2 replies)

                      In what world is this an appropriate thing to say to someone?

                      • By p1esk 2025-08-12 18:04

                        In the world where you do not claim that LLMs suck today based on your attempt to use some shitty model three years ago.

                      • By guappa 2025-08-12 7:10

                        In the creed of "AI is perfect, if you claim otherwise you're broken" that so many here embrace.

                  • By jama211 2025-08-12 5:34 (1 reply)

                    So you tried it once and then gave up?

                    • By pron 2025-08-12 10:49 (1 reply)

                      I didn't give up, I just know that I can only use a model when I have the patience to work with something I can't trust at all on anything. So that's what I do.

                      • By jama211 2025-08-12 19:32

                        Sounds like the spirit of my question remains intact

            • By foobarbecue 2025-08-12 1:49 (2 replies)

              In your example, you didn't ask the LLM to do any logic. You asked it to translate your logic into code.

              Asking an LLM to do logic would be saying something like: "I have a row of a million light switches. They all start off. I start at the beginning and flip on every fourth one. Then I flip on every eighth one, then sixteen, and all the powers of two until I'm over a million. Now I do the same for the powers of three, then four, then five, and so on. How many light switches are on at the end? Do not use any external coding tools for this; use your own reasoning."

              Note that the prompt itself is intentionally ambiguous -- a human getting this question should say "I don't understand why you started with every fourth instead of every second. Are you skipping the first integer of every power series or just when the exponent is two?"

              When I asked GPT5 to do it, it didn't care about that; instead it complimented me on my "crisp statement of the problem," roughly described a similar problem, and gave a believable but incorrect answer: 270,961.

              I then asked it to write Python code to simulate my question. It got the code correct, and said "If you run this, you'll see it matches the 270,961 result I gave earlier." Except that was a hallucination.

              Running the code actually produced 252,711.

              I guess it went with 270,961 because that was a lexically similar answer to some lexically similar problems in the training data.
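              As a hedged sketch, the puzzle under one plausible reading (each base k starts at k², and "flip" toggles a switch's state) can be simulated in a few lines. This is an illustrative version, not the code GPT5 generated, so a different reading of the prompt will give a different count:

              ```javascript
              // Simulate the light-switch puzzle for switches 1..n.
              // Assumption: "flip" toggles, and base k contributes powers k^2, k^3, ... <= n.
              function countSwitchesOn(n) {
                const on = new Array(n + 1).fill(false);
                for (let k = 2; k * k <= n; k++) {
                  for (let p = k * k; p <= n; p *= k) {
                    // toggle every p-th switch
                    for (let i = p; i <= n; i += p) on[i] = !on[i];
                  }
                }
                return on.filter(Boolean).length; // switches left on
              }
              ```

              Swapping the toggle for a plain `on[i] = true` models the other reading of "flip on", and starting each base at k instead of k² changes the answer again, which is exactly the ambiguity in the prompt.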

              • By cma 2025-08-12 2:09 (2 replies)

                I would posit that most randomly selected AGIs (people) you ask this of, with no tools allowed beyond pencil and paper, won't get closer on average. This doesn't prove people don't use logic.

                And allowing python shells for both I think the randomly selected human would do worse on average.

                And I also think e.g. Google's IMO model would be able to pass but I have no way to verify that.

                • By pron 2025-08-12 10:56 (1 reply)

                  At work, you can trust people to either get the right answer or tell you they may not have the right answer. If someone is not trustworthy, you don't work with them again. The experience of trying to work with something that is completely not trustworthy on all fronts is novel and entirely dissimilar to working with either people or tools.

                  • By keeda 2025-08-12 21:16

                    People themselves don't know when they are wrong, and that is why high-functioning organizations have all sorts of guardrails in place. Trivial example, code reviews. Now, code reviews are multi-purpose, and their primary benefit is not just catching bugs, but they do catch bugs pretty often (there are actual studies about this.)

                    So my experience in working with AI is actually much more similar to working with people, except I have to correct the AI much less frequently.

                    I always say, AI is technology that behaves like people, and so the trick to working with it effectively is to approach it like working with a colleague, with all their specific quirks and skillsets, rather than a tool.

                • By foobarbecue 2025-08-12 2:19 (1 reply)

                  That's true about people with pencils and paper. But this is a computer, and my point is that with LLMs, you have a computer program that doesn't compute or do logic -- and for some reason people act like it does.

                  I mean, this is a super basic Python task that I could write in 2 minutes. The naive implementation is a couple of basic for loops. I wouldn't hire a junior dev who couldn't write it.

                  My point here is that for someone who can program, it's easier and better to write this in python than to write it in english and have an LLM translate it to python (and LLMs themselves are incapable of logic or reasoning, so the only option is for them to translate it to real code).

                  • By cma 2025-08-12 3:25

                    I added this line to the end of your prompt so it knew it could clarify and didn't need an immediate answer:

                    "You can ask any clarifying questions first before starting on the problem."

                    Here's what it asked (Claude Opus, web search off so it didn't find your post):

                    > Looking at this problem, I want to make sure I understand the process correctly before diving into the solution. Let me clarify a few things:

                    > When you say "flip on every fourth one" for the powers of two, do you mean:

                    > We flip switches at positions 4, 8, 12, 16, 20, 24, ... (every 4th position) Then flip switches at positions 8, 16, 24, 32, 40, 48, ... (every 8th position) And so on?

                    > When you move to "powers of three", do you mean:

                    > Start with every 3rd switch (3, 6, 9, 12, ...) Then every 9th switch (9, 18, 27, 36, ...) Then every 27th switch, etc.?

                    > Or does "powers of three" mean we start with 3² = 9 as the first increment, similar to how we started with 2² = 4 for powers of two?

                    > When you say "then four, then five, and so on" - do you mean we continue this process for all integers k ≥ 2, using powers of k as our increments?

                    > The final state of each switch depends on whether it gets flipped an odd or even number of times total, so I want to make sure I understand the flipping pattern correctly before calculating the answer.

              • By keeda 2025-08-12 21:22 (1 reply)

                That may be the case, but how often do you have to provide these kinds of puzzles as programming tasks? And even then, by your own example, it did produce correct code, so it did "reason" through the problem statement!

                • By foobarbecue 2025-08-13 17:55 (1 reply)

                  The point I'm making is that translating from english to python does not involve logical reasoning or any form of problem solving. You essentially replace english phrases with their python equivalents; it's a fuzzy version of looking things up in a dictionary.

                  It can translate, but it cannot reason or calculate. The python can calculate.

                  • By keeda 2025-08-13 19:14

                    I've seen countless examples where it does do a lot of logical reasoning from a very high-level requirement. I only gave that example because that is the more common form of programming logic required, but there is a huge range of logical and coding tasks it handles seamlessly. It figures out the problem, the high-level approach, the appropriate algorithms, any intermediate book-keeping data structures required, any edge cases, what functions and classes from the contextual code matters, and how it should integrate with the rest of the codebase.

                    Here is one recent exchange (some proprietary details redacted) with Claude Sonnet 3.7 via the Cody VSCode Extension (RIP):

                    ---

                    Me: At this point, "segments" represents a list of bounding boxes that can overlap. Write a function to coalesce only those boxes that are substantially (i.e. within some tolerance) horizontally aligned along one or both edges or their centers, either: 1) overlapping completely or partially; or 2) very closely spaced like the words of a sentence should be (you can assume the threshold for spacing could be a reasonable fraction of the height of the box).

                    Claude: I'll create a function to coalesce horizontally aligned boxes based on your requirements. This function will identify boxes that are either overlapping or closely spaced and horizontally aligned (by top edge, bottom edge, or vertical center).

                    ```javascript
                    function coalesceHorizontallyAlignedSegments(segments, options = {}) {
                      // Default options
                      const {
                        verticalAlignmentTolerance = 0.1,  // 10% tolerance for vertical alignment
                        horizontalSpacingThreshold = 0.5,  // Max horizontal spacing as a fraction of box height
                        overlapThreshold = 0.2             // Minimum overlap required (20% of the smaller box)
                      } = options;

                      if (!segments || segments.length <= 1) {
                        return segments;
                      }
                    
                      // Sort segments by x position (left to right)
                      const sortedSegments = [...segments].sort((a, b) => a.x - b.x);
                    
                      // Track which segments have been merged
                      const merged = new Array(sortedSegments.length).fill(false);
                      const mergedSegments = [];
                    
                      for (let i = 0; i < sortedSegments.length; i++) {
                        // Skip if this segment has already been merged
                        if (merged[i]) continue;
                    
                        const currentSegment = sortedSegments[i];
                        let mergedSegment = { ...currentSegment };
                        merged[i] = true;
                    
                        // Check for potential merges with subsequent segments
                        let didMerge = true;
                    
                        while (didMerge) {
                          didMerge = false;
                    
                          for (let j = 0; j < sortedSegments.length; j++) {
                            if (merged[j]) continue;
                    
                            const nextSegment = sortedSegments[j];
                    
                            // Check if horizontally aligned (top, bottom, or center)
                            const topAligned = Math.abs(mergedSegment.y - nextSegment.y) <=
                                               verticalAlignmentTolerance * Math.min(mergedSegment.height, nextSegment.height);
                    
                            const bottomAligned = Math.abs((mergedSegment.y + mergedSegment.height) -
                                                  (nextSegment.y + nextSegment.height)) <=
                                                  verticalAlignmentTolerance * Math.min(mergedSegment.height, nextSegment.height);
                    
                            const centerAligned = Math.abs((mergedSegment.y + mergedSegment.height/2) -
                                                  (nextSegment.y + nextSegment.height/2)) <=
                                                  verticalAlignmentTolerance * Math.min(mergedSegment.height, nextSegment.height);
                    
                            const isVerticallyAligned = topAligned || bottomAligned || centerAligned;
                    
                            if (!isVerticallyAligned) continue;
                    
                            // Check for overlap or close spacing
                            const mergedRight = mergedSegment.x + mergedSegment.width;
                            const nextLeft = nextSegment.x;
                    
                            // Calculate horizontal spacing
                            const horizontalSpacing = nextLeft - mergedRight;
                            const maxAllowedSpacing = horizontalSpacingThreshold * Math.min(mergedSegment.height, nextSegment.height);
                    
                            // Check for overlap
                            const isOverlapping = mergedRight >= nextLeft;
                    
                            // Calculate overlap percentage if overlapping
                            let hasSignificantOverlap = false;
                            if (isOverlapping) {
                              const overlapWidth = Math.min(mergedRight, nextSegment.x + nextSegment.width) -
                                                  Math.max(mergedSegment.x, nextSegment.x);
                              const smallerWidth = Math.min(mergedSegment.width, nextSegment.width);
                              hasSignificantOverlap = overlapWidth / smallerWidth >= overlapThreshold;
                            }
                    
                            // Merge if overlapping significantly or closely spaced
                            if (hasSignificantOverlap || (horizontalSpacing >= 0 && horizontalSpacing <= maxAllowedSpacing)) {
                              // Create a new bounding box that encompasses both segments
                              const newX = Math.min(mergedSegment.x, nextSegment.x);
                              const newY = Math.min(mergedSegment.y, nextSegment.y);
                              const newRight = Math.max(mergedSegment.x + mergedSegment.width,
                                                       nextSegment.x + nextSegment.width);
                              const newBottom = Math.max(mergedSegment.y + mergedSegment.height,
                                                        nextSegment.y + nextSegment.height);
                    
                              mergedSegment = {
                                x: newX,
                                y: newY,
                                width: newRight - newX,
                                height: newBottom - newY,
                                // Preserve the virtual path from the leftmost segment
                                imagePath: mergedSegment.x <= nextSegment.x ? mergedSegment.imagePath : nextSegment.imagePath
                              };
                    
                              merged[j] = true;
                              didMerge = true;
                              break;
                            }
                          }
                        }
                    
                        // Add the topLeft and bottomRight properties for consistency
                        mergedSegment.topLeft = { x: mergedSegment.x, y: mergedSegment.y };
                        mergedSegment.bottomRight = {
                          x: mergedSegment.x + mergedSegment.width,
                          y: mergedSegment.y + mergedSegment.height
                        };
                    
                        mergedSegments.push(mergedSegment);
                      }
                    
                      return mergedSegments;
                    }
                    ```

                    This function works as follows:

                    1. It sorts segments from left to right to process them in reading order
                    2. For each segment, it looks for potential merges with other segments based on:
                       - Vertical alignment (top edges, bottom edges, or centers within tolerance)
                       - Either significant overlap (controlled by overlapThreshold), or
                       - Close horizontal spacing (controlled by horizontalSpacingThreshold as a fraction of box height)
                    3. When segments are merged, it creates a new bounding box that encompasses both
                    4. It continues merging until no more merges are possible

                    You can integrate this function into your segmentation pipeline by adding it after the segment extraction:

                    ```javascript
                    function segmentImage(ctx, imageData, options = {}) {
                      // ... existing code ...

                      const segments = boundingRects.map(toSegment);
                    
                      // Coalesce horizontally aligned segments
                      const coalescedSegments = coalesceHorizontallyAlignedSegments(segments, options.coalesceOptions);
                    
                      return { segments: coalescedSegments, contourTrees };
                    } ```

                    Me: Do not write any code yet, but would it help if the segments were first grouped into approximate "rows" and then coalesced, rather than simply sorted by the x-coordinate?

                    Claude: Yes, grouping segments into approximate "rows" before coalescing would be significantly more effective than simply sorting by x-coordinate. Here's why:

                    1. *Better handling of multi-column layouts*: When a page has multiple columns of text or elements, sorting purely by x-coordinate would attempt to merge segments across different columns if they happen to be horizontally aligned.

                    2. *Respecting natural reading flow*: Text typically flows in rows, and grouping by rows first would respect this natural organization of content.

                    3. *More accurate merging decisions*: By first establishing which segments belong to the same "row" or line of content, we can make more informed decisions about which segments should be considered for merging.

                    4. *Handling of varying vertical positions*: Text lines on a webpage aren't always perfectly aligned. Pre-grouping into rows would allow us to handle slight variations in vertical positioning within what should be considered the same line.

                    5. *Efficiency*: By limiting the coalescing operations to segments within the same row, we reduce the number of comparisons needed.

                    The approach would work like this:

                    1. Group segments into rows based on vertical overlap or proximity
                    2. Within each row, sort segments by x-coordinate
                    3. Apply the coalescing logic only to segments within the same row

                    This approach would better preserve the semantic structure of the content while still merging segments that represent parts of the same logical unit (like words in a sentence).

                    Looking at the codebase, this approach would align well with how visual content is typically structured and would produce more meaningful segment groupings for downstream processing.
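The row-first grouping Claude describes in prose could look something like this (a sketch with a hypothetical `groupIntoRows` helper; it assumes the same segment objects with x/y/width/height fields):

```javascript
// Sketch: group segments into approximate rows by vertical overlap,
// then sort each row left-to-right so coalescing runs per row only.
function groupIntoRows(segments, tolerance = 10) {
  const rows = [];
  // Process top-to-bottom so each segment joins the first row it overlaps
  const sorted = [...segments].sort((a, b) => a.y - b.y);
  for (const seg of sorted) {
    const row = rows.find(r =>
      seg.y < r.bottom + tolerance && seg.y + seg.height > r.top - tolerance);
    if (row) {
      row.segments.push(seg);
      row.top = Math.min(row.top, seg.y);
      row.bottom = Math.max(row.bottom, seg.y + seg.height);
    } else {
      rows.push({ top: seg.y, bottom: seg.y + seg.height, segments: [seg] });
    }
  }
  return rows.map(r => r.segments.sort((a, b) => a.x - b.x));
}
```

Coalescing then only compares segments within the same row, so segments from different lines that happen to share an x-range are never candidates for merging.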

                    ---

                    And then it proceeded to update the code as discussed. Sure this is not a very novel problem and it required a bit of back and forth, but look at the ratio of prompt to code. This exchange took a couple of minutes; I'd estimate it would have taken me an hour to get that code down with all edge cases handled. Look at the exposition, the quality of code, the choice to use optional parameters for tolerances, and edge-case handling.

                    It's very, very hard for me to not see this as reasoning. I mean, how is this not mind-blowing?

            • By __MatrixMan__ 2025-08-12 2:08 (1 reply)

              I'm not who you're replying to but I had a scenario where I needed to notice that a command had completed (exit code received) but keep listening for any output that was still buffered and only stop processing tokens after it had been quiet for a little bit.

              Trying to get Claude to do this without introducing a deadlock and without exiting too early and leaving valuable output in the pipe was hellish.

              It's very good at some kinds of reasoning and very bad at others. There's not much it's mediocre at.

              • By keeda 2025-08-12 21:26

                > It's very good at some kinds of reasoning and very bad at others. There's not much it's mediocre at.

                Yes, one of the terms for this phenomenon is "jagged frontier" from here:

                https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the...

                This is also the main reason people have such diverging experiences and the discussion around AI is so bimodal in nature.

          • By yomismoaqui 2025-08-11 17:19 (1 reply)

            You can view Claude Code as a non-deterministic compiler: you put in English and get functioning code out the other end.

            The non-determinism is not as much of a problem because you are reading over the results and validating that what it creates matches what you told it to do.

            I'm not talking about vibe-coding here, I'm grabbing the steering wheel with both hands because this car allows me to go faster than if I was driving myself, but sometimes you have to steer or brake. And the analogy favors Claude Code here because you don't have to react in milliseconds while programming.

            TL;DR: if you do the commit you are responsible for the code it contains.

            • By pron 2025-08-11 17:48 (2 replies)

              Sure, and that may be valuable, but it's neither "programming" nor "offloading mental effort" (at least not much).

              Some have compared it to working with a very junior programmer. I haven't done that in a long while, but when I did, it didn't really feel like I was "offloading" much, and I could still trust even the most junior programmer to tell me whether the job was done well or not (and of any difficulties they encountered or insight they've learnt) much more than I can an agent, at least today.

              Trust is something we have, for the most part, when we work with either other people or with tools. Working without (or with little) trust is something quite novel. Personally, I don't mind that an agent can't accomplish many tasks; I mind a great deal that I can't trust it to tell me whether it was able to do what I asked or not.

              • By fsloth 2025-08-12 10:29

                > "it's neither 'programming'"

                Sure it is.

                The modern ecosystem is sadly full of APIs like WPF on Windows that are both verbose and configuration-heavy. Now, some people may be able to internalize XAML with little effort, but not all of us - and then you basically move forward iteratively, looking for code examples, trying this or that ... basically random-walking towards something usable.

                Or you use an agentic LLM and it does this peeking and poking for you, and with decades-old APIs like WPF it likely has enough context to do the thing you asked far more competently than you could train yourself to program WPF in a few days.

                Of course, if in the context of this example WPF were your main duty, you _would_ learn the ins and outs of it.

                In quite a few jobs, a task like this may in fact not even be among your top ten duties, but you still need to do it.

                In these sorts of situations an LLM is really nice. At worst it provides a good first guess at how to do something.

                If your context is something like radiology treatment then no, don't use an LLM!

                But there are thousands of miserable, non-critical but necessary components in production for which an LLM is just fine.

              • By gf000 2025-08-11 19:21

                There is probably a significant factor of what domain you work in, that might be part of the reason why you have different experiences.

                I don't know your current domain, but stuff like Loom requires very complex reasoning capabilities, most of which lives outside the actual codebase itself. Business code on the other hand often has a more direct mapping from idea to code, so LLMs might show more promise there.

          • By nostrademons 2025-08-12 2:43 (1 reply)

            So here's a transcript of a recent Claude Code interaction I had, trying to build an MP3 organizer that relies on heuristics and fuzzy matching to make sense out of all the random MP3s I downloaded in the 90s. This all happened in-between prompts: I gave it a correct example and an incorrect example and asked it why it was giving the incorrect example.

            ● Let me trace through what happens with the "Jordan Rudess" example:

              1. Directory: Jordan Rudess/
              2. file_rel_path would be Path("Jordan Rudess") (after removing MP3 Disk prefix)
              3. If Jordan Rudess matches in path component 0, scope_path would be Path("Jordan Rudess")
              4. _path_is_in_scope checks if "Jordan Rudess".startswith("Jordan Rudess") → True
            
              This looks correct. Let me check the inference logic:
            
            ● Read(mp3_organizer_v2.py) ⎿ Read 15 lines (ctrl+r to expand)

            ● The inference priority looks correct - path_component_match is indeed the highest priority. But I think I see the issue now. Let me look at the actual logic flow in the main organize method:

            ● Read(mp3_organizer_v2.py) ⎿ Read 10 lines (ctrl+r to expand)

            ● The flow looks right. Let me examine the actual problem more carefully. Looking back at the _apply_path_component_matching method, I think there might be an issue with when the matching happens. Let me check the logic:

            ● Update Todos
              ⎿ Check _path_is_in_scope logic for correctness
              ⎿ Verify artist inference priority logic works with path_component_match
              ⎿ Test and fix any remaining issues with artist propagation

            ● I think I see the issue now. The problem is that the current code only looks for matches in files that already have file_id3 tags, but it needs to propagate those matches to all files in the directory. However, the current implementation is still only applying matches to files that have matching candidates, not ALL files. Let me check the exact logic:

            ● Read(mp3_organizer_v2.py) ⎿ Read 8 lines (ctrl+r to expand)

            ● Actually, looking at this more carefully, the logic should work. The matched_artists dictionary contains the matches found from ANY file that had a file_id3 tag matching a path component, and then it applies that match to ALL files in the file_infos list via the loop at line 737.

              Wait, let me understand the scoping issue better. Let me check what file_rel_path and scope_path would actually be in your example:
            
            ● Read(mp3_organizer_v2.py) ⎿ Read 5 lines (ctrl+r to expand)

            ● Ah! I think I see a potential issue. The file_rel_path is calculated as the directory path, not the individual file path. So for all files in the same directory, file_rel_path will be the same. This should actually work correctly.

              But let me double-check the scope matching logic. If the directory is "Jordan Rudess" and the scope_path is also "Jordan Rudess", then _path_is_in_scope should return
              True.
            
              Let me debug this by adding some debug output to see what's actually happening. But first, let me look at a potential issue: maybe the match isn't being found in the first place?
            
            It's doing a pretty complex chain of step-by-step reasoning here, looking in detail at the code. It ended up printing out more debug info and having me re-run the script, paste in the debug output, then add even more debug info and run it again. But it did eventually get the bug, which was non-trivial to identify (it was an aliasing problem: Claude was mutating state on a list that was shared with all the other files in the directory).
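The class of bug it found is easy to reproduce; here is a minimal JavaScript illustration of that kind of shared-list aliasing (the organizer itself is Python, and these names are made up for the example):

```javascript
// Minimal illustration of the aliasing bug: one array object is shared by
// every file in a directory, so mutating it "for one file" mutates all.
const sharedTags = ['unknown'];
const buggyFiles = [
  { name: 'a.mp3', tags: sharedTags },
  { name: 'b.mp3', tags: sharedTags }, // same array object, not a copy
];
buggyFiles[0].tags.push('Jordan Rudess'); // intended for a.mp3 only...
// ...but b.mp3 sees it too, because both entries alias the same array

// Fix: give each file its own shallow copy of the list
const fixedFiles = ['a.mp3', 'b.mp3'].map(name => ({ name, tags: [...sharedTags] }));
fixedFiles[0].tags.push('Liquid Tension');
// fixedFiles[1].tags is unaffected by the push above
```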

            • By Applejinx 2025-08-12 10:20 (1 reply)

              Huh. Alternate explanation: there's a layer of indirection, drawing upon the unthinkable size of the source data, so rather than 'issue forth tokens as if there is a person answering a question', you've got 'issue forth tokens as if there is a person being challenged to talk about their process', something that's also in the training data but in different contexts.

              I'm not sure statements of 'aha, I see it now!' are meaningful in this context. Surely this is just the em-dash of 'issue tokens to have the user react like you're thinking'?

              • By nostrademons 2025-08-12 21:13

                I wonder if something else is going on, and perhaps Claude is using the LLM to identify the likely culprits within the codebase, sending the code around them to execute with an actual Python interpreter on their servers, feeding both the code and the result as the context window to another LLM query with a system prompt something like "What is this code doing, when it runs on this input and this output?", feeding the result of that back to the user, and then repeating as long as the overall bug remains unsolved. I've found that feedback is a very effective technique with LLMs, asking them to extract some data, testing that data through out-of-band mechanisms, then feeding the test results and the original context back into the LLM to explain its reasoning and why it got the result. The attention mechanisms in the transformer model function very well when they're prompted with specifics and asked to explain their reasoning.

                Only an Anthropic engineer would know for sure. I'm pretty sure that it was making multiple queries on my behalf during the chat transcript - each "Read ... mp3_organizer_v2.py" is a separate network round-trip.

        • By thewebguyd 2025-08-11 17:29 (1 reply)

          > For me LLM:s are just a computer interface you can program using natural language.

          Sort of. You still can't get reliable output for the same input. For example, I was toying with using ChatGPT with some Siri Shortcuts on my iPhone. I do photography on the side, and finding good lighting times for photoshoots is a use case I hit a lot, so I made a shortcut that sends my location to the API along with a prompt asking for today's sunset time, total amount of daylight, and golden hour times.

          Sometimes it works, sometimes it says "I don't have specific golden hour times, but you can find those on the web" or a useless generic "Golden hour is typically 1 hour before sunset but can vary with location and season"

          Doesn't feel like programming to me, as I can't get reproducible output.

          I could just use the LLM to write some API calling script from some service that has that data, but then why bother with that middle man step.

          I like LLMs, I think they are useful, I use them everyday but what I want is a way to get consistent, reproducible output for any given input/prompt.

          • By stirfish 2025-08-11 19:35

            For things where I don't want creativity, I tell it to write a script.

            For example, "write a comprehensive spec for a script that takes in the date and a location and computes when golden hour is." | "Implement this spec"

            That variability is nice when you want some creativity, e.g. "write a beautiful, interactive boids simulation as a single file in html, css, and JavaScript."

            Words like "beautiful" and "interactive" are open to interpretation, and I've been happy with the different ways they are interpreted.
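For the golden-hour case specifically, the deterministic script the LLM writes can even be dependency-free. A rough sketch using the simplified sunrise equation (this is an illustrative approximation, not a spec-quality implementation: it ignores the equation of time and atmospheric refraction, so expect errors on the order of 15 minutes):

```javascript
// Rough, dependency-free golden-hour estimator: the kind of deterministic
// script you can have the LLM write once instead of querying it every time.
function daylight(date, latDeg) {
  const rad = Math.PI / 180;
  const yearStart = Date.UTC(date.getUTCFullYear(), 0, 0); // Dec 31 of prior year
  const dayOfYear = Math.floor((date.getTime() - yearStart) / 86400000);

  // Approximate solar declination (degrees)
  const decl = -23.44 * Math.cos(rad * (360 / 365) * (dayOfYear + 10));

  const cosH = -Math.tan(rad * latDeg) * Math.tan(rad * decl);
  if (cosH <= -1) return { hours: 24 }; // polar day
  if (cosH >= 1) return { hours: 0 };   // polar night

  const hourAngle = Math.acos(cosH) / rad;      // degrees from solar noon to sunset
  const hours = (2 * hourAngle) / 15;           // the sun moves 15 degrees per hour
  const sunsetLocalSolar = 12 + hourAngle / 15; // sunset in local solar time, hours
  return { hours, sunsetLocalSolar, goldenHourStart: sunsetLocalSolar - 1 };
}
```

For the June solstice at 60°N this reports roughly 18.5 hours of daylight; converting local solar time to clock time additionally needs the longitude and timezone offset, omitted here.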

        • By klipklop 2025-08-11 17:29 (4 replies)

          > I think I'm slightly ADD. I love coding _interesting_ things but boring tasks cause extreme discomfort.
          > Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

          I agree, and I feel that having LLMs do boilerplate-type stuff is fantastic for ADD people. The dopamine hit you get from making tremendous progress before you get utterly bored is nice. The thing ADD/ADHD people are the WORST at is finishing projects; LLMs will help them once the thrill of prototyping a green-field project is over.

          • By 80hd 2025-08-11 18:07 (1 reply)

            Seconding this. My work has had the same problem - by the time I've got things all hooked up, figured out the complicated stuff - my brain (and body) clock out and I have to drag myself through hell to get to 100%. Even with ADHD stimulant medication. It didn't make it emotionally easier, just _possible_ lol.

            LLMs, particularly Claude 4 and now GPT-5 are fantastic at working through these todo lists of tiny details. Perfectionism + ADHD not a fun combo, but it's way more bearable. It will only get better.

            We have a huge moat in front of us of ever-more interesting tasks as LLMs race to pick up the pieces. I've never been more excited about the future of tech

            • By r_lee 2025-08-12 9:08

              Same here, especially for making bash scripts or lots of if this if that with logging type stuff, error handling etc..

              Oh, and also, from what I know, ADHD and perfectionism are a very common combination. I'm not sure if everyone has that, but I've heard it's the case for many with ADD. Same with "standards" being extremely high for everything.

          • By whartung 2025-08-11 18:11

            I'm kind of in this cohort. While in the groove, yeah, things fly, but inevitably my interest wanes. Either something is too tedious, something is too hard (or just a lot of work), or something shinier shows up.

            Bunch of 80% projects with, as you mentioned, the interesting parts finished (sorta -- you see the light at the end of the tunnel, it's bright, you just don't bother finishing the journey).

            However, at the same time, there's conflict.

            Consider (one of) my current projects, I did the whole back end. I had ChatGPT help me stand up a web front end for it. I am not a "web person". GUIs and what not are a REAL struggle for me because on the one hand, I don't care how things look, but, on the other, "boy that sure looks better". But getting from "functional" to "looks better" is a bottomless chasm of yak shaving, bike shedding improvements. I'm even bad at copying styles.

            My initial UI was time invested in getting it to work, ugly as it was, with guidance from ChatGPT. Which means it gave me ways to do things, but mostly I coded up the actual work -- even if that was blindly typing it in vs. just raw cut and paste. I understood how things were working, what it was doing, etc.

            But then, I just got tired of it, and "this needs to be Better". So, I grabbed Claude and let it have its way.

            And it's better! It certainly looks better and has more features. It's head and shoulders better.

            Claude wrote 2,000-3,000 lines of JavaScript in, like, 45 minutes. It was very fast, very responsive. One thing Claude knows is boilerplate JS web stuff. And the code looks OK to me. Imperfect, but absolutely functional.

            But I have zero investment in the code. No "ownership", certainly no pride. You know that little hit you get when you get Something Right, and it Works? None of that. It's amazing, it's useful, it's just not mine. And that's really weird.

            I've been striving to finish projects, and, yeah, for me that's really hard. There is just SO MUCH necessary to ship. AI may be able to help polish stuff up; we'll see as I move forward. If nothing else it may help gather up lists of stuff I missed.

          • By brailsafe 2025-08-11 22:02

            Ironically, I find greenfield projects the least stimulating and the most rote, aside from thinking about system design.

            I've always much preferred figuring out how to improve or build on existing messy systems and codebases, which is certainly aided by LLMs for big refactoring type stuff, but to be successful at it requires thinking about how some component of a system is already used and the complexity of that. Lots of edge cases and nuances, people problems, relative conservativeness.

          • By skc 2025-08-12 8:29

            Looks like the definition of boilerplate will continue to shift up the chain

        • By lsaferite 2025-08-11 23:11

          I find Claude great at all of the boilerplate needed to get testing in place. It's also pretty good at divining test cases to lock in the current behavior, even if it's buggy. I use Claude as a first pass on tests, then I run through each test case myself to make sure it's meaningful. I've let it loose on the code-coverage loop as well, so it can drill in and get those uncommon lines covered. I still don't have a good process for path coverage, but I'm not sure how easy that is in Go, as I haven't looked into it much yet.

          I'm with you 100% on the boring stuff. It's generally good at the boring stuff *because* it's boring and well-trod.

        • By WhyOhWhyQ 2025-08-11 17:45

          It's interesting that every task in the world is boring to somebody, which means that soon nothing in the world will be done by someone interested in it, because somebody else will gladly shotgun it with an AI tool.

        • By amelius 2025-08-11 23:01 (1 reply)

          Last week there was this post about flow state, and it pretty much explains the issue:

          https://news.ycombinator.com/item?id=44811457

          • By vlan0 2025-08-11 23:17

            What is described in that post is a skill issue. The attention issues it speaks about can be worked through, because attention is a skill like anything else. And if one does not practice it, one becomes a victim to it.

        • By sleepybrett 2025-08-11 18:33

          Programming implies that it's going to do what I say. I wish it did.

        • By kiitos 2025-08-11 18:46

          > For me LLM:s are just a computer interface you can program using natural language. ... boring tasks cause extreme discomfort ... Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

          The problem with this perspective is that when you try to offload exactly the same boring task(s) to exactly the same LLM, the results you get back are never even close to the same. This work you're offloading via natural-language prompting is not programming in any meaningful sense.

          Many people don't care about this non-determinism. Some (1) because they don't have enough knowledge to identify, much less evaluate, the consequent problems; others (2) because they're happy to deal with those problems, under the belief that they are a cost worth the net benefit provided by the LLM.

          And there are also many people who do care about this non-determinism, and aren't willing to accept the consequent problems.

          Bluntly, I don't think that anyone in group (1) can call themselves a software engineer.

      • By filoleg 2025-08-11 17:57

        > Agents are a boon for extraverts and neurotypical people.

        This sounds like a wild generalization.

        I am in neither of those two groups, and I’ve been finding tools like Claude Code becoming increasingly more useful over time.

        Made me much more optimistic about the direction of AI development in general too. Because with each iteration and new version it isn’t getting anywhere closer to replacing me or my colleagues, but it is becoming more and more useful and helpful to my workflow.

        And I am not one of those people who are into “prompt engineering” or typing novels into the AI chatbox. My entire interaction is typically short 2-3 sentences “do this and that, make sure that XYZ is ABC”, attach the files that are relevant, let it do its thing, and then manual checks/adjustments. Saves me a boatload of work tbh, as I enjoy the debugging/fixing/“getting the nuanced details right” aspect of writing code (and am pretty decent at it, I think), but absolutely dread starting from a brand new empty file.

      • By Terretta 2025-08-11 16:03 (1 reply)

        > I can’t get into using AI tools like Claude Code. As far as I go is chat style where I’m mostly in control.

        Try aider.chat (it's in the name), but specifically start with "ask" mode then dip a toe into "architect" mode, not "code" which is where Claude Code and the "vibe" nonsense is.

        Let aider.chat use Opus 4.1 or GPT-5 for thinking, with no limit on reasoning tokens and --reasoning-effort high.

        > agents are a boon for extraverts and neurotypical people.

        On the contrary, I think the non-vibe tools are force multipliers for those with an ability to communicate so precisely they find “extraverts and neurotypical people” confounding when attempting to specify engineering work.

        I'd put both aider.chat and Claude Code in the non-vibe class if you use them Socratically.

        • By dionian 2025-08-11 18:02 (1 reply)

          thanks for this, going to try it out - I need to use a paid API and not my Claude Max or GPT Pro subscription, right?

          • By victorbjorklund 2025-08-11 18:55 (1 reply)

            Aider actually has a mode called "copy-paste" where it gives you the context to paste into an LLM chat, and then you copy-paste the reply back into aider.

      • By wredcoll 2025-08-11 21:30

        > Agents are a boon for extraverts and neurotypical people.

        Please stop with this kind of thing. It isn't true, it doesn't make sense and it doesn't help anyone.

      • By taftster 2025-08-11 21:16

        For me (an introvert), I have found great value in these tools. Normally, I kind of talk to myself about a problem / algorithm / code segment as I'm fleshing it out. I'm not telling myself complete sentences, but there's some sort of logical dialog I am having with myself.

        So I just have to convert that conversation into an AI prompt, basically. It just kind of does the typing for the construct already in my head. The trick is to just get the words out of my head as prompt input.

        That's honestly not much different than an author writing a book, for example. The story line is in their head, they just have to get it on paper. And that's really the tricky part of writing a novel as much as writing code.

        I therefore don't believe this is an introvert/extrovert thing. There are plenty of book authors who are both. The tools available as AI code agents are really just an advanced form of dictation.

      • By MrDarcy 2025-08-11 23:27

        For what it's worth, I'm neurodivergent, introverted, and have avoided management up to the staff+ level. Claude Code is great; I use it all day, every day now.

      • By kevinsync 2025-08-11 16:57 (1 reply)

        I kind of think we will see some industry attrition as a result of LLM coding and agent usage, simply because the ~vIbEs~ I'm witnessing boil down to quite a lot of resistance (for multiple reasons: stubbornness, ethics, exhaustion from the hype cycle, sticking with what you know, etc)

        The thing is, they're just tools. You can choose to learn them, or not. They aren't going to make or break your career. People will do fine with and without them.

        I do think it's worth learning new tools though, even if you're just a casual observer / conscientious objector -- the world is changing fast, for better or worse, and you'll be better prepared to do anything with a wider breadth of tech skill and experience than with less. And I'm not just talking about writing software for a living, you could go full Uncle Ted and be a farmer or a carpenter or a barista in the middle of nowhere, and you're going to be way better equipped to deal with logistical issues that WILL arise from the very nature of the planet hurtling towards 100% computerization. Inventory management, crop planning, point of sale, marketing, monitoring sensors on your brewery vats, whatever.

        Another thought I had was that introverts often blame their deficits in sales, marketing and customer service on their introversion, but what if you could deploy an agent to either guide, perform, or prompt (the human) with some of those activities? I'd argue that it would be worth the time to kick the tires and see what's possible there.

        It feels like early times still with some of these pie in the sky ideas, but just because it's not turn-key YET doesn't mean it won't be in the near future. Just food for thought!

        • By HardCodedBias 2025-08-11 17:44 (2 replies)

          "ethics"

          I agree with all of your reasons, but this one sticks out. Is this a big issue? Are many people refusing to use LLMs due to (I'm guessing here) perceived copyright issues, power usage, or maybe a belief that automation is unjust?

          • By kevinsync 2025-08-11 18:50

            I can't tell how widespread any of it is, to be honest... mostly because it's anecdata, and impossible to determine if what I'm seeing is just ragebait, or shallow dunks by reply-guys in comment sections, or particularly loud voices on social media that aren't representative of the majority opinion, etc.

            That said, the amount of sort-of-thoughtless, I'm-just-repeating-something-I-heard-but-don't-really-understand outrage towards AI that I'm seeing appears to be increasing -- "how many bottles of water did that slop image waste??", "Clanker"-adjacent memes and commentary (include self-driving + robots in this category), people ranting about broligarchs stealing art, music, movies, books to train their models (oddly often while also performatively parroting party lines about how Spotify rips artists off), all the way to refusing to interact with people on dating apps if they have anything AI in their profiles hahaha (file "AI" alongside men holding fish in their pics, and "crypto" lol)

            It's all chronically-online nonsense that may well just be perception that's artificially amplified by "the algorithm".

            Me, I have no fundamental issue with any of it -- LLMs, like anything else, aren't categorically good or bad. They can be used positively and negatively. Everything we use and consume has hidden downsides and unsavory circumstances.

          • By codingdave 2025-08-12 0:57

            Yes, people are refusing for those reasons. I don't know how many, but I'd say about half of the people I know who do not work in tech are rejecting AI, with ethics being the primary reason. That is all just anecdata, but I suspect the tech bubble around AI is making people in tech underestimate how many people in the world simply are not interested in it being part of their lives.

      • By burnte 2025-08-11 20:32 (1 reply)

        > Agents are a boon for extraverts and neurotypical people.

        As an extrovert, the chances I'll use an AI agent in the next year are zero. Not even a billion to one, but a straight zero. I understand very well how AI works, and as such I have absolutely no trust in it for anything that isn't easy/simple/solved, which means I have virtually no use for generative AI. Search, reference, data transformation, sure. Coding? Not without verification or being able to understand the code.

        I can't even trust Google Maps to give me a reliable route anymore, why would I actually believe some AI model can code? AI tools are helpers, not workers.

        • By ragequittah 2025-08-12 1:18 (1 reply)

          >no trust in it for anything that isn't easy/simple/solved

          I'm not sure what part of programming isn't generally solved thousands of times over for most languages out there. I'm only using it for lowly web development but I can tell you that it can definitely do it at a level that surprises me. It's not just "auto-complete" it's actually able to 'think' over code I've broken or code that I want improved and give me not just one but multiple paths to make it better.

          • By burnte 2025-08-12 14:39

            In the case of programming, the issue isn't so much unsolved problems as in other fields; it's context and understanding. It's great for small chunks of code, but people think you can vibe-code entire interactive applications with no programming knowledge. LLMs simply don't understand, so they can't keep a cohesive idea of what the end goal is. The larger the codebase it needs to work on, the more likely it is to make catastrophic errors, create massive security flaws, or just generate nonfunctional code.

            Programming LLMs will become awesome when we create more narrowly targeted LLMs rather than these "train on everything" models.

      • By sixo 2025-08-11 21:23

        At one point in my life I liked crafting code. I took a break, came back, and I no longer liked it--my thoughts ranged further, and the fine-grained details of implementations were a nuisance rather than ~pleasurable to deal with.

        Whatever you like is probably what you should be doing right now. Nothing wrong with that.

      • By joshred 2025-08-11 17:59

        I think they're fantastic at generating the sort of thing I don't like writing out. For example, a dictionary mapping state names to their abbreviations, or extracting a data dictionary from a pdf so that I can include it with my documentation.

      • By block_dagger 2025-08-12 9:55

        I bet your code sucks in quality and quantity compared to the senior+ engineer who uses the modern tools. My code certainly did even after 20 years of experience, much of that as senior/staff level at well paying companies.

      • By mock-possum 2025-08-12 6:04

        It is effin nutzo that you would try to relate chatting with AI and agentic LLM codegen workflows to the intra/extra vert dichotomy or to neuro a/typicality - you so casually lean way into this absolute spectrum that I don’t even think associates the way you think it does, and it’s honestly kind of unsettling, like - what do you think you know about me, and about My People, that apparently I don’t know??

        If it doesn’t work for you that’s fine, but turning it into some tribalised over-generalization is just… why, why would you do that, who is that kind of thing useful for??

      • By wahnfrieden 2025-08-11 16:23 (1 reply)

        You are leaving a lot of productivity on the table by not parallelizing agents for any of your work. Seemingly for psychological comfort quirks rather than earnestly seeking results.

        Automation productivity doesn’t remove your own agency. It frees more time for you to apply your desire for control more discerningly.

        • By dexterlagan 2025-08-12 9:16 (1 reply)

          I can imagine there are plenty of use cases, but I could not find one for myself. Can you give an example?

          • By wahnfrieden 2025-08-12 15:11

            I personally use it for a lot of SwiftUI work. I parallelize it across at least 3 projects at once. I use only the largest models on highest thinking modes. I give instruction on implementation and provide reference implementations.

            I also use it for adding features/feature polish that address user pain points but that I can't prioritize for my own manual work just yet. There are a variety of user requests which sometimes LLMs are able to get done very suddenly when I give it a shot quickly. For these tasks, it's ok to abandon & defer them if the LLM spins its wheels.

      • By klipklop 2025-08-11 17:27

        >Agents are a boon for extraverts and neurotypical people.

        I completely disagree. Juggling several agents (and hopping from feature-to-feature) at once, is perfect for somebody with ADHD. Being an agent wrangler is great for introverts instead of having to talk to actual people.

      • By cpldcpu 2025-08-11 16:57

        I think you misunderstand what this does. It is not only a coding agent. It is an abstraction layer between you and the computer.

      • By starfallg 2025-08-11 15:46

        Pretty sure we can make LLM agents to transform declarative inputs to agentic action.

      • By garciasn 2025-08-11 16:21

        Agents are a boon for introverts who fucking hate dealing with other people (read: me). I can iterate rapidly with another 'entity' in a technical fashion and not have to spend hours explaining in relatable language what to do next.

        I feel as if you need to work with these things more, as you would prefer to work, and see just how good they are.

      • By bastawhiz 2025-08-11 16:47 (1 reply)

        > Agents are a boon for extraverts and neurotypical people

        As a neurodivergent introvert, please don't speak for the rest of us.

        • By antonvs 2025-08-11 20:26 (1 reply)

          That stuck out to me as well. People will make up all sorts of stories to justify their resistance to change.

          • By bastawhiz 2025-08-11 20:59

            It's the same as saying that writing good commit messages is a boon for extroverts and neurotypicals. It's a computer. You're giving it instructions, and the only difference to traditional coding is that the input is English text.

    • By joks 2025-08-12 23:47

      > It's just fun to mess with these tools

      I think this is the main sentiment I can't wrap my head around. Using Claude Code or Cursor has been entirely a mind-numbingly tedious experience to me (even when it's been useful.) It's often faster, but 80% of the time is spent just sitting there waiting for it to finish working, and I'm not proud of the result because I didn't do anything except come up with the idea and figure out how to describe it well. It just ends up feeling like the coding equivalent of...like...copying down answers to cheat on a test. Not in the sense that it feels gross and wrong and immoral, but in the sense that it's unsatisfying and unfulfilling and I don't feel any pride in the work I've done.

      For things where I just want something that does something I need as quickly as possible, sure, I wasn't going to care either way, but personal projects are where I find myself least wanting to vibe code anything. It feels like hiring someone else to do my hobbies for me.

    • By pyrale 2025-08-11 14:40 (8 replies)

      On one hand, I agree with you that there is some fun in experimenting with silly stuff. On the other hand...

      > Claude was trying to promote the startup on Hackernews without my sign off. [...] Then I posted its stuff to Hacker News and Reddit.

      ...I have the feeling that this kind of fun experiment is just setting up an automated firehose of shit to spray places where fellow humans congregate. And I have the feeling that it stopped being fun a while ago for the fellow humans being sprayed.

      • By the__alchemist 2025-08-11 14:50 (3 replies)

        This is an excellent point that will immediately go off-topic for this thread. We are, I believe, committed to a mire of computer-generated content enveloping the internet. I believe we will go through a period where internet communications (like HN, Reddit, and pages indexed by search engines) are unviable. Life will go on; we will just be offline more. Then the defense systems will be up to snuff, and we will find a stable balance.

        • By mettamage 2025-08-11 15:03 (1 reply)

          I hope you're right. I don't think you will be, AI will be too good at impersonating humans.

          • By lukan 2025-08-11 15:16

            "we will just be offline more"

            I think it will be quite some time before AI can impersonate humans in real life. Neither the hardware nor the software is there; maybe something could fool humans at first glance, but nothing that would be convincing in a real interaction.

        • By theshrike79 2025-08-11 15:36 (2 replies)

          My theory (and hope) is the rise of a web of trust system.

          Implemented so that if a person in your web vouches for a specific url (“this is made by a human”) you can see it in your browser.

          • By Analemma_ 2025-08-11 16:54 (1 reply)

            If your solution to this problem is the web of trust, to be blunt, you don't have a solution. I am a techie whose social circle is mostly other techies, and I know precisely zero people who have ever used PGP keys or any other WoT-based system, despite 30 years of evangelism. It's just not a thing anybody wants.

            • By theshrike79 2025-08-12 6:18

              It's 99.99% a UI issue.

              If Google hadn't let perfect be the enemy of good and had added PGP support to Gmail early on (even just the shittiest signatures, automatically applied and verified), the world would be a completely different place. Scams just wouldn't exist at this scale if signing mails with a known key were the standard.

              The tech is there, now we have Matrix and XMPP and PubSub and god knows how many protocols to share keys. Even Keybase.io still kind of exists.

              What is lacking is a browser ecosystem for people to use their known identities to vouch for a specific url (with smart hashing so that changing the content would invalidate the trust).

              We have the technology. Someone(tm) "just" needs to build it :)
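The "vouch for a specific url with smart hashing" idea above can be sketched in a few lines. This is a toy illustration, not a real web-of-trust design: an HMAC stands in for a proper asymmetric signature (a real system would use PGP-style keys), and every name here is made up.

```python
import hashlib
import hmac

def content_hash(page_bytes: bytes) -> str:
    # Hash of the exact page content; any edit changes this value.
    return hashlib.sha256(page_bytes).hexdigest()

def vouch(signing_key: bytes, url: str, page_bytes: bytes) -> str:
    # Bind the vouch to both the URL and the content hash, so
    # changing the page invalidates the trust, as described above.
    msg = f"{url}|{content_hash(page_bytes)}".encode()
    return hmac.new(signing_key, msg, hashlib.sha256).hexdigest()

def verify(signing_key: bytes, url: str, page_bytes: bytes, tag: str) -> bool:
    return hmac.compare_digest(vouch(signing_key, url, page_bytes), tag)

key = b"alice-private-key"  # hypothetical vouching identity
page = b"<html>written by a human</html>"
tag = vouch(key, "https://example.com/post", page)

assert verify(key, "https://example.com/post", page, tag)
# Any edit to the content invalidates the vouch:
assert not verify(key, "https://example.com/post", page + b"!", tag)
```

A browser extension could then surface "vouched by someone in your web" next to links, which is the missing ecosystem piece the comment points at.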

          • By gowld 2025-08-11 15:52 (1 reply)

            "Web of Trust" has been the proposed answer for, what, 30 years now? But everyone is too lazy to implement and abide by it.

            • By vineyardmike 2025-08-11 17:24

              Don’t worry, it’s coming for real this time. The governments have been proposing a requirement that web companies connect accounts to government IDs.

              If that isn’t exciting enough, Sam Altman (yea the one who popularized this LLM slop) will gladly sell you his WorldCoin to store your biometric data on the blockchain!

        • By johnecheck 2025-08-11 15:03

          Indeed. I worry though. We need those defense systems ASAP. The misinformation and garbage engulfing the internet does real damage. We can't just tune it out and wait for it to get better.

      • By epiccoleman 2025-08-11 15:12 (1 reply)

        I definitely understand the concern - I don't think I'd have hung out on HN for so long if LLM generated postings were common. I definitely recognize this is something you don't want to see happening at scale.

        But I still can't help but grin at the thought that the bot knows that the thing to do when you've got a startup is to go put it on HN. It's almost... cute? If you give AI a VPS, of course it will eventually want to post its work on HN.

        It's like when you catch your kid listening to Pink Floyd or something, and you have that little moment of triumph - "yes, he's learned something from me!"

      • By sixhobbits 2025-08-11 15:21 (2 replies)

        (author here) I did feel kinda bad about it, as I've always been a 'good' HNer until that point, but honestly it didn't feel that spammy to me compared to some of the human-generated slop I see posted here, and as expected it wasn't high quality enough to get any attention, so 99% of people would never have seen it.

        I think the processes etc. that HN has in place to deal with human-generated slop are more than adequate to deal with an influx of AI-generated slop, and if something gets through then maybe it means it was good enough and it doesn't matter?

        • By felixgallo 2025-08-11 15:25 (1 reply)

          That kind of attitude is exactly why we're all about to get overwhelmed by the worst slop any of us could ever have imagined.

          The bar is not 'oh well, it's not as bad as some, and I think maybe it's fine.'

          • By taude 2025-08-11 15:47

            well, he was arguing that it's not worse than 99% of the human slop that gets posted, so where do you draw the line?

            • Well crafted, human only?
            • Well crafted, whether human or AI?
            • Poorly crafted, human
            • Well crafted, AI only
            • Poorly crafted, AI only
            • Just junk?

            etc.

            I think people will intuitively get a feel for when content is only AI generated. If people spend time writing a prompt so the output isn't so wordy, has personality, and reads OK, then fine.

            Also, a big opportunity is going to be out there for detecting AI-generated content, whether in forums, arriving in email inboxes, on your corp file share, etc...

        • By AtlasBarfed 2025-08-11 16:37

          Did you?

          Spoiler: no he didn't.

          But the article is interesting...

          It really highlights to me the pickle we are in with AI: because we are at a historical maximum already of "worse is better" with Javascript, and the last two decades have put out a LOT of javascript, AI will work best with....

          Javascript.

          Now MAYBE better AI models will be able to equivalently translate Javascript to "better" languages, and MAYBE AI coding will migrate "good" libraries in obscure languages to other "better" languages...

          But I don't think so. It's going to be soooo much Javascript slop for the next ten years.

          I HOPE that large language models, being language models, will figure out language translation/equivalency and enable porting and movement of good concepts between programming models... but that is clearly not what is being invested in.

          What's being invested in is slop generation, because the prototype sells the product.

      • By DrSiemer 2025-08-11 14:57 (2 replies)

        I'm not a fan of this option, but it seems to me the only way forward for online interaction is very strong identification on any place where you can post anything.

        • By postexitus 2025-08-11 15:37 (3 replies)

          Back in FidoNet days, some BBSs required identification papers for registering and only allowed real names to be used. Though not known for their level headed discussions, it definitely added a certain level of care in online interactions. I remember the shock seeing the anonymity Internet provided later, both positive and negative. I wouldn't be surprised if we revert to some central authentication mechanism which has some basic level of checks combined with some anonymity guarantees. For example, a government owned ID service, which creates a new user ID per website, so the website doesn't know you, but once they blacklist that one-off ID, you cannot get a new one.
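The per-website ID scheme described here (the site can blacklist you, but can't identify you or correlate you across sites) can be sketched with a keyed hash. This is purely illustrative; a real government ID service would involve far more than a single secret, and every name below is hypothetical.

```python
import hashlib
import hmac

def site_id(service_secret: bytes, citizen_id: str, site: str) -> str:
    # Derive a stable pseudonym per (citizen, site). Without the
    # service's secret, sites cannot link IDs back to citizens or
    # to each other, but a banned ID stays banned forever.
    msg = f"{citizen_id}|{site}".encode()
    return hmac.new(service_secret, msg, hashlib.sha256).hexdigest()[:16]

secret = b"id-service-master-secret"  # held only by the ID service

a_hn = site_id(secret, "citizen-42", "news.ycombinator.com")
a_rd = site_id(secret, "citizen-42", "reddit.com")
b_hn = site_id(secret, "citizen-43", "news.ycombinator.com")

assert a_hn != a_rd  # one citizen, uncorrelatable IDs across sites
assert a_hn != b_hn  # distinct citizens get distinct per-site IDs
assert a_hn == site_id(secret, "citizen-42", "news.ycombinator.com")  # stable, so a ban sticks
```

The stability is the point: the site never learns who you are, but once it blacklists your derived ID, the service will keep handing you the same one.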

          • By sleepybrett 2025-08-11 18:44 (1 reply)

            Smaller communities too.

            I grew up in... slightly rural America in the 80s-90s. We had probably a couple dozen local BBSes, and the community was small enough that after a bit I just knew who everyone was, or could find out very easily.

            When the internet came along in the early 90s and I started mudding and hanging out in newsgroups, I liked them small, where I could get to know most of the userbase, or at least most of the posting userbase. Then mega 'somewhat-anonymous' communities (i.e. posts tied to a username, not 4chan madness) like slashdot and huge forums started popping up, and now we have even bigger ones like twitter and reddit. We lost something: you can now throw bombs without consequence.

            I now spend most of my online time in a custom built forum with ~200 people in it that we started building in an invite only way. It's 'internally public' information who invited who. It's much easier to have a civil conversation there, though we still do get the occasional flame-out. Having a stable identity even if it's not tied to a government name is valuable for a thriving and healthy community.

            • By DrSiemer 2025-08-12 9:26

              Sounds good!

              A German forum I'm on allows members limited invites based on participation. The catch is, you are responsible for the people you invite. If they get in trouble, you will share a part of the punishment.

          • By benterix 2025-08-11 15:50 (1 reply)

            Honestly, having seen how it can be used against you, retroactively, I would never ever engage in a discussion under my real name.

            (The fact that someone could correlate posts[0] based on writing style, as previously demonstrated on HN and used to doxx some people, makes things even more convoluted - you should think twice what you write and where.)

            [0] https://news.ycombinator.com/item?id=33755016

            • By postexitus 2025-08-12 15:52

              This is a subset of the "I don't have anything to hide" argument. If we use our real names, I think we'll have more responsibility about what we say. Of course, that's assuming our seemingly democratic governments don't turn authoritarian all of a sudden; as a Turkish citizen, I know that's not a given.

          • By andoando 2025-08-11 15:42

            id.me?

            Not government owned, but even irs.gov uses it

        • By xnorswap 2025-08-11 15:05 (4 replies)

          That can be automated away too.

          People will be more than willing to say, "Claude, impersonate me and act on my behalf".

          • By withinboredom 2025-08-11 19:11

            I do this every time I find myself typing something I could get written up over or even fired for.

            1. I'm usually too emotional to write out why I feel that way instead of saying what I feel.

            2. I really don't like the person (or their idea) but I don't want to get fired over it.

            Claude is really great at this: "Other person said X, I think it is stupid and they're a moron for suggesting this. Explain to them why this is a terrible idea or tell me I'm being an idiot."

            Sometimes it tells me I'm being an idiot, sometimes it gives me nearly copy-pasta text that I can use and agree with.

          • By pyrale 2025-08-11 15:31 (1 reply)

            > People will be more than willing to say, "Claude, impersonate me and act on my behalf".

            I'm now imagining a future where actual people's identities are blacklisted just like some IP addresses are dead to email, and a market develops for people to sell their identity to spammers.

            • By simonw 2025-08-11 15:49

              That's always been the biggest flaw in the Worldcoin idea in my opinion: if you have a billion+ humans get their eyeball scanned in exchange for some kind of cryptographic identity, you can guarantee that a VERY sizable portion of those billion people will happily sell that cryptographic identity (which they don't understand the value of) to anyone who offers them some money.

              As far as I can tell the owner of the original iris can later invalidate an ID that they've sold, but if you buy an ID from someone who isn't strongly technically literate you can probably extract a bunch of value from it anyway.

          • By zoeysmithe 2025-08-11 15:38 (1 reply)

            I mean, that's fine I guess, as long as it's respectable and respects the forum.

            "Claude, write a summary of the word doc I wrote about x and post it as a reply comment" is fine. I don't see why it wouldn't be. It's a good-faith effort to post.

            "Claude, post every 10 seconds to reddit to spam people into believing my politics is correct" isn't, but that's not this case. It's not a good-faith effort.

            The moderation rules for 'human slop' will apply to AI too. Try spamming a well-moderated subreddit and see how far you get, human or AI.

            • By antonvs 2025-08-11 22:53

              The problem is speed and quantity. Humans weren't able to fight off the original email spam, it took automated systems. Forums will have to institute much stronger rate limiting and other such measures.

          • By gowld 2025-08-11 15:54

            That's fine, because once someone is banned, the impersonations are also banned.

      • By bookofjoe 2025-08-11 14:58

        See also: https://news.ycombinator.com/item?id=44860174 (posted 12 hours ago)

      • By zoeysmithe 2025-08-11 15:36

        I mean, I can spam HN right now with a script.

        Forums like HN, reddit, etc. will need to do a better job detecting this stuff, moderator staffing will need to be upped, AI-resistant captchas need to be developed, etc.

        Spam will always be here in some form, and it's always an arms race. That doesn't really change anything. It's always been this way.

      • By kbar13 2025-08-11 14:53

        it's annoying but it'll be corrected by proper moderation on these forums

        as an aside i've made it clear that just posting AI-written emoji slop PR review descriptions and letting claude code directly commit without self reviewing is unacceptable at work

      • By bongodongobob 2025-08-11 15:42 (1 reply)

        The Internet is already 99% shit and always has been. This doesn't change anything.

        • By zanellato19 2025-08-11 15:49

          It's gotten much worse. Before it was shit from people, now it's corporate shit. Corporate shit is so much worse.

    • By Lerc 2025-08-11 18:28

      This is the kind of thing people should be doing with AI: weird and interesting stuff with a "Let's find out!" attitude.

      Often there's as much to be learned from why it doesn't work.

      I see the AI hype as limited to a few domains:

      People choosing to spend lots of money on things speculatively hoping to get a slice of whatever is cooking, even if they don't really know if it's a pie or not.

      Forward looking imagining of what would change if these things get massively better.

      Hyperbolic media coverage of the above two.

      There are companies talking about adding AI for no other reason than they feel like that's what they should be doing. I think that counts as a weak driver of hype, but only because, cumulatively, lots of companies are doing it. If anything I would consider this an outcome of hype.

      Of these, the only one that really affects me is AI being shoehorned into places it shouldn't be.

      The media coverage stokes fires for and against, but I think it only changes the tone of annoyance I have to endure. They would do the same on another topic in the absence of AI. It used to be crypto.

      I'm ok with people spending money that is not mine on high risk, high potential reward. It's not for me to judge how they calculate the potential risk or potential reward. It's their opinion, let them have it.

      The weird thing I find is the complaints about AI hype dominating. I have read so many pieces where the main thrust of their argument is about the dominance of fringe viewpoints that I very rarely encounter. Frequently they take the stance that anyone imagining how the world might change from any particular form of AI as a claim that that form is inevitable and usually imminent. I don't see people making those claims.

      I see people talking about what they tried, what they can do, and what they can't do. Everything they can't do is then held up by others as if it were a trophy and proof of some catastrophic weakness.

      Just try stuff, have fun, if that doesn't interest you, go do something else. Tell us about what you are doing. You don't need to tell us that you aren't doing this particular thing, and why. If you find something interesting tell us about that, maybe we will too.

    • By dizlexic 2025-08-11 20:34 (3 replies)

      every vibe coded thing I've built is trash, but it's amazingly fun to do.

      I've tried to explain it to other devs that it's like dumping out a 10000 piece jigsaw puzzle and trying to put it together again.

      it's just fun.

      • By mmcconnell1618 2025-08-12 11:39

        There was a time when everyone hand-coded HTML. Then came Macromedia Dreamweaver and Microsoft FrontPage, which promised a WYSIWYG experience. No one would ever need to "learn HTML and CSS" because the tool could write it for them. Those tools could crank out a website in minutes.

        When those tools created some awful, complex and slow output, only the people who knew HTML could understand why it wasn't working and fix things.

        Vibe coding is in a similar place. It demos really well. It can be powerful and allows for quick iteration on ideas. It works, most of the time. Vibe coding can produce some really terrible code that is not well architected and difficult to maintain. It can introduce basic logic errors that are not easily corrected through multiple prompts back to the system.

        I don't know if they will ever be capable of creating production quality systems on par with what senior engineers produce or if they will only get incrementally better and remain best for prototypes and testing ideas.

      • By cesarvarela 2025-08-12 15:20

        It is addicting

      • By throwaway31131 2025-08-11 23:44

        > it’s just fun

        For some definitions of fun… :)

    • By indigodaddy 2025-08-11 15:47 (1 reply)

      Not sure if I'd want Claude doing whatever on a production VPS/node, but I like the idea of a way to use Claude Code on the go, wherever you are. I'm going to set up KASM Workspaces on my free OCI server and see how it works there.

      https://hub.docker.com/r/linuxserver/kasm

      • By prashantsengar 2025-08-11 17:16 (1 reply)

        Thanks for sharing this! I have been trying on and off to run RooCode on a VPS to use it on the go. I tried Code Server, but it does not share "sessions". KASM seems interesting for this. Do share if you write a blog post on setting it up.

    • By j45 2025-08-12 1:18

      Regularly scheduled play with what's changed/new/different is mandatory with the tools one already uses, let alone any new ones.

  • By cultofmetatron 2025-08-11 18:14 (9 replies)

    All this AI coding stuff is scaring the shit out of me. A few months ago my team was hiring for a new engineer. Of the 9 candidates we ran technical interviews with, only two could work without the AI. The rest literally just vibe coded their way through the app. As soon as it was taken away, they couldn't even write a basic SQL query in Ecto (we're a Phoenix app). When questioned about tradeoffs inherent in the AI-generated implementation, all but one were completely in the dark.

    • By runako 2025-08-11 18:59 (11 replies)

      > couldn't even write a basic sql query

      Not the point at all, but I have found it quite common among younger professional engineers to not know SQL at all. A combination of specialization (e.g. only work on microservices that do not directly touch a database) and NoSQL has made the skill of SQL more obscure than I would have thought possible as recently as 5 years ago.

      • By abustamam 2025-08-12 4:23 (7 replies)

        I've been a full stack engineer for 10 years and I know SQL syntax, but a few years ago I was asked at an interview to "make a relation between users and posts" and I went "rails generate user" or something, and he's like, "not that," so I was like "OK, I'll add it to a prisma file" and he's like "not that, write the SQL." I didn't know what to do, because this had never happened before.

        Needless to say, I did not get the job, but several years later I still don't know how to answer his question.

        I've worked with NoSQL (Mongo/Mongoose, Firebase) and I've worked with ORMs (Prisma, Drizzle, Hasura), and I've been able to implement any feature asked of me, across several companies and projects. Maybe there's a subset of people who really do need to know this for some really low-level stuff, but I feel like your average startup would not.

        I think maybe it's similar to "can you reverse a linked list" question in that maybe you won't need the answer to that particular question on the job, but knowing the answer will help you solve adjacent problems. But even so, I don't think it's a good qualifier for good vs bad coders.
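For what it's worth, the interviewer's "relation between users and posts" can be answered with a foreign key in plain SQL. A minimal sketch using SQLite (the table and column names are illustrative, not from the original interview):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs FK enforcement opted in

# The relation: each post references the user who wrote it.
conn.executescript("""
CREATE TABLE users (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE posts (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    title   TEXT NOT NULL
);
""")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'ada')")
conn.execute("INSERT INTO posts (user_id, title) VALUES (1, 'hello')")

# Join across the relation to read posts with their authors:
rows = conn.execute("""
    SELECT users.name, posts.title
    FROM posts JOIN users ON posts.user_id = users.id
""").fetchall()
assert rows == [("ada", "hello")]
```

This is roughly what `rails generate` or a Prisma schema emits under the hood, which is the point the grumpy replies below are making.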

        • By jon-wood 2025-08-12 10:51 (5 replies)

          Maybe this makes me a grumpy old man, but I feel like if your primary role is to write software that interacts with a SQL database, you should understand how to interact directly with that database. Not because you're going to do it frequently, but because understanding the thing your ORM is abstracting away allows you to use those abstractions more intelligently, without making whoever runs the underlying database cry.

          • By switchbak 2025-08-12 16:21 (1 reply)

            One of the things that taught me the most about SQL was trying to use Hibernate (there aren't enough expletives in the English language to describe that thing).

            It's one thing to have a query language (DDL and DML no less) that was built for a different use case than how it's used today (eg: it's not really composable). But then you stack a completely different layer on top that tries to abstract across many relational DBs - and it multiplies the cognitive surface area significantly. It makes you become an expert at Hibernate (JPA), then learn a lot about SQL, then learn even more about how it maps into a particular dialect of SQL.

            After a while you realize that the damn ORM isn't really buying you very much, and that you're often just better off writing that non-composable boring SQL by hand.

            This assumes you have a decent testing infrastructure in place. Much of the supposed benefit of ORMs is about a form of pseudo-type safety, and making it easier to add more fields. If you have fast running tests that exercise the SQL layer, you might find those benefits aren't very compelling since you have such rapid feedback for your plain SQL anyway.

            I've almost never changed the vendor of DB in a project, so that's another supposed benefit that doesn't buy me much. I have often wanted to use vendor-specific functionality however, and often find an ORM gets in the way of that.

            To sum it up - I agree completely. If it's your job to wrangle an SQL DB - you ought to learn some SQL.

            • By abustamam 2025-08-12 21:28

              Can you explain this part to me?

              > This assumes you have a decent testing infrastructure in place. Much of the supposed benefit of ORMs is about a form of pseudo-type safety, and making it easier to add more fields. If you have fast running tests that exercise the SQL layer, you might find those benefits aren't very compelling since you have such rapid feedback for your plain SQL anyway.

              "decent testing infrastructure" is kinda doing a lot of heavy lifting — I love TDD but none of the startups I've worked at agreed with my love of TDD. There are tests, but I suspect they wouldn't fall under your label of decent testing infrastructure.

              But let's say we do have a decent testing infrastructure — how does this solve the type safety benefit that you mentioned?

          • By hobofan 2025-08-12 14:41

            > because understanding the thing your ORM is abstracting away for you allows you to more intelligently use those abstractions

            If that's required, then you are working with a bad abstraction. (Which in the case of ORMs you'll probably find many people arguing that they are often bad abstractions.)

          • By abustamam 2025-08-12 13:44 (1 reply)

            I agree in a sense but I'm primarily a front end dev. Our backend devs (whose SQL skills are probably on par with mine) maintain the DB tables and such using our ORM but I would still need to use the ORM to query the DB. I'll hop in and make a DB update on the backend if I need to, but my point is that many front end devs are probably just using the ORM to query and run mutations. Such queries could certainly be optimized a bit by knowing a bit of SQL (adding indices to commonly queried cols for example) but I'd still consider solid knowledge in SQL to be a solely backend thing.

            That said, I do know enough basic SQL to understand what ORMs are doing at a high level, but because I almost never write SQL I wouldn't consider myself proficient in it.

            • By cutemonster 2025-08-12 16:20 (1 reply)

              It would have been nice if the interviewer had told you in advance that they were going to ask you to write SQL, so you could have prepared, or saved some time by cancelling. Why don't more companies do that?

              • By abustamam 2025-08-12 21:24

                The screening call had some sql questions which I was able to answer. Basic sql. I feel like I could have saved a day of interviewing if this simple fkey question was in the screening.

                Companies should do that more!

          • By Fervicus 2025-08-12 16:18 (2 replies)

            > if you're primary role is to write software which interacts with a SQL database you should understand how to interact directly with that database.

            I agree that there should be a general understanding, and that one should be able to interact with it when needed. But at the same time I don't think devs need to be able to spit out queries with the right syntax on the spot in an interview setting.

            • By jon-wood 2025-08-13 13:15

              Unless I'm doing an exercise where the candidate is actually writing software (in which case they can have Google, their favourite IDE, and if they must an LLM) I never expect exactly correct syntax in an interview. I'm assessing whether they demonstrate the sort of thought processes you'd expect from someone who knows what they're talking about, and can get in the same ballpark.

            • By cultofmetatron 2025-08-12 19:20

              in our particular case, they could use google. that said, I didn't ask them anything particularly complex. the AI-dependent coders (I saw nothing in their abilities that demonstrated engineering) broke down trying to update a single record of known id with a new value.

          • By Rapzid 2025-08-12 18:30

            Not everyone wants to make that much money.

        • By myaccountonhn 2025-08-12 10:38 · 1 reply

          I guess this is my first old-senior moment, but even if you use an ORM then you should know basic SQL. That table structure will be the most long-living thing in your system and probably outlive the ORMs and a bad table structure is going to create a mess later on.

          • By abustamam 2025-08-12 13:31

            I agree. I know _basic_ SQL but because I don't work with it every day I don't consider myself proficient. I read the migration files generated by my ORMs just out of curiosity and to make sure it's not doing anything crazy, but I wouldn't be able to write the migration file myself.

        • By vbezhenar 2025-08-12 9:33 · 1 reply

          All the projects I worked with that used an ORM were a burning pile of shit, and the ORM was a big part of this. I hate ORMs and would prefer SQL any day. In my projects I almost never choose an ORM.

          I trust that some people can deal with ORM, but I know that I can't and I didn't see anyone who can do it properly.

          So, I guess, there are some radical views on this issue. I wouldn't want to work with a person who prefers to use an ORM and avoids knowing SQL, and they probably hold a similar opinion of me.

          It is really weird to me that someone would call SQL low level. SQL is the highest-level language available in the industry, definitely a level above ordinary programming languages.

          • By abustamam 2025-08-12 13:48 · 2 replies

            Interesting. The reason I like ORMs is because of type generation in TypeScript. I've never worked in a company that didn't use an ORM which is likely why I prefer it. But if I ever did work in a company that used raw SQL I'd probably just suck it up and learn better SQL. Maybe then I'd be able to make a more informed opinion.

            With regards to SQL being low level, I primarily work with TypeScript so a language that talks directly with the DB (SQL) seems pretty low level compared to TS. I'm not sure what you mean by an ordinary programming language though (obviously not machine code).

            • By withinboredom 2025-08-13 8:25 · 1 reply

              I spent 5 years working at a place without an ORM. Due to sharding and scalability issues, an ORM wasn't possible (the tech was nearly 20 years old when I left, so they didn't get some of the later database scaling tech). When I went to a company with an ORM, I had problems.

              Namely, the ORM got in my way so much. I knew exactly which query to run and how to word it efficiently, but getting the ORM to generate sane SQL was nearly impossible. I eventually had to accept my fate of generating shitty SQL at every company since then...

              That being said, I'll always advocate for ditching an ORM if given the chance and the expertise is available. If people know why you generally wouldn't want to put an index on a boolean column, we're probably good. If people think it will help performance on a randomly set boolean field, we should probably stick with an ORM.

              • By abustamam 2025-08-13 14:08 · 1 reply

                Most ORMs I've worked with have a special method that lets you write raw SQL. Probably not the most ergonomic way, but it is an escape hatch. Which ORM were you using?

                • By withinboredom 2025-08-13 17:57 · 1 reply

                  Most teams I've been on will automatically reject a PR writing raw sql due to maintainability concerns. I would never consider it in a professional context unless it can be done in a way that guarantees it is easy to maintain without using concatenation.

                  Most ORMs don't have the SQL tools we did to sanitize variables when putting them into queries. Some do, but not all.

            • By vbezhenar 2025-08-12 15:03 · 1 reply

              Java, Python, JavaScript, TypeScript are more or less the same, and a level below SQL (when it comes to querying structured data).

              SQL is a declarative query language. You describe the query, and the database engine automatically builds a plan to execute it. This plan automatically uses statistics, indices and so on. You don't generally specify that this query must use this index, then iterate over this table, then sort it, sort another table, merge them; the database engine does it for you.

              Imagine that you have a few arrays of records in JavaScript and you need to aggregate them and sort them in an efficient way. You'll have to write your logic in an imperative way. You'll have to write procedures to maintain indices, if necessary. SQL does it better.

              It is an interesting exercise to imagine programming in a language with a built-in RDBMS (or object database system) for local or global variables. For example, React Redux uses structures which are somewhat similar to a database. I don't really know if it would be useful or not to write SQL instead of a functional API (and get performant execution, not just a dumb "table scan") but I'd like to try. C# has a similar feature (LINQ), but it's just an API, no real engine behind it.
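The imperative-vs-declarative contrast described in this comment can be sketched concretely. A hypothetical Python example using the stdlib sqlite3 module (table and values invented for illustration, not from the thread):

```python
import sqlite3

rows = [("alice", 10), ("bob", 5), ("alice", 7), ("bob", 1)]

# Imperative: you spell out the loop and the data structure yourself.
totals = {}
for user, amount in rows:
    totals[user] = totals.get(user, 0) + amount

# Declarative: describe the result; the engine plans the execution
# (statistics, indices, sort/merge strategy) for you.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user TEXT, amount INT)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
sql_totals = dict(conn.execute(
    "SELECT user, SUM(amount) FROM payments GROUP BY user ORDER BY user"
))

print(totals)      # {'alice': 17, 'bob': 6}
print(sql_totals)  # {'alice': 17, 'bob': 6}
```

Both produce the same answer; the difference is who owns the execution strategy.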

        • By Zizizizz 2025-08-12 5:56 · 2 replies

          I would assume he means creating a foreign key relationship from the posts and users table. Doesn't rails or prisma have a SQL migration tool? (Upon looking it looks like it's Active Record)

          So the equivalent of

          `rails db:migrate` after doing what you suggested in the interview. You could write in SQL as..

          ```
          ALTER TABLE posts
              ADD COLUMN user_id INT,
              ADD CONSTRAINT fk_user FOREIGN KEY (user_id) REFERENCES users(id);
          ```

          I don't know if that's what he was after but that's what my mind jumped to immediately. I'd recommend learning a bit, as sometimes I've found that ORMs can be a lot slower than writing plain SQL for some more complex data fetching.
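For anyone who wants to poke at the constraint itself, here's a small runnable sketch using Python's stdlib sqlite3 (names invented; note SQLite can't ADD CONSTRAINT after the fact, so the foreign key is declared at CREATE time):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when this is on
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        title TEXT,
        user_id INT REFERENCES users(id)
    )
""")
conn.execute("INSERT INTO users (name) VALUES ('alice')")        # gets id 1
conn.execute("INSERT INTO posts (title, user_id) VALUES ('hello', 1)")  # valid parent

rejected = False
try:
    # No user with id 99: the foreign key should reject this row
    conn.execute("INSERT INTO posts (title, user_id) VALUES ('orphan', 99)")
except sqlite3.IntegrityError:
    rejected = True
print("orphan row rejected:", rejected)
```

This is exactly the behavior the interview question was probing: a user_id column alone isn't a relationship until the constraint enforces it.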

          • By jama211 2025-08-12 8:50 · 1 reply

            I’ve written manual SQL for years in previous roles, but because I haven’t touched it in 6 months I’d have had to double-check how to write that with a quick google. It’s just bad interview technique to require rote-learned syntax.

            • By abustamam 2025-08-12 13:53 · 1 reply

              I agree but lots of companies do similar things in their interview processes so we just have to know everything I guess.

              • By jama211 2025-08-12 19:31 · 1 reply

                What’s more likely is you get lucky one day and the company that hires you is the one that happens to have asked you the questions you brushed up on recently, rather than the company that’s the best fit.

                Or if you have the luxury to choose, which can happen later in your software engineering career, you can simply turn down companies with bad interview processes. Personally I’m a fan of this method, but it’s a luxury for sure.

                • By abustamam 2025-08-12 21:56 · 1 reply

                  I've been fortunate in that I haven't had to do a formal interview for any job I've ever had, except for my first job. I've just been working either with people with whom I've worked, or referrals of those people. I know that's an extremely privileged place to be, but it's the best place to be IMO.

                  • By jama211 2025-08-14 19:56

                    100%. Your experience and references should be able to speak to your abilities better than some challenge would anyways.

          • By abustamam 2025-08-12 13:52

            Thanks! I think I was (conceptually) missing the constraint/references part. Prior to that I had only worked with firebase and Mongo so I was just like "OK so I just put userID column on post table right?" and apparently no, not right, lol.

            What's nice about prisma and hasura is that you can actually read the sql migration files generated, and you can set the logging to a level where you can read the sql being run when performing a query or mutation. I found that helpful to understand how sql is written, but since I'm not actually writing it I can't claim proficiency. But I can understand it.
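The same trick of watching the SQL that actually runs works outside of Prisma/Hasura too. A minimal sketch with Python's stdlib sqlite3, whose set_trace_callback hands you every executed statement (made-up table, purely illustrative):

```python
import sqlite3

executed = []
conn = sqlite3.connect(":memory:")
conn.set_trace_callback(executed.append)  # called with each SQL statement run
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
for stmt in executed:
    print(stmt)
```

Whatever layer sits on top, a logging hook like this is how you see the SQL it really emits.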

        • By sjapkee 2025-08-12 11:41 · 1 reply

          Wait, people still unironically use ORMs instead of writing queries directly? Not surprising then that everything works like shit

          • By abustamam 2025-08-12 13:55

            Every startup I've worked at has at least done their first few MVPs using an ORM. I imagine because it's just a quick way to bootstrap a project. Since performance isn't really an issue before scale, sql vs ORM is just a matter of dev velocity, but I guess at scale most companies just never repaid the tech debt.

        • By gabrieledarrigo 2025-08-12 10:22 · 1 reply

          One thing is reversing a linked list during a whiteboard interview. Another is writing a simple JOIN between two tables.

          Come on guys, working on backend applications and not having a clue about writing simple SQL statements, even for extracting some data from a database feels...awkward

          • By abustamam 2025-08-12 14:07

            With NoSQL becoming more ubiquitous (for better or worse), it's not unfathomable that someone simply never had an opportunity to do something as simple as writing a join between two tables. Someone replied to my comment and taught me how in 5 lines of code. I read it and I'm like, oh, that makes sense. Cool. I won't remember it exactly but I understand it. I wouldn't hold it against a front-end developer who's only ever worked with Vue for not understanding what happens when a React node rerenders.

            My point is that there are acceptable levels of abstraction in all parts of software. Some companies will have different tolerances for understanding of that abstraction. Maybe they want a front-end dev to understand the CSS generated from tailwind. Or maybe they want them to know exactly what happens when a React node is rerendered. Or maybe the company doesn't care as long as the person is demonstrably productive and efficient at building stuff. What some consider basic knowledge can be considered irrelevant to others. Whether or not that has lasting consequences is to be seen, but that just brings us full circle back to the original problem at hand (is it good that people can vibe code something and not understand the code it generates?)

        • By Rapzid 2025-08-12 18:32

          It would be a crap shoot if I nailed the syntax for creating the tables and the foreign key constraint.

          This might be something I'd ask about in an interview, but I'd be looking for general knowledge about the columns, join, and key constraint. Wouldn't expect anyone to write it out; that's the boring part.

      • By ASinclair 2025-08-12 0:44 · 1 reply

        I'm nearly guilty of this. I've been in industry for a bit over 10 years and I can barely write SQL. That's despite writing a bunch of queries by hand in my undergrad databases course. I almost never deal with databases myself outside of some ad-hoc queries.

        • By phito 2025-08-12 8:35 · 1 reply

          Same here, mostly because I really do not like writing queries. Something about the syntax rubs me the wrong way, especially if I have to switch between MySQL/Postgres/MSSQL regularly. I'll use an ORM whenever I can, if performance doesn't matter.

          • By mattmanser 2025-08-12 12:06 · 2 replies

            It's because it's logically in the wrong order, it should be:

               FROM Users 
               WHERE Type = 'Foo'
               SELECT id, name
            
            They use the right order in a lot of ORMs and as I was a SQL expert (but not master), I found it so jarring at first.

            You probably have the reverse problem, it doesn't fit your mental model which is in fact the right logical model.

            It gets even worse when you add LIMIT/TOP or GROUP BY. SQL is great in a lot of ways, but logically not very consistent. And UPDATE, now I think about it; in SQL Server you get this bizarreness:

                UPDATE u
                SET u.Type = 'Bar'
                FROM Users u
                JOIN Company c on u.companyId = c.id
                WHERE c.name = 'Baz'
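That UPDATE ... FROM shape isn't SQL Server-only; SQLite has supported it since 3.33, so it can be tried in a few lines of Python. A sketch with made-up tables mirroring the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE users (id INTEGER PRIMARY KEY, type TEXT, company_id INT);
    INSERT INTO company VALUES (1, 'Baz'), (2, 'Other');
    INSERT INTO users VALUES (1, 'Foo', 1), (2, 'Foo', 2);
""")
# Same UPDATE ... FROM pattern: join users to company, update only matches
conn.execute("""
    UPDATE users SET type = 'Bar'
    FROM company c
    WHERE users.company_id = c.id AND c.name = 'Baz'
""")
result = conn.execute("SELECT id, type FROM users ORDER BY id").fetchall()
print(result)  # [(1, 'Bar'), (2, 'Foo')]
```

Only the user at the 'Baz' company is updated; the join does the filtering.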

            • By skydhash 2025-08-12 12:39

              That's because in the relational model, you deal mostly in terms of projections. There's an action, and the rest of the statement creates the projection of the data the action will apply to. The action always applies to the whole of the projection (IIRC).

              The semantics of SQL and a standard programming language are quite different, as they are based on different computing/data models.

            • By cultofmetatron 2025-08-12 15:54 · 1 reply

              you would LOVE ecto. it's an elixir dsl for writing sql and it fixes all the issues I have with sql, including what you just mentioned here

              • By mattmanser 2025-08-12 16:15 · 1 reply

                I actually still love writing SQL to be honest,

                I can't have explained myself well, I find the SQL way "normal" even though it's logically/semantically a bit silly.

                Because that's how I learnt.

                My point was, if you learnt on ORMs, the SQL way must be jarring.

                • By cultofmetatron 2025-08-12 16:41 · 1 reply

                  can't stand orms and I wrote one myself a long time ago.

                  BUT

                  ecto isn't an orm. it's a sql dsl and it takes a lot of the pain out of writing your sql while making it very easy to map what you're writing to the output sql

                  so instead of

                  ```
                  select Users.id, count(Posts.id) as posts_count
                  from Users
                  left join Posts on Posts.user_id = Users.id
                  group by Users.id
                  ```

                  you can write

                  ```
                  from(u in User)
                  |> join(:left, [u], p in Post, on: u.id == p.user_id, as: :posts)
                  |> select([u, posts: p], %{id: u.id, posts_count: count(p.id)})
                  |> group_by([u], u.id)
                  ```

                  the |> you see here is a pipe operator. I've effectively decomposed the large block query into a series of function calls.

                  you can assign subqueries as separate values and join into those as well. it doesn't try to change sql. it just makes it vastly more ergonomic to write

                  • By mattmanser 2025-08-13 11:31

                    That's pretty much identical to an ORM:

                        db.Users
                          .Include(u => u.Posts)
                          .Select(u => new {
                              u.Id,
                              Count = u.Posts.Count()
                          });

      • By ElCapitanMarkla 2025-08-11 19:32 · 1 reply

        I started to notice this in a big way at my last job which I started in 2013. We were a rails shop and by about 2016 I was noticing most new hires would have no idea how to write a SQL query.

        • By cultofmetatron 2025-08-11 20:23 · 2 replies

          > most new hires would have no idea how to write a SQL query.

          probably why people think rails is slow. our integration partners and our customers are constantly amazed by how fast and efficient our system is. The secret is I know how to write a damn query. you can push a lot of logic that would otherwise be done in the api layer into a query. if done properly with the right indexes, it's going to be WAY faster than pulling the data into the api server and doing clumsy data transformations there.
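A toy illustration of that point, with a hypothetical table in Python's stdlib sqlite3: the indexed aggregate returns one row, where the app-layer version ships every row across and transforms it in the language runtime.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total INT)")
conn.executemany(
    "INSERT INTO orders (status, total) VALUES (?, ?)",
    [("paid", i) for i in range(1000)] + [("open", 1)] * 10,
)
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

# API-layer version: fetch every row, then filter and sum in the application.
rows = conn.execute("SELECT status, total FROM orders").fetchall()
app_sum = sum(total for status, total in rows if status == "paid")

# Query version: one indexed aggregate, one row over the wire.
(db_sum,) = conn.execute(
    "SELECT SUM(total) FROM orders WHERE status = 'paid'"
).fetchone()

assert app_sum == db_sum
print(db_sum)  # 499500
```

Same answer either way; the difference at real scale is rows moved and work done per request.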

          • By strtok 2025-08-12 8:50

            1000%. It’s all about limiting those round trips to the database…

          • By hollowturtle 2025-08-11 22:44 · 3 replies

            You actually confirmed that rails is slow, if the optimization lives in the database server and doing the data mangling in ruby is less efficient

            • By runako 2025-08-11 23:31 · 1 reply

              Constructively, I would suggest some areas for study:

              - relative speeds of programming languages (https://github.com/niklas-heer/speed-comparison)

              - database indexing (https://stackoverflow.com/questions/1108/how-does-database-i...)

              - numbers everyone should know (https://news.ycombinator.com/item?id=39658138)

              And note that databases are generally written in C.

              • By hollowturtle 2025-08-12 6:10 · 1 reply

                Constructively, I just wanted to say that you can't claim that something is fast if the speed is thanks to something else. OP said people think rails is slow, but if you have a fast query it's a solved problem. Even python would be fast in this instance with an optimized query

                • By cultofmetatron 2025-08-12 17:17

                  > Even python would be fast in this instance with an optimized query

                  I wasn't trying to argue that ruby isn't slow (it objectively is). I was arguing that its slowness is irrelevant for most webapps, because you should be offloading most of the load to your database with efficient queries.

            • By closeparen 2025-08-12 0:03

              Unless the database is in your process's address space (SQLite, Datomic, etc) your first problem is going to be shipping the data from the database server to the application process.

            • By richwater 2025-08-11 22:55 · 1 reply

              You've correctly identified that filtering a list is slower than looking up from an index. Congratulations.

              • By hollowturtle 2025-08-12 6:08

                Thank you, let me give you the eli5: I just wanted to say that you can't claim that something is fast if the speed is thanks to something else.

      • By nevir 2025-08-12 2:30

        I see this too, also for engineers that have only interacted with relational dbs via ORMs & query builders

      • By ggregoire 2025-08-11 22:34 · 2 replies

        That's so weird to me, SQL is the very first language they taught me in college 20 years ago, before even learning how to write a for loop in pseudo code. Nowadays it's still the language I use the most on a daily basis.

        • By sampullman 2025-08-12 8:16

          I learned it ~15 years ago, and when I use it a lot it sticks with me pretty well. But if I go a month or two without writing raw queries I lose anything more advanced than select/update/delete/join. I think I forget it faster than other things because its syntax/semantics aren't shared with anything else I use.

        • By ramchip 2025-08-12 5:50

          It's a wide field so it depends on the specialization. I did computer engineering 15+ years ago and we never touched SQL, but I think the software engineering people did have a class on it.

      • By bapak 2025-08-12 16:27 · 1 reply

        I don't deal with SQL and my knowledge of it is limited to what I learned in high school a long time ago, but… isn't SQL like super easy? What's so difficult about it that people don't know how to use it? To me git is harder and I use that tool daily.

        • By slightwinder 2025-08-13 16:46

          It's simple if you've learned it, but a bit hard to remember in specific detail if you are not using it regularly. And every database is also a bit different on the advanced features.

      • By closeparen 2025-08-12 0:24

        You should at least know how to query your data warehouse environment to debug your services / find out if they're working!

      • By shortrounddev2 2025-08-12 13:23

        I think ORM did a lot of that too

      • By chadcmulligan 2025-08-12 1:29

        I don't think they teach SQL or relational algebra any more, or at least it's easy to get an IT degree and avoid it altogether.

      • By gavinray 2025-08-12 14:55

        Terrifying.

    • By pryelluw 2025-08-11 19:54 · 1 reply

      This was my experience prior to any of the llm tools. It’s hard to find people with all around knowledge. Plus someone good in one context is awful in another. Your hiring process should find people who are a good fit and not look for people with just certain technical skills. The basics of SQL can be learned quickly. Fit cannot be learned.

      • By jama211 2025-08-12 8:53

        Well said. Some of the best engineers I know looked up syntax whenever they needed it because there’s not much point in rote-learning everything. As long as they understand what they’re doing, that’s the main point.

        I’m honestly so sick of interviews filled with gotcha questions that if you’d happened to study the right thing you could outperform a great experienced engineer who hadn’t brushed up on a couple of specific googlable things before the interview. It’s such a bad practice.

    • By withinboredom 2025-08-11 19:14

      Same. One candidate out of 6.

      I use claude code quite liberally, but I very often tell it why I won't accept its changes; sometimes I just do it myself if it doesn't "get it".

    • By lvl155 2025-08-11 21:54 · 1 reply

      AI can also help you learn new things much faster. It’s just a tool.

      • By hollowturtle 2025-08-11 22:41 · 1 reply

        I'd say "Learn the wrong things much faster". But I'd actually argue that learning isn't a fast process, it's rather a very slow journey, takes time and dedication to master deep knowledge. You won't learn anything that will stay with llms, if they got the output correct

        • By lvl155 2025-08-12 12:29 · 2 replies

          Sorry to be harsh but that just sounds ignorant. LLMs can be a great tool to speed up your learning process, especially for devs. They can help me get over blocks that used to take me half a day of digging through docs and code.

          • By skydhash 2025-08-12 12:49

            Having a quick answer is different than internalizing some domain, especially when you may need to argue about it or propose some novel method. The former can work if you know the right question, but that's just as difficult as knowing the answer.

            And most of what is useful daily is not pure knowledge. It's adapting that knowledge to the current circumstances (aka making tradeoffs). Coding is where pure knowledge shines, and it's the easiest part. Before that comes designing a solution and fitting it to the current architecture, and that's where judgement and domain knowledge are important.

          • By hollowturtle 2025-08-12 13:19 · 1 reply

            I may sound ignorant but if you delegate your knowledge to LLMs you'll stay ignorant. Learning is a never-ending journey. You can chat about a problem with a non-deterministic stochastic compressor with stale knowledge, no problem, but don't call it education. Core domain knowledge still makes people successful, not chatting skills. Sure, if you have deep domain knowledge you can still benefit from AI filling in the tedious details; I'm not anti-AI

            • By stronglikedan 2025-08-12 17:32 · 1 reply

              > I may sound ignorant but if you delegate your knowledge to LLMs you'll stay ignorant.

              That's kind of an ironic statement given the context. AI is just a glorified search engine that makes it very easy to find relevant information on a topic, just like a search engine but faster. One must still verify the results to be true, just like a search engine. AI is a tool to help you do your work faster, not do it for you, and should be trusted as much as any other anonymous source.

              • By hollowturtle 2025-08-12 20:43

                Search engines index content; LLMs "compress" content during training and regurgitate it back, sometimes wrong, in a non-deterministic way

    • By paffdragon 2025-08-12 3:52 · 1 reply

      We also saw this about a year ago when hiring, but only a couple of them made it to the live interview, and then it was evident. Most of them were quickly filtered out based on the coding submissions. We are about to hire again soon; with the uptick in LLM usage and newer, more up-to-date models, I'm not looking forward to dealing with all of this.

      • By hopelite 2025-08-12 16:42 · 1 reply

        Maybe it’s time for you to update your perspective. You strike me as the old guy who curses all the young’ens who use the fancy electric power tools and never learn how to use a manual saw … and you don’t like it much.

        • By paffdragon 2025-08-12 17:15

          I don't know why you make up such stories for yourself. It's completely pointless and has zero value.

          Let me share something with you; maybe it helps to update your perspective.

          We reject people not because they help themselves with AI; everyone on the team uses AI in some form. Candidates are mostly rejected because they don't understand what they write and can't explain what they just dumped into the editor from their AI assistant. We don't need colleagues who don't know what they ship, can't reason about code, and won't be able to maintain it and troubleshoot issues. I can get the same level from an AI assistant without hiring. It's not old vs. young; we have plenty of young people on the team. It's about our time and effort spent on people trying to fake their skills with AI help who then eventually fail, and we wasted our time. This is the annoying part, the waste, because AI makes it easier to fake the process for longer for people without the required skills.

    • By jama211 2025-08-12 8:52

      I’ve worked for years in the past on huge, complex SQL. I wouldn’t have been able to remember exactly what that looks like in SQL without a quick search. Your interview questions are bad if they require rote-learned syntax. Great programmers exist who barely bother to remember anything they can’t just look up.

    • By trenchpilgrim 2025-08-14 4:19

      I know SQL but rarely have to use it because my career has mostly been nonrelational data (etcd, prometheus, object storage, dataframes). So for anything more than the basics I have to read the manual to refresh my memory.

    • By danielbln 2025-08-11 18:41 · 4 replies

      Now take Google away, and LSP. And the computer. Write CTEs with a pencil or bust.

      I'm exaggerating of course, and I hear what you're saying, but I'd rather hire someone who is really really good at squeezing the most out of current-day AI (read: not vibe coding slop) than someone who can do the work manually without assistance or fizz buzz on a whiteboard.

      • By dnoberon 2025-08-11 19:06 · 3 replies

        I think the point is how can you squeeze anything out of the AI without knowing the stuff at a deep enough level?

        • By kenjackson 2025-08-11 23:12 · 2 replies

          Ask most folks about the code generated by the compiler or interpreter and you’ll get blank stares. Even game devs now barely know assembly, much less efficient assembly.

          There is still a place for someone who is going to rewrite your inner loops with hand-tuned assembly, but most coding is about delivering on functional requirements. And using tools to do this, AI or not, tends to be the prudent path in many if not most cases.

          • By jamesmccann 2025-08-11 23:38

            I don't think these are comparable though. Compiler generation is deterministic and more or less provably correct. LLM code is a mile away from that.

          • By skydhash 2025-08-12 12:57 · 1 reply

            Apart from the whole argument about compilers being deterministic and LLMs not:

            You don't collaborate on compiled code. They are artifacts. But you're collaborating on source code, so whatever you write, someone else (or you in the future) will need to understand it and alter it. That's what the whole maintainability, testability,... is about. And that's why code is a liability, because it takes times for someone else to understand it. So the less you write, the better it is (there's some tradeoffs about complexity).

            • By kenjackson 2025-08-12 19:47

              You can make LLMs deterministic, but that's not a priority right now. In the same way we used to not capture dev environments and end up in situations where you couldn't rebuild a binary exactly because the OS version, the compiler version, the CRT version, etc... all changed -- of course that's a 20 year old problem now, but was a legitimate problem as recently as 2000.

              And again, we're at a point in time where we do collaborate on the source code artifacts. But maybe we won't in the future. Assuming we see continued AI progress, I can see a world where asking the AI questions about the code is better than asking 99% of developers. There will be the John Carmacks of the world who know better than the AI, but the common case is that we eventually move away from looking at code directly. But this does rely on continued progress that we may not get.

        • By jama211 2025-08-12 8:54 · 1 reply

          Being able to memorise things that are easily looked up (like syntax) doesn’t demonstrate deep knowledge. It’s a bad interview question.

          • By myaccountonhn 2025-08-12 10:45

            I mean maybe these juniors are geniuses, but I often find it very non-obvious why LLM-generated code is wrong, and it requires me to have an even deeper knowledge. Sometimes the code is correct, but overly complicated.

            One small example was a coworker that generated random numbers with AI using `dd count=30 if=/dev/urandom | tr -c "[a-z][A-Z]" | base64 | head -c20` instead of just `head -c20 /dev/urandom | base64`. I didn't actually know `dd` beyond that it's used for writing to USB sticks, so I suddenly became really unsure whether I was missing something and needed to double-check the documentation. All that to say: if you vibe-code, you really need to know what you're generating, and to keep in mind that others will need to be able to read and understand what you've written.
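For reference, the simpler pipeline from the comment runs as-is; 20 random bytes base64-encode to a predictable 28-character token:

```shell
# 20 random bytes -> base64 -> 28 characters (including '=' padding)
token=$(head -c 20 /dev/urandom | base64)
echo "$token"
```

That predictability is exactly why the shorter version is easier to review than the `dd`/`tr` chain.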

      • By instig007 2025-08-11 19:08

        > I'd rather hire someone [...] than someone who can do the work manually without assistance or fizz buzz on a whiteboard

        and the reason for you to do that would be to punish the remaining bits of competence in the name of "the current thing"? What's your strategy?

      • By kfajdsl 2025-08-12 0:33 · 1 reply

        For your examples, honestly yeah. A dev should be familiar with the basic concepts of their language and tech stack. So yes, they should be able to understand a basic snippet of code without Google, an LSP, or even a computer. They should even be able to "write CTEs with a pencil and paper". I don't expect them to get the syntax perfect, but they should know the basic tools and concepts well enough to produce something at least semantically correct. And they certainly should be able to understand the code produced by an AI tool for a take-home toy project.

        I say this as someone who would definitely be far less productive without Google, LSP, or Claude Code.

        • By jama211 2025-08-12 08:56

          I’ve written huge queries and CTEs in my career. But I haven’t done it recently. Personally, I’d need 10 minutes of Google time to refresh my memory before being able to write much SQL on paper, even with bad syntax. It doesn’t mean I’m a bad engineer because I don’t bother to memorise stuff that’s easily googleable.

      • By timeon 2025-08-11 21:12 (1 reply)

        I used to write Perl scripts with a pencil while waiting at the airport.

        • By christophilus 2025-08-11 22:04 (1 reply)

          And then later, when you couldn’t read your own handwriting, you said, “This damn thing is illegible, so it’s probably valid Perl.”

          • By yen223 2025-08-11 23:53

            Perl is just Python with bad handwriting

  • By js2 2025-08-11 15:43 (5 replies)

    > export IS_SANDBOX=1 && claude --dangerously-skip-permissions

    FYI, this can be shortened to:

      IS_SANDBOX=1 claude --dangerously-skip-permissions
    
    You don't need the export in this case, nor does it need to be two separate commands joined by &&. (It's semantically different in that the variable is set only for the single `claude` invocation, not any commands which follow. That's often what you want though.)
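    A quick way to see the difference (FOO and BAR are throwaway names):

```shell
# The prefix form sets the variable only in the environment of that
# one command; the current shell never sees it.
FOO=1 sh -c 'echo "inside: ${FOO:-unset}"'
echo "after: ${FOO:-unset}"

# export sets it in the current shell, so it stays set for every
# command that follows.
export BAR=2 && sh -c 'echo "inside: ${BAR:-unset}"'
echo "after: ${BAR:-unset}"
```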

    > I asked Claude to rename all the files and I could go do something else while it churned away, reading the files and figuring out the correct names.

    It's got infinite patience for performing tedious tasks manually and will gladly eat up all your tokens. When I see it doing something like this manually, I stop it and tell it to write a program to do the thing I want. e.g. I needed to change the shape of about 100 JSON files the other day and it wanted to go through them one-by-one. I stopped it after the third file, told it to write a script to import the old shape and write out the new shape, and 30 seconds later it was done. I also had it write me a script to... rename my stupidly named bank statements. :-)
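    The bank-statement job isn't reproducible here, but the "write a script instead" pattern might look something like this (the `stmt*.pdf` names and the zero-padding rule are invented for illustration):

```shell
# Hypothetical batch rename: pad stmt1.pdf..stmt12.pdf to two-digit
# names (stmt01.pdf) so they sort correctly.
for f in stmt*.pdf; do
  [ -e "$f" ] || continue            # glob matched nothing
  n=${f#stmt}; n=${n%.pdf}           # strip prefix and extension
  new=$(printf 'stmt%02d.pdf' "$n")  # zero-pad the number
  [ "$f" = "$new" ] || mv "$f" "$new"
done
```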

    • By jama211 2025-08-12 08:57 (1 reply)

      This. I had a 10,000-line CSS file and told it to do a find-and-replace on some colours. It was hilariously bad at this and started chewing tokens. I asked it to write a script to swap them out and execute it for me, and it was done instantly. Knowing the right questions to ask an AI is everything.
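      The script version might look something like this (the file name and colour values are invented for illustration):

```shell
# Tiny stand-in stylesheet; the real one was ~10,000 lines.
printf 'a { color: #ff6600; }\nb { color: #ff6600; }\n' > styles.css

# Swap one colour for another everywhere. -i.bak edits in place and
# keeps a backup, and works with both GNU and BSD sed.
sed -i.bak 's/#ff6600/#336699/g' styles.css
```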

      • By jofzar 2025-08-12 12:50

        I've actually noticed it do this by itself a couple of times; that's where I got the idea to do the same.

    • By Dragonai 2025-08-11 20:20

      > It's got infinite patience for performing tedious tasks manually and will gladly eat up all your tokens. When I see it doing something like this manually, I stop it and tell it to write a program to do the thing I want.

      This is so funny. Thank you for sharing :)

    • By indigodaddy 2025-08-11 16:03 (1 reply)

      Does it even work with the &&? IIRC, I've never had luck putting env vars before the && and have always had to do it the way you describe.

      • By DiabloD3 2025-08-11 16:27 (1 reply)

        It works because they exported it. `VAR=foo bar` only sets the variable in the environment passed to that command or subshell; `export VAR=foo && bar` adds it to the current shell's environment and then executes `bar`.

        `export VAR=foo && bar` is riskier because the variable stays set after the command finishes.

        • By indigodaddy 2025-08-11 16:28

          Ah, that's what I had done wrong, thank you! And agree I wouldn't want to just one-off export it and have it be set, better to not export it for one-liner one-offs for sure

    • By kiitos 2025-08-11 20:08 (2 replies)

      You can make it work more generally via `env`:

          env IS_SANDBOX=1 claude --dangerously-skip-permissions
      
      Not all shells support `FOO=bar` prefixes (in particular, fish does not), but the above works everywhere.

      • By rirze 2025-08-11 20:26

        This might have been the case for fish at some point, but not anymore; it works in the current version. I myself have used the popular syntax without `env` in my aliases.

    • By ActionHank 2025-08-11 17:45

      Can shorten further to `rm -rf /`
