Things I've Done with AI

2026-03-09 19:24 · sjer.red

My thoughts on AI, and what it has helped me achieve

View comments on Hacker News

I started programming in middle school. The first thing I remember writing is HTML for my neopets homepage. That morphed into writing static sites for Minecraft servers, and later on Java plugins for Minecraft.

Programming is so fun. I took it on as a hobby and it became an obvious career path, but I didn’t realize how well engineers were paid until my junior year in college. I had received an internship at AWS and was astonished. That led to a return offer and eventually to where I am today, with about seven years of professional experience and another seven of self-teaching/programming as a hobby.

I felt I was a year or two ahead of the application-focused classes I took. The more theoretical courses helped round out my knowledge and were the foundation I needed to work at tech companies.

In college I programmed all of the time. I found problems to solve. I found libraries and languages to try. I always wanted to learn more — to make sure my code was well-architected, maintainable, “clean”. This led to me reading programming books for fun. Clean Code taught me how to write better Java. You Don’t Know JavaScript taught me that JavaScript is actually quite a good language. Category Theory for Programmers taught me that it’s really hard to find a job writing Haskell.

I tend to qualify my statements before talking about AI. I write this to make it clear that I have a passion for programming. The money is quite nice but it’s incidental. I would be doing this if I were rich. I honestly cannot imagine my life without programming — it’s so satisfying to learn, solve problems, and build something others can use.

I was initially quite hesitant about applying AI to programming. I avoided GitHub Copilot when it came out. I thought Cursor was overhyped. I didn’t understand why someone would use Claude Code (a CLI/TUI interface) over an IDE.

I have spent years caring about architecture, type systems, maintainability. I am quite good at paying attention to every little detail. I wanted full control over my code. How could I have control if AI is writing everything? How could I be sure it wasn’t writing my project in a substandard way?

Effectively using AI required a fundamental shift in how I thought about my projects. Why did I care about types? Why do we have design patterns? Why does code need to be maintainable or “well written”? For hobby projects, it can be a source of pleasure to write and see beautiful code.

That’s not an acceptable reason for projects I’m paid to work on, though. At work, all that matters is that value is delivered to the business. Code needs to be maintainable so that new requirements can be met. Code follows design patterns, when appropriate, because they are known solutions to common problems, and thus are easy to talk about with others. Code has type systems and static analysis so that programmers make fewer mistakes.

Speaking in the context of solving a problem: does AI need to write beautiful code? No. It needs to write code that works. The code doesn’t need to be maintainable in the traditional sense. If you have sufficient tests, you can throw some LLMs at a pile of “bad” code and have them figure it out. Type systems and static analysis continue to be useful to LLMs, perhaps even more so than to humans.

This is all to say: if you care about solving a problem more than gaining satisfaction, LLMs fit the bill. I’ve discovered that, largely, I enjoy solving problems more than I care about writing code. I haven’t written code, for work or personal projects, since October 2025. I’ve only written prompts and reviewed (a lot of) LLM output. This has led me to build a massive number of projects.

Projects

Personal

I’ve done all of these in the last 9 months with the help of Cursor and Claude Code.

Work

  • Writing complex design docs to migrate projects or add features
  • Writing bespoke tools for investigations, data analysis, etc.
  • Automating common tasks around operations, root causing, etc.
  • Quickly adding new features or fixing bugs
    • Small requests from PMs or users are now essentially free to implement. The only work left is reviewing code and manual testing.

Final Thoughts

I’ve accomplished so much with AI. I still am figuring out how to use it effectively. There is so much power in these tools — there has never been a better time to be a programmer who just wants to build things.

At the same time, it is a bit exhausting. Work in particular has been difficult as we slowly embrace these new tools. There are many problems to solve around testing, developer experience, and velocity.

For personal projects, testing and documentation have become the bottleneck. I have to make sure that the LLM has produced the correct thing, and that the documentation it has written is truthful.

As an industry, I think we have to invest significant effort into better tools for testing and generated documentation.

I’ll continue to use these tools with the hope that they don’t make me obsolete too quickly.


Read the original article

Comments

  • By brotchie 2026-03-09 21:23 · 8 replies

    Not enough time, too many projects. Useful projects I did over the weekend with Opus 4.6 and GPT 5.4 (just casually chatting with it).

    2025 Taxes

    Dumped all the PDFs of my tax forms into a single folder and asked Claude to rename them nicely. Asked it to use Gemini 2.5 Flash to extract all tax-relevant details from all statements / tax forms. Had it put together a web UI showing all income, deductions, etc. for the year. Had it estimate my 2025 tax refund / underpayment.

    Result was amazing. I now actually fully understand my tax position. It broke down all the progressive tax brackets and added notes for all the extra federal and state taxes (e.g. Medicare, CA Mental Health tax, etc.).
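    (That bracket breakdown is also easy to sanity-check with a short deterministic script instead of trusting the model's arithmetic. A minimal sketch, where the bracket figures are illustrative rather than real 2025 numbers:)

```python
def progressive_tax(income, brackets):
    """Tax owed under progressive brackets.

    brackets: ascending list of (upper_bound, rate) pairs;
    the last upper bound may be float("inf").
    """
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        # Only the slice of income inside this bracket is taxed at this rate.
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# Illustrative brackets only -- not actual IRS figures.
BRACKETS = [(10_000, 0.10), (40_000, 0.12), (float("inf"), 0.22)]
print(progressive_tax(55_000, BRACKETS))  # 7900.0
```

    (A script like this is the easy half; the hard half, which the LLM did, is extracting the right numbers out of the forms.)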

    Finally had Claude prepare all of my docs for upload to my accountant: FinCEN reporting, summary of all docs, etc.

    Desk Fabrication

    Planning on having a furniture maker fabricate a custom solid-walnut top for a custom office standing desk. Wanted to create a STEP file of the exact cuts / bevels / countersinks / etc. to help with fabrication.

    Worked with Codex to plan out and then build an interactive in-browser 3D CAD experience. I can ask Codex to add some component (i.e. a grommet) and it will generate a parameterized B-rep geometry for that feature and then allow me to control the parameters live in the web UI.

    Codex found Open CASCADE Technology (OCCT) B-rep modeling library, which has a web assembly compiled version, and integrated it.

    Now have a WebGL view of the desk, can add various components, change their parameters, and see the impact live in 3D.

    • By cj 2026-03-09 21:33 · 3 replies

      I love the tax use case.

      What scares me though is how I've (still) seen ChatGPT make up numbers in some specific scenarios.

      I have a ChatGPT project with all of my bloodwork and a bunch of medical info from the past 10 years uploaded. I think it's more context than ChatGPT can handle at once. When I ask it basic things like "Compare how my lipids have trended over the past 2 years" it will sometimes make up numbers for tests, or it will mix up the dates on certain data points.

      It's usually very small errors that I don't notice until I really study what it's telling me.

      And also the opposite problem: A couple days ago I thought I saw an error (when really ChatGPT was right). So I said "No, that number is wrong, find the error" and instead of pushing back and telling me the number was right, it admitted to the error (there was no error) and made up a reason why it was wrong.

      Hallucinations have gotten way better compared to a couple years ago, but at least ChatGPT seems to still break down especially when it's overloaded with a ton of context, in my experience.

      • By shepherdjerred 2026-03-09 21:44 · 3 replies

        I've gotten better results by telling it "write a Python program to calculate X"

        • By dmd 2026-03-09 22:06 · 1 reply

          Yeah, in my user prompt I have "Whenever you are asked to perform any operation which could be done deterministically by a program, you should write a program to do it that way and feed it the data, rather than thinking through the problem on your own." It's worked wonders.

        • By brotchie 2026-03-09 22:59

          For the tax thing. I had Claude write a CLI and a prompt for Gemini Flash 2.5 to do the structured extraction: i.e. .pdf -> JSON. The JSON schema was pretty flexible, and open to interpretation by Gemini, so it didn't produce 100% consistent JSON structures.

          To then "aggregate" all of the json outputs, I had Claude look at the json outputs, and then iterate on a Python tool to programmatically do it. I saw it iterating a few times on this: write the most naive Python tool, run it, throws exception, rinse and repeat, until it was able to parse all the json files sensibly.
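          (The defensive aggregation step might look something like this; the field names here are hypothetical stand-ins for the kind of inconsistent shapes a flexible extraction schema produces:)

```python
import json

def extract_amount(record):
    """Pull a dollar amount out of whichever shape the extractor produced.
    The field names are hypothetical examples of inconsistent output."""
    for key in ("amount", "total", "value"):
        if key in record:
            val = record[key]
            # Sometimes a bare number, sometimes {"value": ..., "currency": ...}
            if isinstance(val, dict):
                val = val.get("value")
            if val is not None:
                return float(val)
    raise ValueError(f"no recognizable amount field: {record}")

def aggregate(json_blobs):
    """Sum amounts across a pile of not-quite-consistent JSON documents."""
    return sum(extract_amount(json.loads(blob)) for blob in json_blobs)
```

          (The rinse-and-repeat loop is essentially adding branches like these until no file raises.)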

        • By cj 2026-03-09 21:59 · 1 reply

          Good call. I’ve also had better results pre-processing PDFs, extracting data into structured format, and then running prompts against that.

          Which should pair well with the “write a script” tactic.

          • By tavavex 2026-03-09 22:10

            Yeah, asking for a tool to do a thing is almost always better than asking for the thing directly, I find. LLMs are kind of not there in terms of always being correct with large batches of data. And when you ask for a script, you can actually verify what's going on in there, without taking leaps of faith.

      • By arjie 2026-03-09 21:58

        In my case, what I like to do is extract data into machine-readable format and then once the data is appropriately modeled, further actions can use programmatic means to analyze. As an example, I also used Claude Code on my taxes:

        1. I keep all my accounts in accounting software (originally Wave, then beancount)

        2. Because the machinery is all programmatically queryable, the data is not in token-space, only the schema and logic

        I then use tax software to prep my professional and personal returns. The LLM acts as a validator, and ensures I've done my accounts right. I have `jmap` pull my mail via IMAP, my Mercury account via a read-only transactions-only token and then I let it compare against my beancount records to make sure I've accounted for things correctly.
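        (The comparison itself is the part worth keeping out of token-space entirely; a hypothetical sketch that reduces both sides to (date, amount) pairs, a simplification of beancount's actual data model:)

```python
from collections import Counter

def reconcile(bank_txns, ledger_txns):
    """Treat each side as a multiset of (date, amount) pairs and
    report what is missing from the other side."""
    bank, ledger = Counter(bank_txns), Counter(ledger_txns)
    return (
        list((bank - ledger).elements()),   # in bank, missing from ledger
        list((ledger - bank).elements()),   # in ledger, missing from bank
    )

bank = [("2026-01-05", -42.00), ("2026-01-09", 1500.00)]
ledger = [("2026-01-05", -42.00)]
print(reconcile(bank, ledger))  # ([('2026-01-09', 1500.0)], [])
```

        (With the diff computed deterministically, the LLM only has to explain the discrepancies, not find them.)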

        For the most part, you want it to be handling very little arithmetic in token-space though the SOTA models can do it pretty flawlessly. I did notice that they would occasionally make arithmetic errors in numerical comparison, but when using them as an assistant you're not using them directly but as a hypothesis generator and a checker tool and if you ask it to write out the reasoning it's pretty damned good.

        For me Opus 4.6 in Claude Code was remarkable for this use-case. These days, I just run `,cc accounts` and then look at the newly added accounts in fava and compare with Mercury. This is one of those tedious-to-enter trivial-to-verify use-cases that they excel at.

        To be honest, I was fine using Wave, but without machine-access it's software that's dead to me.

      • By ElFitz 2026-03-09 22:24

        I’d say for these use cases it’s better to make it build the tools that do the thing than to make it do the thing itself.

        And it usually takes just as long.

    • By thijsvandien 2026-03-09 21:27 · 1 reply

      I don't know, but I would never upload such sensitive information to a service like that (local models FTW!) or trust the numbers.

      • By basch 2026-03-09 21:43 · 4 replies

        Which part is sensitive? Social is public, income is private but what is someone going to do with it?

        • By jumpman500 2026-03-09 22:30

          It's not good in some job negotiations if someone has a very clear picture of what your current net worth and income is. Also in some purchases companies could price discriminate more effectively against you.

        • By AlecSchueler 2026-03-10 14:29

          That's dream info for targeted advertising and political manipulation.

        • By thijsvandien 2026-03-09 21:57 · 1 reply

          Now that's a question I'd feel more confident having answered by an LLM. Personally, I'm tired of arguing with "nothing to hide", which (no offense) is just terribly naive these days.

          • By whackernews 2026-03-10 00:57

            I find it really weird too, like, haven’t we done this? Also struggle to understand the motivation for arguing from this direction. Do people forget it’s the normal, default position NOT to be spied on?

        • By whackernews 2026-03-10 01:00

          Where’s the line for you? Would you upload a picture of you sat on the toilet for example?

    • By generallyjosh 2026-03-10 23:47

      Did it make any mistakes on your taxes?

      Personally, I know coding pretty well. So when I'm using it for coding, I can spot most of its mistakes / misunderstandings

      I would not trust using it on a complex domain I'm not super familiar with, like doing taxes

      A mistake here is pretty high cost (getting audited, and/or having to pay a bunch in penalties)

    • By mandeepj 2026-03-09 21:57

      > Result was amazing. I now actually fully understand the tax position.

      You couldn’t do that with TurboTax or block’s tax file? You don’t have to submit or pay.

    • By slopinthebag 2026-03-09 21:57

      > had Claude prepare all of my docs for upload to my accountant: FinCEN reporting, summary of all docs, etc.

      I imagine your accountant had the same reaction I do when an amateur shows me their vibe codebase.

    • By MikeNotThePope 2026-03-09 21:56

      Be careful with taxes. Hallucinations will cost you.

    • By g947o 2026-03-10 11:07

      We usually call that FAAFO

    • By whattheheckheck 2026-03-09 21:36

      I had AI hallucinate that you can use different container images at runtime for EMR Serverless. That was incorrect; it's only possible at application creation time.

      Hope you don't get audited

  • By semiquaver 2026-03-09 21:04 · 5 replies

    I feel pretty productive myself with AI but this list isn’t beating the rap that AI boosters mostly use AI to do useless stuff focused on pretending to improve productivity or projects that make it easier to use AI.

    • By stavros 2026-03-09 21:10 · 3 replies

      Here's what I made:

      * https://www.stavros.io/posts/i-made-a-voice-note-taker/ - A voice note recorder.

      * https://github.com/skorokithakis/stavrobot - My secure AI personal assistant that's made my life admin massively easier.

      * https://github.com/skorokithakis/macropad - A macropad.

      * https://github.com/skorokithakis/sleight-of-hand - A clock that ticks seconds irregularly but is accurate for minutes.

      * https://pine.town - A whimsical little massively multiplayer drawing town.

      * https://encyclopedai.stavros.io - A fictional encyclopedia.

      * https://justone.stavros.io - A web implementation of the board game Just One.

      * https://www.themakery.cc - The website and newsletter for my maker community.

      * https://theboard.stavros.io - A feature board that implements itself.

      * https://github.com/skorokithakis/dracula - A blood test viewer.

      * https://github.com/skorokithakis/support-email-bot - An email bot to answer common support queries for my users.

      Maybe some of these will beat the rap.

      • By risyachka 2026-03-09 21:43 · 3 replies

        It does not matter how much stuff is built. What matters is what comes out of it.

        And with AI, 99.9% of the results are abandonware. Just piles of code no one will ever touch again.

        Which proves the point of no productivity gains. It's just cheap dopamine hits.

        • By danso 2026-03-09 21:54 · 2 replies

          The user you're responding to lists a "blood test viewer" [0], which looks to be a tool that turns his blood test PDFs into structured and analyzed data. You're saying that unless he continuously revises/upgrades the code, it's still "abandonware" even if it meets his needs for the near future?

          [0] https://github.com/skorokithakis/dracula

          • By sarchertech 2026-03-09 22:10 · 1 reply

            Bit rot is real. The dependencies listed here include calls into AI APIs that will stop working with time. So yes, if no one keeps this up to date, it will rot into uselessness, likely very quickly.

            That's not even mentioning that this tool doesn't do much beyond wrap a call to Claude. And it's using Claude to display blood test data to the end user. This is not something I'd trust an LLM not to mess up. You'd really want to double-check every single result.

            • By dsf2 2026-03-09 23:23

              Also humans are not bots.

              We hate having to feel like we have to double check everything. We have an asymmetric relationship with gains and losses etc.

              Is it me, or is this stuff flying over people's heads?

          • By slopinthebag 2026-03-09 23:22 · 1 reply

            Just saying, you can paste the sample report into ChatGPT and it does the same thing, and even creates interactive graphs for you. I'm not sure how useful something is if a chatbot can do it, with the side benefit of being able to ask follow-up questions.

            • By simmerup 2026-03-09 23:46

              i guess the custom UI makes you believe you can trust the output, as if there’s any thought going into it rather than just an LLM hallucinating for you

        • By tempaccount5050 2026-03-09 21:49 · 2 replies

          Missing the point. I no longer need to buy or rely on someone else for software I want to use. A lot of things I want to do ARE one offs. I can write software and throw it away when I'm done.

          • By incr_me 2026-03-09 22:10 · 1 reply

            I know this sounds sarcastic but I really mean it: For years everyone has been monastically extolling some variation of "the best code is deleted code". Now, we have a machine that spits out infinite code that we can infinitely delete. It's a blessing that we can have shitty code generated that exposes at light speed how shitty our ideas are and have always been.

            • By dsf2 2026-03-09 23:24

              A nicer framing is that original ideas, and original thinking in general, are very hard and don't come around very often.

              Steve Jobs once said that the belief that an idea is 90% of the work is a disease. He is and was absolutely right.

          • By sarchertech 2026-03-09 22:14

            You still need to spend plenty of time verifying they work though unless it’s something where that truly doesn’t matter.

        • By grim_io 2026-03-09 21:49

          Abandonware is what the customer wants.

          Constant enshittification and UI redesigns are driven by the provider to justify monthly extortion.

      • By profsummergig 2026-03-09 21:17 · 1 reply

        > "A clock that ticks seconds irregularly but is accurate for minutes."

        Sounds like something that could be tried as a fix for a kind of OCD (obsessive seconds counting).

        • By stavros 2026-03-09 21:22 · 3 replies

          Maybe, although it's actually giving me OCD, I think. It's really hard to tune out because of the irregular ticking. I implemented a regular mode to combat this, defeating the purpose somewhat.

          • By observationist 2026-03-09 21:45

            Unpredictable things catch our attention - it's the exceptions that are important to survival, and our brains evolved to cope with the stimuli that this experiment messes with.

            Something like this would be anxiety-inducing for most people, I bet. That'd be an excellent experiment: track heart rate, EEG, and performance on a range of cognitive tasks with 2-minute breaks between tasks, with one group exposed to the irregular ticking, another to regular ticking, another to silence, and one last one to pleasant white noise.

          • By bencyoung 2026-03-10 09:44

            Sounds like the Chronophage clock in Cambridge: https://en.wikipedia.org/wiki/Corpus_Clock. It's purely mechanical but has odd pauses in the ticks, etc.

          • By pinkmuffinere 2026-03-09 21:44 · 1 reply

            what was the motivation for originally making it with irregular ticking?

            • By stavros 2026-03-09 21:49 · 1 reply

              It sounded fun (and it is)! My favorite mode is one that ticks each second imperceptibly fast, and then stalls for a second in one of the ticks (so that it lasts two).

              It's just the right amount of "did that clock just skip a beat? Nah must just be my imagination".

      • By saulpw 2026-03-09 21:21 · 3 replies

        Some of them definitely do not. Like a fictional encyclopedia? What is the point of that? That's like "an alphabetical novel".

        And even for the ones that might "beat the rap", I don't understand from your descriptions why they are interesting or unique. A voice note recorder? Cool. There are already hundreds if not thousands of those, why did you need to make your own in the first place? I'm not saying that yours isn't special, I'm just saying that it doesn't help to post the blandest description possible if you're trying to impress people with the utility of your utility.

        • By senko 2026-03-09 22:20 · 4 replies

          So not only does he have to show what he built with AI, what he built with AI has to be interesting and unique to you? Why? He's not selling it to you.

          Seems like the bar is now it has to be a mass market product. On another post someone else commented a SaaS doesn't count if it doesn't earn sustainable revenue.

          I guess OpenClaw also doesn't count because we don't know how much Peter got from OpenAI.

          This is an ideological flame war, not a rational discussion. There's no convincing anyone.

          • By timacles 2026-03-10 15:35

            It’s kind of like the beginning sequence of Back to the Future, when it shows all the random inventions at Doc's house.

            Yeah, they are interesting, and I guess they do something, but are any of them actually delivering value? That gets into the argument of what value is and to whom. But as for AI's role in generating productivity for society, it's pretty disputable whether every person being able to build their own train set that turns on the toaster and makes coffee is going to move us forward as a species like, say, the internet did.

            That's really the only argument: is the use of LLMs worth the trillions of dollars and the selling out of humanity's future? Not whether it's fun to build quirky apps really fast.

          • By munksbeer 2026-03-10 01:48

            > Seems like the bar is now it has to be a mass market product.

            The bar for this will just keep moving. Some people are heavily invested in the anti-stance, so human nature being what it is, you've little hope of changing their minds anyway.

          • By saulpw 2026-03-09 23:23

            I'm actually becoming an AI convert myself. If there is ideology here, it's not about AI, but about keeping trash off the streets.

            For example, I checked out their "Fictional Encyclopedia". It's an absolutely terrible project, much worse than useless, because it claims to be an "encyclopedia" right in the name (the tagline is "Everything about everything"), yet it's engineered to just completely make things up, and nowhere on the page does it indicate this! I looked up my own niche open-source project, and was prepared to be at least somewhat impressed that it pulled together facts on the fly into an encyclopedic form. For the first couple of paragraphs that seemed like it might be the case, then it veered into complete fantasy and just kept going.

            Like what is the point of this? I can already ask a chatbot the same question and at least then I have explicit indicators that it might be hallucinating. But this page deliberately confuses truth and reality for absolutely zero purpose. It's a waste of brain cells, for both the creator and the consumer, with no redeeming value. It's neither interesting, nor different, nor valuable. AND it's burning tokens to boot!

            I mean, come on, the bar is not that high. Some of stavros' projects may even be over it. But the first projects I checked were sub-basement, and I am not interested in searching through mounds of trash for what might be a quarter dollar. I'm actually kind of disappointed that stavros didn't have (or apply) the sense or taste to whittle down that list of 11 (!) projects to some 3 that show off the value of their work. Which I'm starting to understand is everyone's issue with AI brain rot; it seems to just encourage "here's everything, I dunno, you figure it out" which is maddening and deserves the pushback it gets.

          • By Grimblewald 2026-03-10 11:45

            No, the bar is accurate, descriptive descriptions. You know how AI writing is typically hollow and devoid of meaning? Loads of grammatically fine words that don't actually say anything? Well, these repos are the GitHub version of that. Lots of words, but so starved of meaning that I shut off mentally trying to read half of them. Some descriptions are outright lies.

        • By stavros 2026-03-09 21:23 · 2 replies

          Sounds like the goalposts are moving from "not useless stuff focused on pretending to improve productivity or projects that make it easier to use AI" to "extremely useful stuff".

          • By saulpw 2026-03-09 21:40

            One issue is that I interpreted the parent as OR, not AND. "useless stuff OR productivity tools OR AI tools".

            Moreover though, I'm not even saying you shouldn't do those things. I'm actually playing around with AI quite a bit, and certainly have created my share of useless/productivity tools. But it's not a flex to show off your own Flappy Birds or OpenNanoClaw clone, even if they are written in COBOL or MUMPS.

            And they definitely do not have to be "extremely useful". But they should answer the question: what problem does it solve?

          • By jjee 2026-03-09 21:28 · 1 reply

            Fair. But finally we are seeing what LLM proponents are putting forward.

            And it’s exactly what I expected: lines of code. Cute. But… so what? This is not good for the AI hype, nor for continued support for future investment.

            On the other hand all this stuff is going to drive continual innovation. The more tokens generated the more model producers invest. And we might eventually get to a place of local models.

            • By stavros 2026-03-09 21:40 · 2 replies

              I swear, I'm going to stop commenting on this site, the amount of shitting on people who use LLMs (ie everyone) is just impossible to deal with.

              • By tuesdaynight 2026-03-09 23:40 · 1 reply

                Don't do that, just avoid answering the "non-believers" or whatever they are called. Your comments are insightful for me (and for a lot of other people, I'm sure). You don't need to prove that they are useful, just comment about your experience and ignore them. It's like arguing about religion trying to make the other person to flip their beliefs (a waste of time for everyone involved)

                • By stavros 2026-03-09 23:54

                  I guess you're right, I really need to get better at ignoring some people. It just really got to me today because someone else looked at one of my projects for two seconds and decided to tell me off for it being "insecure" and "slop", and it kind of ruined my day.

                  Thanks for the support!

              • By slopinthebag 2026-03-09 22:02 · 1 reply

                I have the opposite experience: the amount of AI boosters deriding the less enthusiastic, gleefully exclaiming how someone will be "left behind" if they don't immediately adopt the latest hype cycle, or sharing AI slop and either embellishing or outright lying about its capabilities is making me want to log off forever. "Handwritten code? Don't you only care about providing maximum shareholder value?" No.

                • By munksbeer 2026-03-10 01:52 · 1 reply

                  No-one (apart from some CEOs) cares that you don't use AI, I promise you.

                  The thing that triggers people is comments like yours still, even at this point, claiming that AI just produces slop and everyone is just lying.

                  It is absurd, and people are obviously going to react to it.

                  • By slopinthebag 2026-03-10 03:02

                    When did I claim AI just produces slop? When did I claim everyone was lying?

                    If by "react" you mean make stuff up, sure.

        • By Grimblewald 2026-03-10 11:43

          don't waste your time, they're a slop slinger who won't take any feedback that could feel like a hit to the ego. I've wasted too much time on them already, cut your losses and move on. Their 'safer' personal bot for example is anything but, but they won't listen to feedback.

    • By lukan 2026-03-09 21:53 · 1 reply

      "Or projects that make it easier to use AI"

      I get the sentiment, but this is natural with a groundbreaking new technology. We are still in the process of figuring out how to best apply generative LLMs in a productive way. Lots of people tinker and share their results. Most is surely hype and will get thrown away and forgotten soon, but some is solid. And I am glad for it: I did not take part in that tinkering, but I now enjoy the results, as the agents have become really good.

      • By harry8 2026-03-09 22:13 · 1 reply

        > "Or projects that make it easier to use AI"

        This is exactly the same reason why the appropriate question to ask about Haskell is "where are the open source projects that are useful for something that is not programming?"

        The answer for Haskell after 3 decades is very, very little: Pandoc, git-annex, xmonad. Might be something else since I last did the exercise, but for Haskell the answer is not much. Then we examine why the kids (us kids of all ages) can't or don't write Haskell programs.

        The answer for LLM coding may be very different. But the question "where is the software that does something that solves a problem outside its own orbit" is crucial. (You have a problem. You want to use foo to solve it, now you have two problems but you can use foo to solve a part of the second one!!)

        The price of getting code written just went down. Where are the site/business launches? Apps? New ideas being built? Specifically. With links. Not general, hand-wavy "these are the sorts of things that ..." because even if it's superb analysis, without some data that can be checked it's indistinguishable from hype.

        Whatever data we get will be very informative.

        • By lukan 2026-03-09 22:23

          For instance, there is an abandoned open source project I would have liked to see revived, https://www.wickeditor.com/ (an attempt at recreating Flash with web technology). Current official state in the repo: outdated dependencies, build process, etc.

          I looked into doing it manually, but gave up. Way too much dirty work, and I had no energy for that.

          Then I discovered that claude CLI got good - and told it to do it (with some handholding).

          And it did it. Build process modernized. No more outdated dependencies. Then I added some features I missed in the original wick editor. Again, it did it and it works.

          A working editor that was abandoned and missed features - now working again with the missing features. With minimal work done from my side (but I did put in work before to understand the source).

          I call this a very useful result. There are lots of abandoned, half-working projects out there. Lots of value to be recovered. Unlike Haskell, agents are not just busy building agents, but real tools. Currently I have the agents refactoring an old codebase of mine. Lots of tech debt. Lots of hacks. Bad documentation. There are features I wanted to implement for ages but never did, as I did not want to touch that ugly code again. But Claude did it. It is almost scary what they are already capable of.

    • By shepherdjerred 2026-03-09 21:21 | 1 reply

      That's a fair criticism of my personal projects. Maybe 3-4 of those could potentially be useful to someone besides me.

      At work, I would say I've done plenty of "useful" things with AI, but that's hard to show off given that I work on an internal application.

      • By peteforde 2026-03-09 22:41

        I don't think you should feel like your personal projects need to be vetted by an armchair peanut gallery. It's actually kind of offensive how so many people show up in a thread like this and demand that what sparked joy for you be formally subjected to a gauntlet of moving goalpost validation markers.

        Quite simply, I don't think that they are asking or arguing in good faith.

    • By SunshineTheCat 2026-03-09 21:14

      I've actually felt the same way about some (not all) of the "productivity" hacks I've seen people post online with their OpenClaw setups.

      I chuckle when I see some of them because you could achieve the same (or often faster) result by jotting a note onto a notecard and sticking it in your pocket.

      Most of the other automations running don't really seem to serve any real purpose at all.

      But hey, if it's fun, have at it.

    • By gopher_space 2026-03-09 21:49

      I mean I’m using it to deconstruct and reinvent my development process from the ground up, but it’s so easy to do this now and so customized for my specific needs that the idea of posting about it never crossed my mind.

  • By bronlund 2026-03-09 21:33 | 2 replies

    If you are a parent, you know that feeling when your child is struggling with something and gets frustrated, but you keep silent and don't help because you know that the child has to figure this out by themselves. That's the same feeling I get when I hear all those doom and gloom perspectives on how AI is ruining coding :D

    • By ssrshh 2026-03-09 23:06 | 2 replies

      Not condescending at all

      • By archagon 2026-03-10 0:10 | 1 reply

        For some reason, AI boosters can't help but condescend. I've never seen this with the rollout of any other technology. It's like this stuff immediately becomes a core part of their personality.

        • By bronlund 2026-03-10 0:57 | 2 replies

          I agree that this new thing is polarizing, but as with the rollout of any groundbreaking technology, the ones looking backwards are just going to be left behind.

          • By archagon 2026-03-10 1:00 | 1 reply

            As I see it, the only reason to make such a bold claim (as opposed to just doing what you do and seeing how things shake out) is insecurity. Especially if you’re condescending about it.

            • By bronlund 2026-03-10 1:09

              This discussion is not new. Books were criticized for being addictive and antisocial, TV for rotting our brains. And even if there is some truth to these claims, I do appreciate both.

              If you want to be passive-aggressive without AI, the more tokens there are for the rest of us ;)

          • By slopinthebag 2026-03-10 3:16 | 2 replies

            I largely agree with you, but I still can't stand the "you're gonna be left behind!!" framing that is really common among people who are enthusiastic about AI. What does it even mean to "be left behind" in this instance? It's just a vague emotional expression.

            These AI tools are not hard to learn, in fact they're super easy when you have some experience programming, so the only people who are going to be left behind are the ones who simply refuse to use the tools out of principle. And why would they care about being "left behind"? They're making a conscious choice to not use the tools. They want to be left behind!

            And not everyone who is skeptical is skeptical out of principle; some just don't see the value yet, or are slowly and cautiously adopting it into their workflow. If AI-powered coding ends up being even half as good as promised, so good that denying the evidence is impossible, they can just start using it and catch right up. So who exactly is "being left behind" here? It's complete nonsense while simultaneously being extremely condescending, and I get triggered every time I read the phrase.

            I don't mean anything against you personally with my ranting; it's more a general observation. Perhaps you and some others do mean it as a genuine bit of advice, like "hey, you should learn these tools or else you might struggle to find work in the future", but the sense I get most of the time is of people who are gleeful that the non-believers are soon to be homeless or whatever.

            • By bronlund 2026-03-10 11:39 | 1 reply

              Yeah, I could have formulated that in another way - 'missing out' would be a better term. I do not mean to be condescending in the sense that I look down on people who have other preferences than me. Take books as an example. I do not believe that people who can't read are of any lesser value than me, and whether you can read or not is a poor indication of how intelligent you are - or how happy you are, for that matter. Sure, information is important, but there are a lot of different types of information and a lot of different ways of acquiring it - so I do not believe that my way is any better than someone else's. That being said, being able to read and appreciate books myself, I do believe that people who can't are missing out.

              We know books have been used for both good and evil, but I still think books are a wonderful invention. Same with television or the internet: the quality of the content on there doesn't really take anything away from the fact that the technology in itself is absolutely amazing. In hindsight - it has been a minute since Gutenberg - how society has adopted the written word does have real implications for the people living in it. If you can't read today, you will struggle. Not because there is anything wrong with you, but because the system more or less takes it for granted that you can.

              And it is going to be the same with AI, but even more so. The ones who learn to master it will dominate those who don't. It will create a new form of class divide, where access to tokens, and knowing how to use them, will be the main drivers. AI is still in its early stages, and we see that not everything is alright with it - take for instance the economics around it, or the environmental impact it has. But still, I do believe that it is an amazing invention, and that if you do not embrace it, you are missing out.

              • By slopinthebag 2026-03-10 16:26 | 1 reply

                If someone said "Yeah, I haven't read in 5 years", would you find it reasonable to tell them they're going to be left behind? If someone did that to me in person, I'd consider whacking them in the face.

                Maybe don't say it online if you wouldn't say it offline.

                • By bronlund 2026-03-10 17:31 | 1 reply

                  No, but if you claim that reading will be the end of humankind, I will have no problem leaving you behind.

            • By adampunk 2026-03-10 14:40 | 1 reply

              I think this approach means well, but it doesn’t connect with the reality of the times. You’ve got people repeatedly insisting that times have shifted and folks are going to be left behind because that’s what’s happening. October 2025 marked a turning point in all of software; that sounds grandiose, but it’s true. The longer we pretend that it didn’t happen, the harder it gets to adjust.

              I think people are talking like this because we have not lived through a genuine computing revolution like this since probably the introduction of the microcomputer. It’s been more than 40 years.

              I get that people are mad about this. That’s real obvious when you comment in any way about the use of AI. You get told that you’re a robot, you get told that you’re not a real engineer, you get told that you’re insecure, you get told all kinds of things. So it’s super clear that people are upset, because they’re being fucking childish about it. Even in a post like this one, where the author tries hard to be pretty nice, we see the same sneering comments about training your own replacement and shit like that. It’s not subtle.

              Where I get off the train is concluding that, because they’re upset, they don’t need to be told what’s happening. All of computing is already changing. It’s already happening. It’s like if the sun winked out right now: we would discover it in eight minutes, but the event has already happened. We are merely outside the cone of visibility. This shit is all happening right now. It is all real. I think it does a disservice to people to pretend as though it’s not.

              • By slopinthebag 2026-03-10 16:26 | 3 replies

                No disrespect intended but I think you're in a bubble. Things will change, sure, but not to the same extent as the invention of the Internet for example.

                • By bronlund 2026-03-10 17:42

                  I disagree. This is like the invention of the steam engine, which may be the most important factor in kickstarting the industrial revolution. I am pretty sure none of the guys who were there had any clue as to how this was going to change the world, but I suspect at least some knew: this is something else.

                  People are bitching now about how AI has ruined coding, not fully grasping that for most people, there will be no code, no applications, no operating systems. AI will pretend to be all of that, and do it way better. A six-year-old will be able to "out-code" all of us.

                  This is half a year ago: https://www.youtube.com/watch?v=dGiqrsv530Y

                • By archagon 2026-03-10 17:33 | 1 reply

                  Comments like the one you’re replying to give me a disturbing feeling — like AI is speaking through the mouths of its users, Pluribus style.

                  • By adampunk 2026-03-10 18:06

                    What am I supposed to do with this, man?

                    Am I supposed to talk to you like this? Should I do some psychoanalysis here? If you wanna say I’m pantomimed machinery, then I think we may need to have a discussion.

                    Because here’s what it looks like to me: I think there’s a lot of people who arguably had a pretty good handle on how their corner of computing worked. They can understand a pretty deep dive into the stack they use, and where they have to deposit something into the intellectual hinterlands, it can safely be abstracted away on dependable, engineered machines or standards. That is no mean feat; lots of people cannot say that. The fact that someone who does say it doesn’t fully understand paging or floating point arithmetic is not a sign that something is wrong, but rather that we have succeeded in big shared engineering problems. Cool.

                    Some new shit is afoot. We are entering into a new, turbulent, uncertain era of computing. A lot of people who previously had a pretty confident grasp of both the core and the frontier of their work now do not understand what is driving the frontier. They have made the fact that they do not understand this everyone else’s problem. Rather than admit that they do not understand an area they used to understand, we are subjected to an incessant, infantile progression through what I hope are stages of grief. Because at least then it might come to an end.

                    Everyone has an explanation for why this is all gonna collapse tomorrow and why they don’t need to learn about it. Everyone has a smart remark about the use of AI for some very important moral reason, which also means they don’t need to learn about it. They both add up to the same thing, which might just be healthier if treated as a true admission of ignorance.

                • By adampunk 2026-03-10 19:17 | 1 reply

                  I think we are *for sure* in a bubble. There is a kind of venture-capital-driven craze for compute because... well, a whole host of silly and sensible reasons. It's hard to talk rationally about the money being thrown around if you suspect that billions of it are being thrown around because some dudes think they'll create god in the machine and own it, or because they have some theory of consciousness tied up in the matter. Animal spirits, in a sense.

                  Some of the money thrown around is by companies that want to lock in some kind of interdependency, because the only downside they can conceive is that Oracle or Dell or whoever invents AGI and they get left behind. So huge circular deals are getting made which increase the correlation coefficient for any collapse to 1, lmao. These deals are getting made for reasons that feel like 2004-2005 American real estate, when the downside risk of a national portfolio of mortgages was actually (not joking) taught in textbooks to be 0. So naturally, if you're maximizing revenue by making things interdependent, you really only consider the upside.

                  All these forward-looking energy contracts and all this local generation of energy are signs that the market is under strain, more than signs that we expect increasingly exponential future use. Giant companies think they're locking counterparties into the right risk structure (here with an energy company somehow willing to forsake the infinite future energy price it could charge by just waiting), but really that energy company is perfectly happy to accept some money to start a project which will generate revenue long before it generates electricity. That energy company has an idea of its own risk and revenue profile, and it can extract money from a bubble, too.

                  I give us...months? Maybe 18 months--probably less--before things get really nasty and messy for the firms who thought they were buying a golden ticket. It then gets messy for users (if not sooner), who are right now being subsidized nicely--not the ludicrous 5k being thrown around recently, but compute is at a subsidy right now, so long as you want to rent it or can run a model like Qwen locally. Lots of other people are paying for that subsidy while extraordinary amounts of money flow from one part of computing to another. That's already having weird consequences, as firms who spent money on what is essentially rental compute push their employees to use more of it in order to keep the person who made the contract safe. More companies will go the Microsoft route (no, I'm not talking about Copilot!) and try to push tasks into their internal pipelines like MSFT does with Azure--where, with e.g. GitHub, what GitHub needed to do took a back seat (literally haha) to integration with the compute pipeline. That's good-bad-whatever, depending on how you want to think about it. But it's certainly disruptive, and I think right now a lot of genAI is doing that kind of disrupting, where people and orgs are being forced through money-shaped holes.

                  I don't know what happens when the music stops. I just know that it is playing.

                  • By bronlund 2026-03-10 20:33 | 1 reply

                    I don't think that many will disagree with you on this. If we compare it to the dot-com bubble in 2000: just as speculators then tried to get in early on this new thing called the Internet, today they are trying to capture early market share in this new thing called AI. But just like then, when this bubble bursts, the technology that everyone was chasing will still be relevant. Today the internet is regarded as critical infrastructure.

                    And I do think you are underestimating just how much money these guys can print if some event disrupts the machine. If this thing really goes down, it will be by design, and because they have Thing 2.0 ready to go.

                    • By adampunk 2026-03-10 20:53

                      The problem I have with this analogy is that it cuts the other way. It’s tempting to look at the dotcom bust and think of pets.com or some other venture where somebody lost their shirt. It gets a little harder when you expand it to the companies that today own an enormous slice of the world because they bet on the Internet. Most people lost in the dotcom boom; Amazon, for instance, didn’t. The whole premise was that if you moved soon enough, you could buy in on the ground floor, and that was emphatically the case if you bought Amazon. Now they own a huge, almost inextricable chunk of the Internet. It’s quite hard to do business or pleasure on the Internet without somehow involving Amazon.

                      What I am saying, and what I think a lot of folks who are trying to get this point across are saying, is that this will be critical infrastructure sooner than 20 years from now. The right frame of mind is to look at this like we look at the Internet circa the 1980s or the 1970s. This is big, messy, and experimental right now. We are in the middle of rebuilding computing with foundation models. That is happening at an enormous subsidy for the time being.

      • By bronlund 2026-03-09 23:17 | 1 reply

        My children are on their own journey and I’m just trying to be supportive. I don’t measure their worth by how much they agree or understand my perspective.

    • By lagrange77 2026-03-09 23:14 | 1 reply

      What is your perspective on the matter, from a parent's point of view?

      • By bronlund 2026-03-10 0:51

        I think a humble and open mind is essential. I think that we reap what we sow, but also that struggle makes us robust.

        I try to explain stuff to my kids, to the best of my ability, but give them room to make their own conclusions. As an old fart, there is a limit to how relevant my world will be to them - and I have to acknowledge that.

        Change is scary and not always for the better, but in my humble opinion, we have nothing to lose and everything to gain.

        I, For One, Welcome Our New AI Overlords :]
