Greg Knauss Is Losing Himself

2026-03-01 18:09 · shapeof.com


February 25, 2026

Greg Knauss ▸ Lose Myself:

I got into computers because solving puzzles was fun, and building worlds was fun, and making things — the process of making things — was fun, down at the granular level. It was nice to have something at the end, but the act of creation was the exciting part. I suspect that predilection will begin to disappear (in commercial environments, at the very least), now that the people who do it — who want to do it — can be replaced. The journey actually was the reward for some subset of weird little freaks, but you can now skip all that crap and just jump to the end and get on with it.

I’ve been using Claude Code quite a bit lately, not so much to replace my programming but to augment it. The new animated image export preview in Acorn 8.4.1 was a direct result of that. It was a nice little feature that I knew exactly how to do, but I hadn’t prioritized getting done yet because there were a bunch of other things on my plate. But with a little assist, it was quick to implement.

I get where Knauss is coming from, and I feel it too. I love coding! But why bother implementing anything when anyone can make an app in an instant? I’ve been wanting to make upgrades to my online dough calculator but have been putting it off because … well, anyone can just vibe code this themselves now.

But at the same time I’m not worried about being replaced by AI, or by quick free apps that have been built by AI. And in some ways I’m more hopeful than ever.

For almost 20 years now, I’ve been feeling the pressure from competing image editing apps and the potential of everything falling apart and the utter doom of my chosen profession and company. These feelings are not new to me.

So I kept on making the software that I wanted to build, that I wanted to exist in this world. And though some months or years are rougher than others, people were still willing to pay for what I’m making. And that’s kept me going and, more importantly, employed.

Lately I’ve been thinking more and more about what Acorn should be. In a time when anyone can come up with an app idea and ship with little effort, what is going to make Acorn stand out?

I’m starting to think that’s going to be personality and feel and polish, but turned up a notch. That’s what I used to do when I started writing apps, but in some ways I have really toned it down in favor of OS alignment.

Does an AI know how to do that? Does a coding assistant know that an app is really a giant collection of details?

Maybe I’ll even have fewer competitors in the long run, or at least not as many new competitors. Because at some point it’s not about how good a programmer you are (and I’ve always been a middle-tier programmer), it’s about discipline and vision.

I’ve got feelings because anyone can put an app together now, so what’s the point of me? But at the same time, I can focus on what I want to focus on and hopefully charge forward and maybe everyone else will get tired of little vibe coded apps because you still have to know exactly what you want to build. And you can’t build something you can’t think of. And I know how to think and I have ideas.

And I have discipline and I know how to ship. And in my experience, that’s what has always mattered.


Read the original article

Comments

  • By gtowey · 2026-03-04 15:12 · 6 replies

    LLMs can't be strategic because they do not understand the big picture -- that the real work of good software is balancing a hundred different constraints in a way that produces the optimal result for the humans who use it.

    It's not all that different from the state of big corp software today! Large organizations with layers of management tend to lose all ability to keep a consistent strategy. They tend to go all in on a single dimension, such as ROI for the next quarter, and miss the bigger picture. Good software is about creating longer term value and takes consistent skill & vision to execute.

    Those software engineers who focus on this big picture thinking are going to be more valuable than ever.

    • By butILoveLife · 2026-03-04 15:51 · 4 replies

      >Good software is about creating longer term value and takes consistent skill & vision to execute.

      >Those software engineers who focus on this big picture thinking are going to be more valuable than ever.

      Not to rain on our hopes, but AI can give us some options and we can pick the best. I think this eliminates all middle-level positions. Newbies are low cost and make decisions that are low stakes. The most senior of seniors can make 30 major decisions per day when AI lays them out.

      I own a software shop, and my hires have been interns and people with the specific skill of my industry (mechanical engineers).

      2 years ago, I hired experienced programmers. Now I turn my mechanical engineers into programmers.

      • By zaphar · 2026-03-04 16:35 · 1 reply

        So what you are saying is that you removed the people who can make the decisions that keep your software maintainable, and kept the people who will slowly, over time, cause your software to become less maintainable? I'm not sure that tradeoff is a good one.

        • By butILoveLife · 2026-03-04 16:40 · 2 replies

          This might have been true pre-agent AI programming, but honestly the code seems better than ever. It finds edge cases better than me.

          I know... I know buddy. The world changed and I don't know if I'm going to have a job.

          • By zaphar · 2026-03-04 18:05 · 2 replies

            I'm every bit as immersed in this as you are. I've been developing my own custom Claude Code plugins that allow me to delegate more and more to the agents. But the one thing the agent is not reliably doing for me is making sound architectural choices and maintaining long term business context and how it intersects with those architectural choices.

            I tried teaching all of that in system prompts and documentation, and it blows the context window up to an unusable size. As such, the things that I, as a highly experienced senior engineer, was expected to do pre-agents, I am still expected to do.

            If you are eliminating those people from your business then I don't know that I can ever trust the software your company produces and thus how I could ever trust you.

            • By bigbadfeline · 2026-03-05 19:19

              I'd try a different approach - mechanical engineering isn't that hard and it can benefit greatly from developing some specialized agents and fine-tuned LLMs for it. As a side benefit, that approach happens to create some software jobs too, open source for best results.

            • By aspenmartin · 2026-03-04 18:46

              > making sound architectural choices and maintaining long term business context and how it intersects with those architectural choices.

              I completely agree with you, but this is rapidly becoming less and less the case, and would not at all surprise me if even by the end of this year its just barely relevant anymore.

              > If you are eliminating those people from your business then I don't know that I can ever trust the software your company produces and thus how I could ever trust you.

              I mean that's totally fine, but do realize many common load-bearing enterprise and consumer software products are a tower of legacy tech debt and junior engineers writing terrible abstractions. I don't think this "well, how am I going to trust you" from (probably rightfully) concerned senior SWEs is going to change anything.

          • By daveguy · 2026-03-04 16:59 · 1 reply

            Finding edge cases is completely orthogonal to creating maintainable software. Finding edge cases ~= identifying test suites. Making software maintainable ~= minimizing future cost of effective changes.

            Ignoring future maintenance cost because test suites are easier to create seems like disjointed logic.

            • By butILoveLife · 2026-03-04 17:08 · 2 replies

              [flagged]

              • By switchbak · 2026-03-04 17:34 · 3 replies

                "Im not even sure we will need maintain software" (sic) - I'm not sure what your specific background is, but with a statement like that you lose all legitimacy to me.

                • By Alive-in-2025 · 2026-03-04 20:25

                  Yes, I'd like to hear more of their background, because they seem very naive about writing software, adding to it, testing it, etc.

                  You can't just whip up a replacement for Salesforce using Claude Code. Who's going to fix the bugs, who is going to write tests, who will manage performance testing? People will still pay for software that is tested and performant. I could get a replacement for an online spreadsheet or a Google Docs-like thing. Suppose you tell it to copy Google Docs in whatever programming language. You won't know if it's buggy because you won't have the same test coverage. You'll never know about bugs that took a long time to reveal themselves in some combination of features.

                  You can create a new system that puts a few features together to do something. Again, it's not tested, not perf tested, and isn't aware of a compiler bug you had to work around.

                  But lots of simple things can be Claude-coded and replaced. Say, something that took a photo of a person, centered it, then put some kind of logo on the pic. Something you paid $5 a month to do.

                • By aspenmartin · 2026-03-04 18:43 · 1 reply

                  The writing's on the wall, it is true: tech debt will no longer be a thing to care about.

                  "but who will maintain it?" massive massive question, rapidly becoming completely irrelevant

                  "but who will review it?" humans sure, with the assistance of ai, writing is also on the wall: AI will soon become more adept at code review than any human

                  I can understand "losing all legitimacy" being a thing, but to me that is an obvious knee jerk reaction to someone who is not quite understanding how this trend curve is going.

                  • By switchbak · 2026-03-07 21:52

                    Trust me, I’m a well seasoned leathery developer and I’m no newbie when it comes to using AI. But this level of irrational exuberance is so over the top I just can’t take it seriously.

                    Yes, in the very long term I expect this to be able to replace large swaths of the sw dev lifecycle, product, ideation, the whole kaboodle. That’s a long way off, whatever “a long way off” means in this accelerated timeline.

                    For the next bunch of years, yes you’ll have to worry about architecture, coupling, testing, etc. I’m happy to have my competitors share your attitude, cause we’ll smoke them in the market.

                • By bigstrat2003 · 2026-03-04 20:20 · 1 reply

                  New account... singing the praises of how AI "changed everything" in the past few weeks... my money is on this being a shill.

                  • By toraway · 2026-03-04 23:06 · 1 reply

                    They were a skeptic 3 weeks ago... fast forward today and they've oriented their company's strategy around Opus 4.6 agents and vibe-coding.

                    • By layla5alive · 2026-03-05 6:34

                      This would be funny if it were not exactly what appears to be happening where I work...

              • By hobs · 2026-03-04 17:17

                And the human downstream of this random reorganization of things at will, how do they manage it?

                If it's AI agents all the way down, it's commoditization all the way down; if humans have to deal with it, there's some sort of cost for change even if it's zero for code.

      • By AlotOfReading · 2026-03-04 16:25

            Not to rain on our hopes, but AI can give us some options and we can pick the best.
        
        a.k.a. greedy algorithms, a subject those of us on HN should be well-acquainted with. You can watch the horizon effect frequently play out in corporate decisionmaking.
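
        A toy payoff example makes the horizon effect concrete (the numbers below are made up, purely illustrative): a chooser that optimizes one step at a time grabs the larger immediate payoff and misses the better overall path.

```python
# Toy two-step decision problem (made-up payoffs, illustrative only).
# payoffs[(first_choice, second_choice)] -> total reward for that path.
payoffs = {
    (0, 0): 10 + 1,   # big payoff now, little later
    (0, 1): 10 + 2,
    (1, 0): 3 + 50,   # small payoff now, big later
    (1, 1): 3 + 4,
}

def greedy() -> int:
    """One-step horizon: compare only the immediate payoff (10 vs 3)."""
    first = 0 if 10 >= 3 else 1
    # Then take the best continuation available from that choice.
    second = max((0, 1), key=lambda s: payoffs[(first, s)])
    return payoffs[(first, second)]

def lookahead() -> int:
    """Full horizon: evaluate every complete path before committing."""
    return max(payoffs.values())

print(greedy())     # 12: each step locally optimal, weak overall
print(lookahead())  # 53: the globally optimal path
```

        Each greedy step is defensible on its own; the loss only shows up when you score the whole path, which is the problem with "AI offers options, we pick the best" one decision at a time.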

      • By hobs · 2026-03-04 17:15

        AI often simply does not offer the best options and does not think strategically, and if you are constrained to its choices you will often make silly mistakes.

        This is why all the arguments about context windows and RAG exist, because at the end of the day even if you asked the question of a human with all the context there's such a thing as opinions, stated vs unstated goals, requirements vs non functional requirements, etc which will give you wildly different answers.

        Most of the time people don't even know the questions they want to ask.

      • By gtowey · 2026-03-04 16:00

        > Not to rain on our hopes, but AI can give us some options and we can pick the best.

        But that's kind of my point. A bunch of decisions like that tend to end up with a "random walk" effect. It's a bunch of tactical choices which don't add up to something strategic. It could be, but it takes the human in the loop to hold onto that overall strategy.

    • By bee_rider · 2026-03-04 15:42 · 4 replies

      Why can’t LLMs understand the big picture? I mean, a lot of companies have most of their information available in a digital form at this point, so it could be consumed by the LLM.

      I think if anything, we have a better chance in the little picture: you can go to lunch with your engineering coworkers or talk to somebody on the factory floor and get insights that will never touch the computers.

      Giant systems of constraints, optimizing many-dimensional user metrics: eventually we will hit the wall where it is easier to add RAM to machines than humans.

      • By troupo · 2026-03-04 15:55 · 2 replies

        > Why can’t LLMs understand the big picture?

        Because LLMs don't understand things to begin with.

        Because LLMs only have access to source code and whatever .md files you've given them.

        Because they have biases in their training data that overfit them on certain solutions.

        Because LLMs have a tiny context window.

        Because LLMs largely suck at UI/UX/design, especially when they don't have reference images.

        Because...

        • By bee_rider · 2026-03-04 16:14 · 2 replies

          > Because LLMs don't understand things to begin with.

          Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.

          The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.

          • By troupo · 2026-03-04 17:21

            > I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.

            LLMs can do neither reliably because to do that you need understanding which LLMs don't have. You need to learn from the codebase and the project, which LLMs can't do.

            On top of that, to have the big picture LLMs have to be inside your mind. To know and correlate the various Google Docs and Figma files, the Slack discussions, the various notes scattered on your system etc.

            They can't do that either because, well, they don't understand or learn (and no, clawdbot will not help you with that).

            > The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.

            These are not limitations of tooling, and no, LLM developers are not even close to overcoming, especially not "constantly". The only "overcoming" has been the gimmicky "1 million token context" which doesn't really work.

          • By gtowey · 2026-03-04 16:34 · 1 reply

            I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy and it is pretty much why I can say LLMs can't be strategic.

            To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical or not. And it includes absolutely everything -- human psychology to understand how people might feel about certain features or usage models. The future outlook for which popular framework to choose, and whether it will be as viable next year as it is today. The geography and geopolitics of which cloud provider to use. The knowledge of human sentiment around ethical or moral concerns. The financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.

            LLMs are fundamentally incapable of this.

            • By braebo · 2026-03-05 12:54 · 2 replies

              Claude is perfectly capable of all of this. Give it access to meeting notes and notion/linear and it can elegantly connect the dots within the context of a given problem.

              • By troupo · 2026-03-06 9:01

                It routinely can't "connect dots" on a 10 kloc project with design notes right there in the same project.

                It routinely cannot read files more than 2k lines long.

                You can't even provide detailed CLAUDE.md instructions because "file is too large and will affect context".

                But sure. "Just give it access to a magnitude more info and it will be able to do stuff".

              • By Throaway1985123 · 2026-03-05 23:48

                Yes, it's just a matter of capability, not skill.

        • By gtowey · 2026-03-04 16:05

          Yeah, it's strange to me that the default assumption is that current LLMs are already human-level AGI.

      • By butILoveLife · 2026-03-04 15:52 · 1 reply

        I basically just posted the same response. I generally agree with everything you said.

        Only thing to add, maybe we have the most senior of seniors verifying the decisions of AI.

        • By bee_rider · 2026-03-04 16:05

          Most senior could make sense (although I’d like to see a collection of independent guilds coordinated by an LLM “CEO” just to see how it could work—might not be good enough yet, but it’d be an interesting experiment).

          Ultimately, I suspect “AI” (although, maybe much more advanced than current LLMs) will be able to do just about any information based task. But in the end only humans can actually be responsible/accountable.

      • By Throaway1985123 · 2026-03-05 23:48

        They just can't yet. They'll tell you themselves they can only take in a certain amount of a context window.

      • By manxiemanx · 2026-03-12 2:34

        You don’t understand how LLMs work. An LLM can’t understand the big picture because it doesn’t understand. Period. It works more like an autocorrect

    • By sp1nningaway · 2026-03-04 19:42 · 1 reply

      Yes! "Does an AI know how to do that? Does a coding assistant know that an app is really a giant collection of details?"

      There are just so many small decisions that add up to a consistent vision for a piece of software. It doesn't seem like LLMs are going to be able to meaningfully contribute to that in the near future.

      I tried vibecoding my own workout tracker, but there were so many small details to think through that it was frustrating. I gave up and found an app that is clearly made by a team of experienced, thoughtful people and AI can't replicate the sheer thoughtfulness of every decision that was made to create this app. The inputs for reps/sets, algorithms for adjusting effort on the fly, an exercise library with clear videos and explanations; there's just no way to replicate that without people who have been trainers and sport scientists for decades.

      LLMs can help increase the speed that these details turn in to something tangible, but you definitely can't "skip all that crap and just jump to the end and get on with it."

      • By Throaway1985123 · 2026-03-05 23:50

        Right, it's people who can't code (and can't afford to hire a coder) that can use LLMs to replace coders, but you still need to sit through it and iterate. More so than with a human being, because of the dumb mistakes LLMs still make, so you need patience and a consistent vision.

    • By h3lp · 2026-03-04 20:21

      Greg mentions discipline and vision as determinants of successful software, which is correct, but I think he misses another aspect of vision: the ability to attract and crystallize a community around a project. Arguably, most successful software projects thrive in the long term because they have a team of people who inspire each other, fill in with complementary talents, and provide continuity.

    • By antonvs · 2026-03-04 15:49

      > LLMs can't be strategic because they do not understand the big picture -- that the real work of good software is balancing a hundred different constraints in a way that produces the optimal result for the humans who use it.

      There’s good reason to think that they could understand the big picture just fine, even today, except that they’re currently severely constrained by what we choose, or have time, to tell them. They can already easily give a much more comprehensive survey of suitable options for solving a given problem than most humans can.

      If they had more direct access to the information we have access to, that we currently grudgingly dole out to them in dribs and drabs, they would be much more capable.

    • By pixl97 · 2026-03-04 15:38 · 3 replies

      >LLMs can't be strategic because they do not understand the big picture

      While I do tend to believe you, what evidence based data do you have to prove this is true?

      • By gtowey · 2026-03-04 15:44 · 2 replies

        > While I do tend to believe you, what evidence based data do you have to prove this is true?

        IMO the onus is to prove that they can be strategic. Otherwise you're asking me to prove a negative.

        • By vonneumannstan · 2026-03-04 16:31 · 1 reply

          Task time horizons are improving exponentially, with doubling times around 4 months per METR. At what timescale would you accept that they "can be strategic"? There's little reason to think they won't be at multi-week or month time horizons very soon. Do you need to be strategic to complete multi-month tasks?
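
          Taken at face value, a fixed doubling time compounds quickly; a quick sketch with assumed round numbers (the 2-hour starting horizon is illustrative, not METR's measured value):

```python
# Exponential growth of task time horizons under a fixed doubling time.
# The starting horizon (2 hours) is an assumed round number for
# illustration; the 4-month doubling period is the claim being discussed.

START_HORIZON_HOURS = 2.0
DOUBLING_MONTHS = 4

def horizon_after(months: float) -> float:
    """Task horizon in hours after `months`, doubling every 4 months."""
    return START_HORIZON_HOURS * 2 ** (months / DOUBLING_MONTHS)

for months in (0, 12, 24):
    print(f"{months:2d} months -> {horizon_after(months):6.1f} hours")
# 0 months -> 2.0 hours; 12 months -> 16.0 hours; 24 months -> 128.0 hours
```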

          • By gtowey · 2026-03-04 16:49 · 1 reply

            Can an LLM give you an upfront estimate that a task will take multiple months?

            Can it decide intelligently what it would have to change if you said "do what you can to have it ready in half the time?"

            • By vonneumannstan · 2026-03-04 21:20 · 1 reply

              >Can an LLM give you an upfront estimate that a task will take multiple months?

              >Can it decide intelligently what it would have to change if you said "do what you can to have it ready in half the time?"

              Do you think ChatGPT 5.2 Pro can't estimate how long a task might take? Do you think that estimate would necessarily be worse than the estimates, which are notoriously poor, coming from human engineers?

              But you can still answer my question. When an LLM can complete a task that takes a person N months or years, is it capable of being strategic?

              • By manxiemanx · 2026-03-12 2:39

                Multiple people have already answered your question in this thread. An LLM can’t be strategic because that’s not a capability of the technology itself

        • By pixl97 · 2026-03-04 16:20 · 1 reply

          Saying the tiger has to prove it can eat you is not a great strategy to survive a tiger attack.

          • By bigstrat2003 · 2026-03-04 20:21 · 1 reply

            Well so far the tiger faceplants in an embarrassing fashion every time it tries to eat someone. So I'm not really worried about that.

            • By pixl97 · 2026-03-04 20:41

              Gary Marcus: "LLMs will never be able to a..... wait, what do you mean they can already do that?"

      • By manxiemanx · 2026-03-12 2:37

        This is fundamental to the way LLMs work. If you don’t know this, you should be reading up on that instead of asking for evidence based data on things that are fundamental to the technology we’re discussing

      • By Terr_ · 2026-03-04 17:15 · 2 replies

        Prompt-injection in all its forms. If the hyper-mad-libs machine doesn't reliably "understand" and model the difference between internal and external words, how can we trust them to model fancier stuff?

        • By manxiemanx · 2026-03-12 2:40

          That’s no problem. Just tell it to make sure the code it generates has no security vulnerabilities! That’ll take care of the issue

        • By bigstrat2003 · 2026-03-04 20:25

          We can't even trust LLMs to get basic logic right, or even the language syntax sometimes. They reliably generate code worse than a human would write, and have zero reasoning ability. Anyone who thinks they can model something complicated is either uncritically absorbing hype or has a financial stake in convincing people of the hype.

  • By sunir · 2026-03-04 14:26

    I don't feel like the abstraction away from assembly language resulted in fewer software engineering jobs. Nor do I feel like Java's virtual machine resulted in fewer systems engineering jobs. Somehow I don't feel that writing in English rather than pure logic will result in fewer engineering problems either. A lot more actually. But at least we'll get the requirements out of users into something concrete faster.

    What is definitely going to be abundantly clear is just how much better machines can get at creating correct code and how bad each of us truly is at this. That's an ego hit.

    The loving effort an artisan puts into a perfect pot still has wabi-sabi from the human error; whereas a factory-produced pot is far more perfect and possesses both a Quality from closeness to Idealism and an eeriness from its unnaturalness.

    However, the demand for artisan pottery has niched out compared to Ikea bowls, so that's just how it is.

HackerNews