They lied to you. Building software is hard

2026-01-30 12:33 · blog.nordcraft.com

Every week there seems to be a new tool that promises to let anyone build applications 10x faster. The promise is always the same and so is the outcome.

Andreas Møller

February 23, 2025

It used to be no-code tools but recently AI programming platforms have been gaining a lot of traction.

No-code and AI programming tools might seem like different categories but they have a lot in common. They both target users with little prior knowledge of programming and promise to let them build anything they could desire. They are easy to learn and often have you building something in minutes that otherwise might have taken you weeks if not months. 

The problem is that while these tools can help you build a simple prototype incredibly quickly, when it comes to building functional applications they are much more limited. They make the simple parts of software development simpler, but the complex parts can often become more difficult.

The reality is that if your goal is to become a software developer, relying on these tools early on often ends up slowing you down. You get the illusion of progress, but the flat learning curve just means that it will take much longer to learn all the things you need. When you eventually face a problem that the tool cannot solve for you, you will be back where you started, having to learn everything from scratch.

The great thing about a steep learning curve is that you progress a lot faster.

If you are looking for that one trick that lets you get ahead and jumpstart your career, my advice to you is: Don’t choose the path of least resistance. When training a muscle, you only get stronger with resistance. The same is true for learning any new skill. It is when you struggle with a specific problem or concept that you tend to remember. When you are fully engaged and racking your brain to try and understand what is going on, that is when you grow. Relying on your tools is like copying the answer from your classmate. You forget it the next day.

In simple terms, the steeper the learning curve, the faster you are going to learn.

The true value of a software engineer is in our ability to analyze problems as well as design and implement creative solutions. To get good at these skills you need to understand not just the tools at your disposal but also the technologies you are building on top of. If you don’t understand how an application works then you have no chance of fixing its bugs and issues. 

With no-code tools you often reach a hard limit where the tool simply does not make sense to use anymore. With AI it is more of a gradual curve. It is difficult to get any reliable data on how AI tools can impact developer productivity. One thing that does seem to hold true is that the effect greatly diminishes the more experienced the developer is.

My best estimate is that the curve looks something like this

As you gain more experience, you tend to spend more of your time on complex problems that AI assistants have a harder time solving. At the same time you also just get much better at coding and can solve problems much faster than in your junior years.

There has been a lot of chat on social media about the future of software developers. One point that seems to come up frequently is the idea that companies will only hire senior engineers and rely on AI for the tasks that were previously done by their junior colleagues. This is clearly absurd since without junior developers there would be no senior developers. There is however a real risk that we will start seeing junior developer salaries dropping as their contributions might not be deemed as valuable in the age of AI. In that case senior engineers will likely be even more in demand and their salaries will likely reflect that.

To sum up the advice in this article into a single, easily consumable bite it would be this:

Invest in yourself. 

Your skills and experience as a developer have value. The harder it is to acquire a set of skills the more valuable they tend to be. Even though some of the things you will pick up along the way will become outdated, the experience you gained while using them will stay with you. Each language or technology you use makes the next one a little easier to learn.



Comments

  • By gdubs 2026-02-02 19:04 (7 replies)

    One of my all-time favorite quotes is from Zen Mind, Beginner's Mind and it goes: “In the beginner’s mind there are many possibilities, but in the expert’s there are few.”

    There's such a wide divergence of experience with these tools. Oftentimes people will say that anyone finding incredible value in them must not be very good. Or that they fall down when you get deep enough into a project.

    I think the reality is that to really understand these tools, you need to open your mind to a different way of working than we've all become accustomed to. I say this as someone who's made a lot of software, for a long time now. (Quite successfully too!)

    In some ways, while the ladder may be getting pulled up on junior developers, I think they're also poised to really utilize these tools in a way that those of us with older, more rigid ways of thinking about software development might miss.

    • By bsoles 2026-02-02 20:27 (1 reply)

      Over the last 25 years of building commercial software, but being a programming enthusiast since I was 15 years old, I came to the conclusion that self-improvement (in the sense of gaining real expertise in a field, building a philosophy of things, and doing the right things) is in direct opposition to creating "value" in the corporate/commercial sense of today.

      Using AI/LLMs, you perhaps will create more commercial value for yourself or your employer, but it will not make you a better learner, developer, creator, or person. Going back to the electronic calculator analogy that people like to refer to these days when discussing AI, I also now think that, yes, electronic calculators actually made us worse with being able to use our brains for complex things, which is the thing that I value more than creating profits for some faceless corporation that happens to be my employer at the moment.

      • By gdubs 2026-02-02 21:36 (2 replies)

        Why are you so certain that LLMs/AI can't be used as a tool to learn and grow?

        Like Herbie Hancock once said, a computer is a tool, like an axe. It can be used for terrible things, or it can be used to build a house for your neighbor.

        It's up to people how we choose to use these tools.

        • By bravetraveler 2026-02-03 08:43

          Just putting it out there, not really interested in exercising the metaphor. I tend to be able to own my tools, these are closer to services.

        • By bsoles 2026-02-02 23:23 (2 replies)

          > Why are you so certain that LLMs/AI can't be used as a tool to learn and grow?

          Because every other post in here, for example, starts with "I vibe coded..." and not with "I learned something new today on ChatGPT".

          • By nerdsniper 2026-02-03 00:06

            I’m vibe coding apps that help me explore stuff and learn things. That’s their specific purpose.

          • By pixl97 2026-02-03 13:27

            Maybe people that learn stuff from AI aren't the type to enthusiastically make posts about it?

    • By phicoh 2026-02-02 19:27

      There have always been young people who can quickly hack something together with whatever new tools are available. That way of working never lasts, but the tools do last.

      When tools prove their worth, they get taken into the normal way software is produced. Older people start using them because they see the benefit.

      The key thing about software production is that it is a discussion among humans. The computer is there to help. During a review, nobody is going to look at what assembly a compiler produces (with some exceptions of course).

      When new tools arrive, we have to be able to blindly trust them to be correct. They have to produce reproducible output. And when they do, the input to those tools can become part of the conversation among humans.

      (I'm ignoring editors and IDEs here for the moment, because they don't have much effect on design, they just make coding a bit easier).

      In the past, some tools have been introduced, got hyped, and faded into obscurity again. Not all tools are successful, time will tell.

    • By bdangubic 2026-02-02 20:23 (2 replies)

      This reminds me of talking to my nephew at Thanksgiving years ago. He was studying for an exam after the holidays and I was looking at his screen, open to a Google Doc which looked like his study notes, except they were being edited as I was watching, by someone else. I asked about it and he goes “we have a single Google Doc where all students collaborate on the study notes.” My mind was blown. I was also using Google Docs, but never in a million years would its utility for such a thing have crossed my mind. Can’t wait to see what new blood “Juniors” brings to the table!

      • By whattheheckheck 2026-02-03 03:50

        Collective cognition is effectively what all knowledge work is. The programmers are the dunces that can't keep it all in their heads and need explicit type systems and databases to manage state unlike the genius business analysts and SMEs

      • By AuthAuth 2026-02-02 20:37 (2 replies)

        All students collaborating on notes kind of defeats the point, no? As I see it, study notes are reminders that link you back to when you were reviewing the material. If you never wrote the notes, you won't get that connection back to the material.

        • By bdangubic 2026-02-03 02:09 (1 reply)

          The shared study notes represent a shared understanding of the topics at hand. Different people grasp concepts in different ways, and seeing how other people think/understand/deduce/... (at least for me) makes a world of difference.

          Like seeing a PR and going "holy s**, would never have dreamed of doing it that way" - I have learned A LOT in a looooong SWE career from that...

          • By AuthAuth 2026-02-03 18:24

            I agree that collectively they understand but they dont sit the exam collectively. If it works, it works I guess.

        • By bitwize 2026-02-02 22:14 (1 reply)

          On the one hand, this is the kind of closed mind the zen guy in the root comment was talking about.

          On the other hand, you're probably right...

          • By saulpw 2026-02-03 00:57

            Perhaps wisdom is closing your mind to common stupidity.

    • By AndreasMoeller 2026-02-03 13:15

      At the same time I see experienced engineers pretend that everything they have learned about software development is no longer true.

      3 years ago the idea of measuring productivity in lines of code would have been ridiculous. After AI, it is the norm.

    • By mnky9800n 2026-02-02 22:17

      I was talking about this with someone today, that before perhaps there is an exactness you expect. But actually, what really matters is "good enough." And if AI written code takes you to "good enough" according to whatever metric you've set, then what exactly is the problem? Because a lot of the technical part of the job is taking X data, doing f(x) transformation to that data, and thus Y is born and handed to the next step. So if it passes whatever metric you have set to make sure that going from X to Y handles Z% of the problem space, and doesn't create downstream issues (probably this should be part of your metric), then you have done your job. And yes, of course sometimes the job will require you writing the code yourself because that level of precision is necessary. But why should we consider that always to be the case? And thus, actually, there are probably new programming languages and paradigms to consider that we haven't thought of yet that make this kind of problem solving more efficient. Because right now we are not super effective at juggling both the human and the machine's problem space context. Except some experts who say they can orchestrate tens of agents all at once doing whatever. I dunno. I think right now is exciting and not hand wringing. A computer is meant to help you think. Why shouldn't new computational tools bring excitement?
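The "good enough" framing in the comment above can be sketched concretely. The snippet below is a hypothetical illustration (every name in it is invented, not from the article or thread): a transformation `f` is accepted if it handles a chosen fraction of the cases you care about.

```python
# Hypothetical illustration of the "good enough" framing: accept a
# transformation f (however it was written) if it meets a coverage
# metric on the cases you care about.

def f(x: int) -> int:
    # The transformation under test; imagine it was AI-generated.
    return x * 2

def coverage(cases: list[tuple[int, int]]) -> float:
    """Fraction of (input, expected output) cases that f handles correctly."""
    passed = sum(1 for x, expected in cases if f(x) == expected)
    return passed / len(cases)

def good_enough(cases: list[tuple[int, int]], threshold: float) -> bool:
    # The comment's "Z% of the problem space", as a simple threshold check.
    return coverage(cases) >= threshold
```

Whether 75% or 99.9% is an acceptable threshold depends entirely on the metric you set, which is exactly the comment's point.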

    • By commandlinefan 2026-02-02 19:28 (1 reply)

      ... and the biggest problem is that the people who _do_ know how hard it is to build software are the ones whose input on the matter is most likely to be discounted as "sour grapes"/"fear of obsolescence".

      • By pixl97 2026-02-03 13:29

        Then become a consultant that fixes broken AI generated apps for outrageous fees.

    • By hn_throwaway_99 2026-02-02 19:36 (1 reply)

      I definitely agree with this. Older folks have to deal with the double whammy of being familiar with what they already know, plus there is a good bit of research that learning and absorbing new things just gets harder past mid-40s or so.

      That said, I don't think this negates what TFA is trying to say. The difficulty with software has always been around focusing on the details while still keeping the overall system in mind, and that's just a hard thing to do. AI may certainly make some steps go faster but it doesn't change that much about what makes software hard in the first place. For example, even before AI, I would get really frustrated with product managers a lot. Some rare gems were absolutely awesome and worth their weight in gold, but many of them just never were willing to go to the details and minutiae that's really necessary to get the product right. With software engineers, if you don't focus on the details the software often just flat out doesn't work, so it forces you to go to that level (and I find that non-detail oriented programmers tend to leave the profession pretty quickly). But I've seen more than a few situations where product managers manage to skate by without getting to the depth necessary.

      • By atmavatar 2026-02-02 20:11

        > Older folks have to deal with the double whammy of being familiar with what they already know, plus there is a good bit of research that learning and absorbing new things just gets harder past mid-40s or so.

        Unfortunately, since the tech industry still largely skews young, reticence to chase every new hype cycle also feeds into the perception of an inability to learn new things, even after many prove to be fads (e.g., blockchain).

  • By xiaohanyu 2026-01-30 12:33 (1 reply)

    "If you are looking for that one trick that lets you get ahead and jumpstart your career, my advice to you is: Don’t choose the path of least resistance. When training a muscle, you only get stronger with resistance. The same is true for learning any new skill. It is when you struggle with a specific problem or concept that you tend to remember."

    Pretty nice description.

    • By advisedwang 2026-02-02 17:50 (1 reply)

      As with anything, there's also too much of a good thing though.

      In my own career I switched roles to get more time on an area where I felt I needed more growth and practice. Turns out I never got very good at it, and basically was just in a role I wasn't great at for 6 years. It was miserable. My lesson is: if you know you are bad at something, don't make it load-bearing in your life or career.

      • By hobs 2026-02-02 17:57 (1 reply)

        There's a reason that one of the big corporate skills books is StrengthsFinder: fundamentally, playing to your weaknesses isn't a good play; it's that you need to consistently challenge yourself to keep building whatever muscle you choose. You don't want to build strength by lifting 10,000 pounds all at once, but by increasing your load every day.

        In most professions barely anyone is doing the continual education or paying attention to the "scene" for that profession, if you do that alone you're probably already in the top 10%.

  • By adam_arthur 2026-02-02 19:07 (8 replies)

    LLMs have clearly accelerated development for the most skilled developers.

    Particularly when the human acts as the router/architect.

    However, I've found Claude Code and Co only really work well for bootstrapping projects.

    If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

    It will probably change once the approach to large scale design gets more formalized and structured.

    We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

    Yes, AI will one shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.
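The isolation argument in this comment can be made concrete. The following is a hypothetical sketch (all names invented): a stateless sub-module whose entire contract is visible in a few lines, so it can be implemented or reviewed, by a human or an LLM, without loading the rest of the codebase into context.

```python
from dataclasses import dataclass

# Hypothetical sketch of a stateless sub-module: no globals, no hidden
# state, and the whole contract is the input type plus one signature.

@dataclass(frozen=True)
class PriceQuote:
    subtotal_cents: int
    tax_rate: float  # e.g. 0.08 for 8%

def total_cents(quote: PriceQuote) -> int:
    """Pure function: same input, same output; implementable in isolation."""
    return round(quote.subtotal_cents * (1 + quote.tax_rate))
```

Because nothing outside these lines affects the result, the context needed to generate or verify the module is exactly the module itself.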

    • By lowbloodsugar 2026-02-02 19:23 (1 reply)

      >If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

      Worse, as it's planning the next change, it's reading all this bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".

      Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192

      • By adam_arthur 2026-02-02 19:31 (1 reply)

        People can take on debt for all sorts of things. To go on vacation, to gamble.

        Debt doesn't imply it's productively borrowed or intelligently used. Or even knowingly accrued.

        So given that the term technical debt has historically been used, it seems the most appropriate descriptor.

        If you write a large amount of terrible code and end up with a money producing product, you owe that debt back. It will hinder your business or even lead to its collapse. If it were quantified in accounting terms, it would be a liability (though the sum of the parts could still be net positive)

        Most "technical debt" is not buying the code author anything and is materialized through negligence rather than intelligently accepting a tradeoff

        • By lowbloodsugar 2026-02-02 21:18

          All those examples were borrowing money. What you're describing as "technical debt" doesn't involve borrowing anything. The equivalent for a vacation would be to take your kids to a motel with a pool and dress up as Mickey Mouse and tell them it's "Disney World debt". You didn't go into debt. You didn't go to Disney World. You just spent what money you do have on a shit solution. Your kids quite possibly had fun, even.

          > term technical debt has historically been used

          There are plenty of terms that we no longer use because they cause harm.

    • By Sohcahtoa82 2026-02-02 21:49

      Agreed.

      What I've found is that AI can be alright at creating a Proof of Concept for an app idea, and it's great as a Super Auto-complete, but anything with a modicum of complexity, it simply can't handle.

      When your code is hundreds of thousands of lines, asking an agent to fix a bug or implement a feature based on a description of the behavior just doesn't work. The AI doesn't work on call graphs; it basically just greps for strings it thinks might be relevant to find things. If you know exactly where the bug lies, it can usually find it with the context given to it, but at that point, you might as well fix the bug yourself rather than having the AI do it.

      The problem is that you have non-coders creating a PoC, then screaming from the rooftops how amazing AI is and showing off what it's done, but then they go quiet as the realization sets in that they can't get the AI to flesh it out into a viable product. Alternatively, they DO create a product that people start paying to use, and then they get hacked because the code is horribly insecure and hard-codes API keys.

    • By athenot 2026-02-02 19:22 (1 reply)

      > We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

      Containment of state also happens to benefit human developers too, and keep complexity from exploding.

      • By adam_arthur 2026-02-02 19:26

        Yes!

        I've found the same principles that apply to humans apply to LLMs as well.

        Just that the agentic loops in these tools aren't (currently) structured and specific enough in their approach to optimally bound abstractions.

        At the highest level, most applications can be written in simple, plain english (expressed via function names). Both humans and LLMs will understand programs much better when represented this way
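The "plain English via function names" idea above can be illustrated with a toy sketch. All names below are invented for illustration; a real app would fill in the stubs.

```python
# Hypothetical sketch: the top level reads almost like plain English
# because complexity is pushed down into well-named functions.

def load_signups(path: str) -> list[str]:
    # Stand-in: a real app would read a file or database at `path`.
    return ["ada@example.com", "grace@example.com", "ada@example.com"]

def remove_duplicates(emails: list[str]) -> list[str]:
    # dict.fromkeys preserves first-seen order while dropping repeats.
    return list(dict.fromkeys(emails))

def send_welcome_email(email: str) -> None:
    print(f"welcome sent to {email}")

def onboard_new_signups(path: str) -> None:
    # Reads as a sentence: load signups, remove duplicates, welcome each.
    for email in remove_duplicates(load_signups(path)):
        send_welcome_email(email)
```

The top-level function is the "plain English" layer; both a human and an LLM can follow it without reading any implementation details.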

    • By AndreasMoeller 2026-02-03 13:04

      The most interesting thing for me is that I am not sure it does.

      I have been coding for 20+ years and I have used AI agents for coding a lot, especially for the last month and a half. I can't say for sure they make me faster. They definitely do for some tasks, but overall? I can solve some tasks really quickly, but at the same time my understanding of the code is not as good as it was before. I am much less confident that it is correct.

      LLMs clearly make junior and mid level engineers faster, but it is much harder to say for Senior.

    • By CuriouslyC 2026-02-02 19:29

      Valknut is pretty good at forcing agents to build more maintainable codebases. It helps them dry out code, separate concerns cohesively and organize complexity. https://github.com/sibyllinesoft/valknut

    • By krainboltgreene 2026-02-02 21:56 (1 reply)

      > LLMs have clearly accelerated development for the most skilled developers.

      Have they so clearly? What's the evidence?

      • By thegrim000 2026-02-02 23:14

        Most people's "truth" nowadays is what they've heard enough people say is true. Not objective data/measures. What people believe is true, and say is true, IS truth, to them.

    • By themafia 2026-02-02 19:34

      > accrue massive technical debt

      The primary difference between a programmer and an engineer.

    • By sjdixjjxs 2026-02-02 20:51

      > We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation

      Wait till you find out about programming languages and libraries!

      > It will probably change once the approach to large scale design gets more formalized and structured

      This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.

HackerNews