
Comparing a career flying the 747 and a career coding with AI assistance.
A couple years ago, I was on the way back from a work trip to Germany. I had been upgraded to business class, and I sat next to a Belgian 747 pilot, probably in his fifties or sixties. We talked a fair bit about our careers. I had left the Navy and started professionally programming less than a year before. He had been a pilot since shortly after graduating university, and had flown the 747 for about twenty years. He had studied mechanical engineering at school, and he told me in great depth about the variable geometry jet turbines in modern aircraft, which could remain efficient across a wide altitude range.

I expressed some jealousy about how well suited he was to his job. Clearly he was a geek for aircraft, and even though most airlines don’t fly the 747 anymore, it is an incredible machine. He agreed that it was a privilege to fly the plane, but said wistfully:
> In this job, after a while, there’s no improvement. You are no better today than you were yesterday.
He said that by now, he knew the 747 about as well as a pilot could. In fact, he sometimes wished he had become an engineer or designer of airplanes, so that he could learn new things as a core part of his job. Then he said:
> You are lucky that your job is like that.
Since that flight, my job has changed a great deal. Coding agents can do a large portion of what I previously considered my work. I’m one of the last people who should be upset about this, since I work at an AI lab and stand to gain a great deal if AI follows through on its economic promise. Still, it has changed how I solve problems, and at times I feel more like a pilot than an engineer.
In the past, when I fixed a bug or implemented a feature, I would have to spend a minimum amount of effort understanding the situation. For example, to add pagination to this website, I would read the Jekyll docs, find the right plugin to install, read the sample config, and make the change. Possibly this wouldn’t work, in which case I would Google it, read more, try more stuff, retest, etc. In this process it was hard not to learn things. I would walk away from the problem with a better understanding of how the system worked. If I had to implement the feature again, I would be able to do it faster and more easily.
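(The article doesn’t show the actual change, but for the standard `jekyll-paginate` plugin it would look roughly like this sketch:)

```yaml
# _config.yml — enable and configure the jekyll-paginate plugin
plugins:
  - jekyll-paginate

paginate: 5                   # posts per page
paginate_path: "/page:num/"   # URL pattern for pages 2, 3, ...
```

The index template then loops over `paginator.posts` and links pages via `paginator.next_page_path` — the kind of detail you’d pick up from the docs in the process described above.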
Once LLMs started getting good at coding, I would occasionally ask them for help at the beginning of this process, mostly replacing search engines. If I hit an error, I would copy and paste it into a chatbot to see what it said before trying hard to understand it (often, before reading it). This didn’t replace critical thinking, though, since I would still need to learn and plan to implement the change.
With the AI coding agents of the last few months, though, things are different. Often the agent can implement a whole feature end-to-end, with no involvement from me. Now when I need to make a change to the codebase, I don’t start by trying to understand. Instead, I see if my coding agent can “one-shot” the problem, and only step in if it seems to be failing. This happens less and less, and the features that I trust agents with have become bigger and bigger.
I believe in coding primarily as a means to an end. Coding agents have allowed me to do much more than before, so for the most part I am happy with them! But I’ll admit there is also something bothersome about turning features over to AI fully.

I do not build skills or knowledge as quickly this way. If I build a feature with a coding agent and then have to do it again, I won’t be any faster the second time. It’s possible to imagine writing code with AI for twenty years and not being much more skillful at the end of it. There’s no improvement.
If I do have to step in and save the LLM, I often become lost as well. All of a sudden, I am reading someone else’s code. Rather than gradually coming to terms with a solution to a problem, I am presented with the solution wholesale—only, it’s a little bit wrong. As LLMs handle bigger tasks for me, this gets worse. My only saving grace is that I will do it less often.
You might say that the new, real skill is prompting agents, but I don’t believe that. Prompting is easy and will only get easier. Hard knowledge about programming and the problem is what helps you make good design decisions, so this knowledge is the most important factor determining whether your coding agents are successful. Developing this knowledge is becoming optional.
Some people will probably respond to this by saying (snottily) that I should read the code that my agents produce, rather than rely on them blindly. I do read the code, but reviewing code is very different from producing it, and surely teaches you less. If you don’t believe this, I doubt you work in software.
Coding agents are here to stay, and you’re a fool if you don’t use them. Still, I think you’ll use them most successfully if you understand the domain in which you’re working. This used to be an essential byproduct of programming, but that’s not the case anymore. To this end, maybe it’s a good idea to write a minimum amount of code by hand as an educational task, rather than a productive one, or to try to write the solution to a problem yourself, and only compare with the LLM once you’re confident your answer is correct.
The pilot analogy hits different when you consider that pilots still train on simulators for exactly this reason — they're legally required to maintain proficiency even when autopilot handles 99% of flights.
There's no equivalent mandate for software engineers. Nothing stops you from spending years as a pure "prompt pilot" and losing the ability to read a stack trace or reason about algorithmic complexity. The atrophy is silent and gradual.
The author's suggestion to write code by hand as an educational exercise is right but will be ignored by most, because the feedback loop for skill atrophy is so delayed. You won't notice you've lost the skill until you're debugging something the agent made a mess of, under pressure, with no fallback.
The term "Children of the Magenta Line" has long been used in aviation to describe the over-reliance on automation. So even though they train to avoid losing manual skills, it's definitely still a concern.
We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM, how will you ever learn anything?
My biggest lessons were from hours of pain and toil, scouring the internet. When I finally found the solution, the dopamine hit ensured that lesson was burned into my neurons. There is no such dopamine hit with LLMs. You vaguely try to understand what it’s been doing for the last five minutes and try to steer it back on course. There is no strife.
I’m only 24 and I think my career would be on a very different path if the LLMs of today were available just five years ago.
OK, imagine you went back 30 years and had a swarm of experts around you whom you could ask anything you wanted, and who would even do the work for you if you wanted.
Does this mean you'd be incapable of learning anything? Or could you possibly learn way more, because you had the innate desire to learn and understand along with the best tool possible to do it?
It's the same thing here. How you use LLMs is all up to your mindset. Thoroughly review and ask questions about what it did and why; ask if it could have been done some other way instead. Hell, ask it just the questions you need and do it yourself, or don't use it at all. I was working in C++, for example, with heavy use of mutexes and shared and weak pointers, which I hadn't done before. The LLM fixed a race condition, and I got to ask it precisely what the issue was, and to draw a diagram showing what was happening in this exact scenario before and after.
I feel like I'm learning more, because I am doing way more high-level things now and spending way less time on the stuff I already know or don't care to know (non-fundamentals, like syntax and even libraries/frameworks). For example, I don't really give a fuck about being an expert in Spring Security. I care about how authentication works as a principle, what methods would be best for what, etc., but do I want to spend three hours debugging the nuances of configuring the Spring Security library for a small project I don't care about?
> Does this mean you'd be incapable of learning anything?
Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default; it has to be learned just like anything else. This sort of environment would make you an idiot if it's all you've ever known.
You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
I agree with your premise, but this example I strongly disagree with:
> You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
YES! Explain to them, and trust them. They might not do exactly as you wish for them, but I'll bet you don't do exactly as you wish for yourself either. The children need your trust and they must learn how to navigate this world by themselves, with parents providing guidance and only taking the hard stance (but still explaining and discussing!) when safety is concerned. Also, lead by example. If you eat vegetables then children are likely to eat them too. The children are not stupid, they just don't have enough experience yet. Which you gain by trying (and failing), not by listening.
You're right, it was a bad example. I also don't eat my vegetables. I was more trying to make the point that most of us are not rational actors either, was just using children as a convenient proxy, unfairly.
I see it as being more personality/interest than impulse control. A curious/interested person would try and get involved and be a part of it, someone uninterested will just say what's the point and get by having the work done for them.
It may very well have stunted my learning. What’s the point of absorbing information when you have a consortium of experts available 24/7?
Saying it comes down to how you use the LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today starting from zero?
Supposedly because AI has limits and you still have to know what you're doing so you can guide it and do it better.
If that's not true, then what's the problem with not learning the material? Go do something more productive with your time if personal curiosity isn't a good enough reason. We're in a whole new world.
> Saying it comes down to how you use the LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today starting from zero?
This is true, and I can't answer that 100% confidently. I imagine I would just be doing more, and more complicated, things and learning higher-level concepts. For example, if right off the bat I could produce a web app, I'd want to deploy it somewhere. So I'd come across things like ssh, nginx, port forwarding, jars, bundles, DNS, authentication, etc. Do this a thousand times, just the way I wrote a thousand different little functions or programs by hand, and you'll no shit absorb a little here and there as issues come up. Or maybe, since what was hard a year ago is easy today, I'd want to do something far more complex than anything anyone's been able to imagine before, and learn in that struggle.
Programmers in the 90s were far more adept at understanding CPU registers, memory, and all sorts of low-level stuff. Then the abstraction moved up the stack, and then again and again. I think the same thing will happen here.
Also, you can't say I'm in a privileged position for already knowing how to code and at the same time ask what's the point of learning it yourself.
The problem is that the abstraction level moved up so far that we're now programming in the English language, and we're more like managers than programmers. This will only get worse. The next step will be that AIs run entire companies. And BigAI will not allow us to profit from that because they will just run the AI themselves, the current situation was just a stepping stone.
Managers still need technical skills though.
If AIs really get there, we're all out of jobs to do.
> We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM, how will you ever learn anything?
This is just the same concern whenever a new technology appears.
* Socrates argued that writing would weaken memory, and that it would create only superficial knowledge, incapable of real understanding. But it didn't destroy memory. It allowed us to store information and share it with many others far away.
* The internet and web indexers made information instantly accessible, letting you search for exactly the information you need. The fear was that people would just copy from the internet, yet researching information became far faster; anyone with internet access could reach this information and learn for themselves. Just look at the number of educational websites with courses.
Each time a new technology arrived and people feared it would degrade knowledge, the tools only helped us increase it.
Just like with books and the internet, people can simply copy and not learn anything; that's not exclusive to LLMs. The issue isn't the tool itself, but how we use it. Instead of learning how to search, the new generation will probably need to learn how to prompt, ask, and evaluate whether the LLM is hallucinating.
Socrates was proven dead wrong by neurobiology.
LLMs making you dumber is far from being "disproven" by science. Quite the opposite: https://arxiv.org/abs/2506.08872
I'm not sure what you mean by Socrates was proven dead wrong.
The study you linked doesn't show that people are becoming dumber because of LLMs. It just shows that when you offload tasks to these tools, your brain engages less in that specific task, much as it does with a calculator (which does complex calculations for you instead of you doing them on paper), a spell-checker while writing, or a search engine instead of opening a book. The question is whether long-term cognitive capacity is reduced, and as I said before, this argument predates LLMs (all the way back to Socrates).
Also, take the study with a grain of salt, as it's a small sample of only 54 participants, on a single task, in a short-term study.
Personally, I believe LLMs just allow us to work at a higher level of abstraction.
At the beginning of the internet, I used to save all the webpages where I'd find info, just in case I was stuck without a connection or the website removed it. I had parts of MDN saved.
The internet never fell. I bet it’ll be the same with AI. You will never not have AI.
The big difference is the internet was a liberation movement: Everything became open. And free. AI is the opposite: By design, everything is closed.
Not only that. AI will see increasingly diminishing returns, since it relies on good-quality human-written code. As that becomes less and less available, the quality of generated code will also suffer, because at some point AI will be training on AI-generated content.
As an older person, I'm not worried. The world changes all the time. People are put in difficult situations, and they have to adapt. "Oh no, how will people learn things?" is not that big of a struggle in the grand scheme. We're not burning books or giving people lobotomies. People can still learn if they want to, easier than ever before. Businesses will adapt, people will adapt, by necessity. Things will be very different, sure. But then we get used to the difference, and it becomes normal.
Kids today couldn't imagine how people used to live just 100 years ago, like it was the dark ages. People from that age would probably look at kids 10 years ago and think, these poor children! They don't know how to work hard! They don't know anything about life! They're glued to these bizarre light machines! Every age is different.
Yea, IMO people shouldn't make jobs / professions too big a part of their identity. At some point human programming may be largely gone, but probably there will be increased demand for something else.
It should be the government's job to make it as easy as possible for people to retrain, switch jobs, and start new careers. Obviously taxation should be reworked too, if AI and robots replace lots of jobs in some sectors. Profits produced by efficiency gains shouldn't be concentrated among just a few billionaires.
My concern is also, how will programming and software design ever improve?
In my eyes, it will be the same as the introduction of garbage collectors. It will help to a degree, make people lazier along the way, and cause some additional, brand-new issues. But overall very little will change, as for serious implementations human intellect is still going to be the primary actor and AI will be disallowed.
I've seen a lot of posts like this one, but this is the first to encapsulate how I feel so well.
Honestly, I don't really know what to do. I spent my whole life (so far; I'm still very young) falling in love with programming, and now I just don't find this agent thing fun at all. But I just don't know how to find my niche if using LLMs truly does end up being the only way to build valuable things with the skills I have.
It's pretty depressing and very scary. But I appreciate this article for at least conveying that so effectively...
Could I ask: what did you love about programming, such that you now don't find this agent thing fun at all?
I'm genuinely curious, I feel very differently and excited about this agent thing.
Asking because, unlike a lot of other commentary, this struck me as being more about the act itself than about being depressed/anxious for financial reasons, etc.
I love the act of writing code, it clicks well with me. I love the feeling of my brain solving problems, figuring out how something works, and then finally understanding it. I love debugging. I love having built something that people love, solely wrought by my own fingers.
I got into programming because the act of spinning a web of code just feels like what I'm designed to do.
Vibe coding definitely has some of that, but it feels so detached from any understanding of the computer itself. I feel like I'm bossing someone around — and I would never want to be a non-coding manager. I'm curious, how/why do you feel so different?
(Obviously the financial side is stressful too, but I feel like I'm in a good spot to figure that out either way.)