
In 2021, being a good software engineer felt great. The world was full of software, with more companies arriving every year who needed to employ engineers to write their code and run their systems. I knew I was good at it, and I knew I could keep doing it for as long as I wanted to. The work I loved would not run out.
In 2026, I’m not sure the software engineering industry will survive another decade. If it does, I’m certain it’s going to change far more than it did in the last two decades. Maybe I’ll figure out a way to carve out a lucrative niche supervising AI agents, or maybe I’ll have to leave the industry entirely. Either way, the work I loved is going away.
It’s unseemly to grieve too much over it, for two reasons. First, the whole point of being a good software engineer in the 2010s was that code provided enough leverage to automate away other jobs. That’s why programming was (and still is) such a lucrative profession. The fact that we’re automating away our own industry is probably some kind of cosmic justice. But I think any working software engineer today is worrying about this question: what will be left for me to do, once AI agents have fully diffused into the industry?
The other reason it’s unseemly is that I’m probably going to be one of the last to go. As a staff engineer, my work has looked kind of like supervising AI agents since before AI agents were a thing: I spend much of my job communicating in human language to other engineers, making sure they’re on the right track, and so on. Junior and mid-level engineers will suffer before I do. Why hire a group of engineers to “be the hands” of a handful of very senior folks when you can rent instances of Claude Opus 4.6 for a fraction of the price?
I think my next ten years are going to be dominated by one question: will the tech industry overshoot or undershoot the capabilities of AI agents?
If tech companies undershoot - continuing to hire engineers long after AI agents are capable of replacing them - then at least I’ll hold onto my job for longer. Still, “my job” will increasingly mean “supervising groups of AI agents”. I’ll spend more time reviewing code than I do writing it, and more time reading model outputs than my actual codebase.
If tech companies tend to overshoot, it’s going to get a lot weirder, but I might actually have a better position in the medium term. In this world, tech companies collectively realize that they’ve stopped hiring too soon, and must scramble to get enough technical talent to manage their sprawling AI-generated codebases. As the market for juniors dries up, the total number of experienced senior and staff engineers will stagnate, driving up the demand for my labor (until the models get good enough to replace me entirely).
Of course, the software engineering industry has looked like it was dying in the past. High-level programming languages were supposed to let non-technical people write computer code. Outsourcing was supposed to kill demand for software engineers in high-cost-of-living countries. None of those prophecies of doom came true. However, I don’t think that’s much comfort. Industries do die when they’re made obsolete by technology. Eventually a crisis will come along that the industry can’t just ride out.
The most optimistic position is probably that somehow demand for software engineers increases, because the total amount of software rises so rapidly, even though you now need fewer engineers per line of software. This is widely referred to as the Jevons effect. Along these lines, I see some engineers saying things like “I’ll always have a job cleaning up this AI-generated code”.
I just don’t think that’s likely. AI agents can fix bugs and clean up code as well as they can write new code: that is, better than many engineers, and improving each month. Why would companies hire engineers to manage their AI-generated code instead of just throwing more and better AI at it?
If the Jevons effect is true, I think we would have to be hitting some kind of AI programming plateau where the tools are good enough to produce lots of code (we’re here already), but not quite good enough to maintain it. This is prima facie plausible. Every software engineer knows that maintaining code is harder than writing it. But unfortunately, I don’t think it’s true.
My personal experience of using AI tools is that they’re getting better and better at maintaining code. I’ve spent the last year or so asking almost every question I have about a codebase to an AI agent in parallel while I look for the answer myself, and I’ve seen them go from hopeless to “sometimes faster than me” to “usually faster than me and sometimes more insightful”.
Right now, there’s still plenty of room for a competent software engineer in the loop. But that room is shrinking. I don’t think there are any genuinely new capabilities that AI agents would need in order to take my job. They’d just have to get better and more reliable at doing the things they can already do. So it’s hard for me to believe that demand for software engineers is going to increase over time instead of decrease.
It sucks. I miss feeling like my job was secure, and that my biggest career problems would be grappling with things like burnout: internal struggles, not external ones. That said, it’s a bit silly for software engineers to complain when the automation train finally catches up to them.
At least I’m happy that I recognized that the good times were good while I was still in them. Even when the end of zero-interest rates made the industry less cosy, I still felt very lucky to be a software engineer. Even now I’m in a better position than many of my peers, particularly those who are very junior to the industry.
And hey, maybe I’m wrong! At this point, I hope I’m wrong, and that there really is some je ne sais quoi human element required to deliver good software. But if not, I and my colleagues are going to have to find something else to do.
edit: This post got some comments on Hacker News. Some commenters are doubtful, either because they don’t think AI coding is very good, or because they think human creativity/big-picture thinking/attention to detail will always be valuable. Others think ten years is way too optimistic. The top comment repeats the irony that I describe in the third paragraph of this post.
edit: This post also got some comments on the Serbian r/programming subreddit.
Out of curiosity I checked out the author’s resume and this is their current position. Oh the irony!!
I built significant pieces of the Copilot onboarding, purchasing, billing and settings flow. For eight months I headed up the Copilot anti-abuse effort. I then led the launch of GitHub Models, and am now working on other Copilot projects.
No wonder GitHub is always collapsing every week.
Tay.ai, Zoe and Copilot bots are being deployed and wrecking the platform, which can't fix its own infrastructure issues, while the humans just tweak all the tiniest issues.
They should instead focus on GitHub Actions and improving the uptime of the whole platform before doing anything AI.
Wherever you work, are there embarrassing experiences that your organization ships?
If so, why didn’t you personally fix them so that nobody could associate you as an individual with a broken CX?
If not, please let me know where to apply, because that sounds like a unicorn organization.
Humans in large groups do amazing and crappy things at the same time. Playing gotcha with someone’s resume is a shitty thing to do.
On the other hand the place I work isn't owned by a company with many many billions to throw around, including many billions paid for the platform in question. I agree attacking someone's resume is lame but attacking big corporate prioritization feels valid here to me.
Totally. None of us is as stupid as all of us.
But working in a well-funded company doesn’t make this problem go away. It makes it worse in a lot of ways.
For my org, I’ll be happy to be first in line to criticize some of the notably wrongheaded prioritization decisions.
Maybe they hope Copilot will write a better Actions for them
Irony indeed! After years of seeing no value, i finally cancelled my github copilot subscription. Poor quality product to say the least.
Must be painful. Copilot gives me serious "I can't do that, Dave" vibes. "But I can walk you through how to add events to your Outlook calendar manually, one by one."
In “Ask HN” style, let’s say that software engineering does go extinct in the next X years: what would you do?
I’ve thought about psychology. I know LLMs can work as pseudo-therapists but I feel like that’s a field where the human connection / human element will remain important.
So I see two sides.
On one hand some jobs with human element are safe, at first. Think of artists being made obsolete by the camera. Portrait artists became mostly obsolete, but we still pay for art. It's the story behind the art that became important. Or, I still go to cafes with nice atmosphere and friendly staff. There are restaurants with robot staff here in Japan, much cheaper. After the meal you pay at the table without ever talking to a person. But it does not feel nice to sit in there, so I gladly pay a premium for the nice coffee.
On the other hand, it is not only software jobs in danger, but all office jobs. So a lot of people may suddenly be out of money. Let's say you open a cafe, but no one has money to come and pay. Society has to change a lot from the current model to be able to handle this.
If software engineering goes extinct, a ton of other white-collar jobs will go with it, and we could be in an intractable depression.
I actually think AI has an unfair advantage with software that is making it seem far more capable than it is. Software is entirely text based, and producers have been putting their outputs and problem solving online for free for decades.
I think applying AI to other white-collar roles that also require problem solving but do not have as much training data will prove much more difficult. Even coding in proprietary-dominated domains is a much, much worse experience than people have with more accessible code. Using it for electronics has been hit or miss, embedded software is a bit shaky, game development is also challenging to use it for, etc.
I get what you’re saying, but there are trillions of books and blog posts and Stack Exchange questions about Excel and the handful of other things that most office workers do, too.
I honestly figured that’s why everyone is coming out with MS Office plugins for all the models, and MS itself is putting it in the tools.
So if most any company only needs one person to solve limited IT issues, prompt code production and deployment, generate the usual truckloads of excel spreadsheets, and do most of the finance and accounting… it starts to look pretty scary.
Then, what about the people making and maintaining all the facilities for these people we don’t need anymore? The world flipped its lid about commercial real estate when wfh became a thing. That was relatively small and temporary.
> Then, what about the people making and maintaining all the facilities for these people we don’t need anymore?
And all the small businesses like local restaurants and coffee shops that they frequent, etc, etc.
There are so many 2nd order contagion impacts if the knowledge work economy implodes that very few people won't be negatively impacted to some significant degree.
And some people seem to think that outcome means the government will step in and engineer some sort of soft landing. And outside of the US this may very well be true, but here in the US? Seems unlikely.
There’s a reason AI is banned in Dune and Warhammer 40K
While I see your point, I think AI may be banned in those universes for the same reason that time-travelling devices tend to be written out of sci-fi and fantasy: Stories are better without them. (If one counts the ever unfolding history of the world as a story, my counter-point actually validates your original point. Touché.)
Those broad limitations also make writing the stories easier, since the author/worldbuilder doesn't have to come up with 100 different reasons why it can't be done in each particular case. They only have to do it once.
No other profession has such a corpus of free training data available.
I know a lot of people working across portfolio management and tax accounting. Nobody I know of is using LLMs much and frankly their management has started to back-off pushing it more in the workplace.
LLMs suit some jobs more than others. It's quite possible SWEs are the only profession massively affected - whether that means an evolution of the role or decline/death is another question.
> I know a lot of people working across portfolio management and tax accounting. Nobody I know of is using LLMs much and frankly their management has started to back-off pushing it more in the workplace.
I could say the same thing for software engineers I know as recently as the middle of last year, things can change very quickly.
Up until about December 2025, the idea that LLMs would replace us all (SWEs) was the punchline to a joke for most working developers I know. But most of the ones I know aren't laughing anymore, unless it's a nervous laugh.
LLMs may (likely will) disrupt software developers first, but I don't think we are particularly unique and I don't see any reason why the same risks won't spread to virtually all knowledge work, especially if executives in those fields see a significant amount of SWEs being replaced by LLMs as an initial test case.
There are still a few quantum leaps needed. I have had great results with Opus 4.6, in particular on greenfield work. But it behaved really messily in some professional real-life projects. It seems you also sometimes need to tell it very specific things, but for that you need to be a software developer in the first place.
We'll see.
LLMs certainly aren't ready to replace all software developers yet.
They may never reach that point.
But even if they never get good enough to replace all software developers, they can still cause massive job losses by allowing companies to do the same work with far fewer developers.
Why would they do the same amount of work with fewer devs when they could also get much more done with the same amount of developers?
Sorry man. This feels like you are hoping SWE alone isn’t affected.
Unfortunately, the workflow of a software engineer has involved things like asking questions on Stack Overflow, using digital resources scattered across the web, and sharing code examples freely across the web.
The workflow of an accountant, portfolio manager etc has nothing to do with accessing and using the web in the same manner. If you did their jobs you’d know this, but you don’t. Right?
Is it really a surprise? Nope. Thankfully writing code isn't enough. So your job is still somewhat safe for now.
Tax and accounting is rule based reporting. With formal authorities and openly available rulesets on right and wrong. There’s judgement in it, but that’s even less true than in development. Maybe someone makes the case that there’s art and ergonomics in it too, but not more than swe.
Professional accreditation and responsibility is its only real moat. And those are “yeah but!” issues we hand-wave in discussions around swe too.
Otherwise those are more vulnerable.
Accounting. In my region tax law can suddenly change outside of the usual annual update cycle and, like everywhere, is riddled with edge cases and unclear interpretations.
Most importantly, there's often a period of general uncertainty and adoption during which the new law is already in force, but LLMs will rely on whatever was there previously.
Most people find this job stressful and boring, but the same can be said about software engineering. Regular people pay money to have it dealt with.
Overall I think there will always be demand for handling the messiness of the real world, and humans have the upper hand here because they learn as they go, not via release cycles costing a sizeable sum and taking months.
If LLMs can handle software engineering well enough to no longer need engineers, accounting will be solved by the same model version.
Seriously, if this future manifests, all of these standard effort-based jobs would become redundant...
The issue with outdated information is way overstated: you'd just add the current rules to the context when evaluating and be done with it. We're already at 1 million tokens of context... That's enough for a lot of rules, and the number will likely go higher as time progresses.
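The approach the comment describes can be sketched in a few lines. This is a minimal, hypothetical example: the rule text, the `build_prompt` helper, and its wording are all illustrative assumptions, not any particular product's API.

```python
# Sketch: working around an LLM's training cutoff by injecting the
# current, authoritative rules into the context window, so the model
# answers from them rather than from stale training data.
# The rules and helper below are hypothetical placeholders.

CURRENT_RULES = """\
(2026 amendment) Deduction X now applies only above threshold Y.
(2026 amendment) Filing deadline moved from April 30 to May 15.
"""

def build_prompt(question: str) -> str:
    # Prepend the up-to-date rules and instruct the model to rely
    # on them exclusively for its answer.
    return (
        "Answer using ONLY the rules below, not prior knowledge.\n\n"
        f"RULES:\n{CURRENT_RULES}\n"
        f"QUESTION: {question}\n"
    )

prompt = build_prompt("Does deduction X apply at threshold Y/2?")
```

Whether the model actually obeys the injected rules is a separate reliability question, but context length itself is no longer the bottleneck.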
Realistically, I would die.
A condo costs $2500/month so I will either be homeless and freeze to death or be euthanized.
Maybe I'm a contrarian but I don't think there's hope for anyone that doesn't control resources.
There is no way a condo would continue to cost $2500/mo in a world where there isn't a concentration of well-paid office jobs in that location.
Great! We'll be able to scoop up lots of hay and even toast it on the fire a bit. It's a good starvation-proof fallback.
You would die rather than move somewhere cheaper? What an odd take. I live in the midwest and pay $700/mo for a perfectly fine apartment in a clean and safe suburb.
I live in Canada so housing is uniformly expensive unless you live super rural.
Best choice would be moving up north and slaving in a mineral mine along with everyone else that lost their jobs. Like the 1920s.
I don't see myself being qualified for such a role since I am too short and don't have the physical leverage.
Housing is expensive in Canada, but it's absolutely not uniform. $2500/mo starting is crazy, which city are you sourcing these claims for? I live in a major city (but not Vancouver or Toronto, obviously) and if you're just trying to survive, you can live with roommates for $700-900, possibly less depending on your luck. Apartments, studios and other types of housing for one are about $1500 and up. Then you can go to Quebec and enjoy slighter cheaper housing still, even in the big cities. There's some middle ground between downtown Toronto and some mining town in northern Manitoba.
In this theoretical scenario where AI displaces everyone, the only thing with value will be housing and physical necessities, so I think housing prices will go up.
I think it's somewhat comparable to cutting grass in cities.
It was manual labour at first. Then there were tractors. Now robots are joining in - does that mean grass-cutting personnel are obsolete? No, you need all of them. It means the city becomes nicer.
With software and AI I somehow feel the same will happen. How many features have you skipped just because they would only help some niche set of users and PM or management would not approve the spending? Low priority. Or bugs that were annoying but financially not worth much?
I hope that in switching some work to AI, some companies will seize the opportunity to make software better, while others make the same software cheaper.
This was already asked today: https://news.ycombinator.com/item?id=47288773
Hadn’t seen that thread. Thanks for linking!
I have some ideas for small business, but am also keeping an eye on jobs that could be prominent and enjoyable. I would much prefer a small business (have done it before) but financial realities may dictate a regular job at first.
I think many of us who have been in software for a while will fantasize about low-tech jobs, I imagine there will be a bunch of hobby farms...
What's the thing that is the Apple ][ of this new world? Get into that? Maybe ASIC design and programming. Not sure.
I am also quite interested in psychology, but at least in my realm it would take something like 5–7 years of studies etc to become a licensed psychologist, if you want to work directly with humans. That is quite the investment and I am not sure that I have a long enough personal runway for that.
Now is the time to start thinking about being more than just IC, and thinking in terms of an entrepreneur. Call it a "lifestyle business" or whatever, but what can we work toward today that enables us to call the shots? Just don't fall down the trap of making it developer-related.
I've decided to look into becoming either a landscape designer or an electrician. Worst case scenario, there is always a nursing shortage and it's really not very hard to get that degree.
I suspect the number of startups will skyrocket in the next few years. Fired engineers will start to compete against the establishment that fired them. Competition may get a lot fiercer for a while.
> In 2026, I’m not sure the software engineering industry will survive another decade.
Due to a text predictor?
I'm a daily user of the most recent Claude and while it's amazing at presenting other people's knowledge and reducing cognitive load by filling in the gaps, it's still just a machine that predicts text and that is a limitation which won't be overcome in this generation of such tools which, including research demonstrations, are close to a decade old already.
To me the main issue is that investors are not aware of these limitations and will keep pouring money into this way beyond everyone's breaking point. But really that's a failing of the world's economic system, which relies too much on their whims.
Even if it's "just" a text predictor, it already outperforms the average person in certain domains, particularly software engineering. With all the recent advances in agentic systems like Claude Code and OpenCLAW, these text predictors can iterate and debug faster than the average human. Looking ahead at the next decade, I totally agree with Sean's view here.
> I'm a daily user of the most recent Claude and while it's amazing at presenting other people's knowledge and reducing cognitive load.
'Presenting other people's knowledge' is enough to get the job done when that knowledge encompasses the entire internet.
These are strong claims and I would want to see equally strong evidence for them.
My experience is that it's really darn good at producing text, but it's not a logic engine - it's not designed to be one and even the most recent versions make mistakes which indicate it's not actually thinking.
This is pretty strong evidence to me. https://simonwillison.net/2026/Mar/3/donald-knuth/
It doesn't outperform on any code I ever threw at it. It added more bugs, and invented them when there were none.
Well it’s not just a “text predictor” is it? You can pretend that today we still only have ChatGPT 2 and that there is only pre-training on a large corpus of information, but that simply isn’t true?
> Well it’s not just a “text predictor” is it?
It is, by definition, design, and architecture, a system that produces believable text.
Here's a task to give it which pulls the veil right off:
Ask it to add tests to a piece of code where code coverage is 100%, but it doesn't actually test functionality 100%. You'll start seeing nonsense sooner or later.
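The gap the comment is pointing at, between line coverage and actually testing behavior, is easy to show concretely. Here's a minimal sketch with a hypothetical `clamp` function: the test below executes every line (100% line coverage) yet never checks the upper-bound result, so the planted bug survives.

```python
def clamp(value, low, high):
    # Intended to clamp value into [low, high].
    if value < low:
        return low
    if value > high:
        return low  # bug: should return high, but no test catches it
    return value

def test_clamp():
    # Every line of clamp() runs -> coverage tools report 100%.
    assert clamp(5, 0, 10) == 5    # middle of the range
    assert clamp(-1, 0, 10) == 0   # lower bound checked
    clamp(99, 0, 10)               # upper bound exercised but never asserted
```

An agent asked to "improve the tests" has to notice that the third call asserts nothing, which requires reasoning about intent rather than just coverage numbers.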
A leading theory in neuroscience is that human brains are fundamentally prediction machines too, constantly predicting sensory input, other people’s behavior, the next word in a sentence. “it’s just prediction” isn’t the gotcha you think it is. Prediction and attention turn out to be a surprisingly powerful foundation for intelligence.
The “just a text predictor” framing was fair a couple years ago but hasn’t kept up. Current models can genuinely identify untested edge cases even when coverage is 100%. You're definitely using the latest and greatest models?
The architecture started as next-token prediction, sure, and yes, human judgment is still required, but that judgment is being captured and integrated too. Every time millions of people use these models, their feedback feeds the next round of improvements.
Also, these models don’t need to replace your best engineers to be disruptive. They just need to outcompete the bottom of the bell curve. For a lot of junior-level work, we’re already getting close.
> You're definitely using the latest and greatest models?
Claude 4.6 opus high, specifically.
As for human brains: every self-respecting Neural Networks 101 course is prefaced with "don't draw analogies to the human brain". And for good reason. Natural neural networks are fundamentally way more complex at every scale.
Also the brain indeed predicts, but also verifies and learns from the predictions. LLMs don't do that - not in real time at least.
You'll need to give it another metric to improve on.
I agree that the software industry should expect a major upheaval. Developers won’t be replaced by LLMs, but it’s extremely likely that software as a product will become less valuable. You can already vibe code solutions you would’ve otherwise had to pay for. As tools come out that take advantage of this, it’s going to just get easier to spin the app you want up instead of paying for someone else’s. Which is pretty cool if you aren’t already a software developer.
Yes, maybe in a decade all of the most popular LLMs will be dropped for lack of income. Only companies that learned how to run their own models, with locally optimized software on very specific hardware, could still use them for very specific tasks.