
The software industry is trying very hard right now to convince itself that software engineering is no longer necessary. Now anyone can do it. I'm calling bullshit!
Large language models can certainly write code, and sometimes that can be a time saver. Rather than searching Stack Overflow and other sources, I can go from description to code quickly. Sometimes it's spot on. Often it's not. But it's an impressive advance, no doubt. However, the industry seems to have concluded that software development has finally been simplified to the point where expertise isn't needed. If code can be generated on demand, then the hard part must be over. Architecture, specifications, careful validation — are those just quaint artifacts? Nonsense.
In some organizations this idea isn’t even being explored cautiously. It has already begun to shape policy. Engineers are being laid off in startling numbers with AI advances cited as making expertise redundant. The truth is that AI is just the latest excuse to deflect from bad business decisions or overwhelming market forces.
The discipline that governed how complex systems are built is being abandoned practically wholesale. Prompting an AI is increasingly being presented as a replacement for the discipline that once defined software engineering. Should I use another expletive here? You can fill in your favorite.
I’m writing this because I'm feeling déjà vu all over again (anyone know who Yogi Berra was?). Over the course of a long career you start to recognize certain patterns in this industry. Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in in the hopes that the market will respond positively. And then, slowly, the systems continue to grow. The complexity grows. And now what?
I’ve spent more than four decades in this industry, and I’ve watched several cycles like this play out. The tools change and the arguments change, but the pattern rarely does.
It never works out the way people expect.
Aircraft maintenance has evolved as the aircraft systems themselves have evolved. The hand tools improved. Diagnostics became computerized. Manuals are digital. Procedures are well documented. AI systems can help interpret telemetry from the aircraft. Given all that progress, do we still need trained aircraft mechanics? Of course.
Modern aircraft are extraordinarily complex systems. A commercial airliner contains millions of parts and thousands of interconnected subsystems. Diagnosing a problem is not simply a matter of having the right tools or following a checklist. It requires experience. It requires judgment. It requires understanding how those systems behave under real operating conditions.
The tools help. The manuals help. The diagnostic systems help. But none of those things replace the expertise of the people responsible for maintaining the aircraft. No airline would ever suggest that improved tools eliminate the need for trained mechanics in favor of having the gate agent do repairs (sorry, no offense to gate agents).
Yet that is very close to the argument the software industry is now making about itself. Apparently we can finally get rid of those pesky software developers?
Before going further it’s worth clarifying something important. I’m not talking about hobbyists. I’m not talking about someone experimenting with a small application, building something for personal use, or exploring a new idea. People should absolutely do those things. Some of the most interesting ideas in computing have come from exactly that kind of experimentation.
But professional software development is a different category entirely.
Professional software is not a hobby project. It is a product. It is something customers rely on. It processes payments, stores sensitive information, manages infrastructure, and increasingly operates systems that people depend on every day. Once software crosses that line, the expectations change.
Customers assume the system behaves correctly. They assume it will continue to behave correctly as it evolves. They assume the people building it understand how the system actually works. Those expectations are not unreasonable. They are the basic conditions of professional engineering. And that is where discipline and expertise become unavoidable.
One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was. Typing syntax into a machine has always been the least interesting part of building a system. The difficult work lies elsewhere: deciding how the system should behave, determining how its components interact, and ensuring that the system remains understandable as it grows in complexity.
Those questions require design decisions, careful reasoning, and a clear understanding of how changes propagate through a system over time. They are engineering problems, not coding problems.
Reducing the effort required to produce code does not eliminate those problems. It simply allows people to produce larger and more complicated systems more quickly. The delusion is that this is a productivity gain. It's not. Not yet. It has shifted the burden elsewhere. Just consider code review and the cognitive load required to actually deal with all of the code that someone can generate. That's ultimately a bigger drain on productivity than writing the code. And if the underlying behavior has not been understood clearly enough, the additional speed merely accelerates the moment when the complexity becomes unmanageable and the outcome is wrong.
In the 1990s we heard something similar about tools such as Visual Basic. The promise was that programming had been democratized and that software development would no longer require specialized expertise. Anyone with a useful idea could now produce an application.
There was some truth to that claim. Visual Basic enabled many applications that might never have been written otherwise. But it didn't eliminate the need for engineering discipline.
As systems grew larger and more interconnected, organizations rediscovered something important: producing software artifacts is not the same thing as engineering reliable systems.
What we are seeing today is the same pattern again, only amplified. Instead of lowering the barrier to building applications, large language models lower the barrier to producing code itself.
From that has emerged the seductive belief that expertise is no longer necessary.
Up to this point the hype sounds like reality. Better tools. Faster output. Less friction. But every wave of enthusiasm in this industry eventually runs into the same problem. It isn’t a tooling problem. It isn’t really a productivity problem either. It’s a systems problem.
Reliable software depends on something that most people outside engineering rarely talk about: alignment. A system begins with an idea about how something should behave. That idea is written down as a specification. Engineers translate that specification into tests and into production code. For the system to remain reliable over time, those three things have to stay aligned.
The specification describes the behavior. The tests verify it. And the implementation actually performs it. When those three drift apart, the system slowly begins to lose its integrity.
Specifications describe behavior that the system no longer implements. Tests verify fragments of behavior but miss the rest. Engineers who arrive later are forced to infer how the system really behaves by reading code that may or may not reflect the original design.
At first that seems manageable. A few educated guesses here and there. But over time the guesses pile up. Eventually the system becomes something nobody really understands anymore.
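To make the drift concrete, here is a toy sketch (the function, the discount rates, and the history are invented purely for illustration) of a spec, an implementation, and a test falling out of alignment:

```python
def apply_discount(price: float, is_member: bool) -> float:
    """Spec (v1): members receive a 10% discount."""
    # The implementation was later changed to 15% for a promotion,
    # but the docstring above (the spec) was never updated.
    return price * 0.85 if is_member else price

# The original test encoded the v1 spec and started failing after the
# change, so someone "fixed" it to match the code instead of the intent:
assert apply_discount(100.0, True) == 85.0
```

At this point the spec describes behavior the system no longer implements, and the test verifies the code rather than the intent. An engineer arriving later has no way to know which of the three was ever right.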
In my whitepaper Engineering Alignment, I describe this phenomenon as spec drift. Spec drift is exactly what it sounds like: the description of the system and the system itself gradually move apart.
Sometimes the code changes and the specification doesn’t. Sometimes the specification evolves but the tests remain frozen. Sometimes the behavior shifts incrementally until nobody can say with confidence what the original intent actually was.
However it happens, the result is the same. The system loses alignment. And once that happens, reliability rarely survives for long. You can read more about this problem here: https://robenglander.com/writing/engineering-alignment/
Large language models dramatically accelerate the production of code. That is their greatest strength. It is also where the danger appears.
When code can be produced faster than the engineering discipline surrounding it, the forces that create spec drift begin to accelerate. Changes that once required careful thought and manual implementation can now appear in seconds. Entire sections of a system can be rewritten before anyone has asked whether the behavior still corresponds to the specification.
The code usually looks reasonable. It compiles. It reads well. It might even pass the existing tests. But the alignment that once governed the system may already be gone. What appears to be productivity can quietly become the ability to move toward misalignment faster than ever before.
None of this means large language models are a mistake. They are remarkable tools, and used thoughtfully they can dramatically improve the way engineers explore and design systems.
Language models are exceptionally good at helping engineers reason about problems, explore design alternatives, summarize complex systems, and generate drafts that accelerate the early stages of implementation.
Where they struggle is in the areas that require strict discipline and consistency over time. Maintaining alignment between specifications, tests, and implementation remains an engineering responsibility. No tool can replace that responsibility, although many tools can help support it.
The real opportunity lies in using language models in ways that strengthen the engineering process rather than quietly replacing it.
One of the more interesting possibilities opened by language models is that parts of software engineering may become more conversational. For decades the tools we used to design systems were rigid. Specifications were documents. Architectures were diagrams. The reasoning that led to those artifacts often disappeared into meetings and hallway conversations.
Language models change that dynamic. Engineers can explore ideas interactively, test assumptions, and work through designs in ways that feel much closer to natural conversation. That ability is genuinely valuable. But conversation is not engineering.
Conversation is how ideas are explored. Engineering begins when those ideas are captured in a form that can be validated, tested, and maintained. The challenge for the next generation of engineering tools will be learning how to bridge those two worlds without losing the discipline that complex systems require.
Professional software still requires engineers who understand how the systems they build actually work. Tools can accelerate development, but they do not eliminate the expertise required to design, reason about, and maintain complex systems. Right now the industry seems dangerously close to forgetting that.
LLMs are remarkable tools. They can make experienced engineers far more productive. But they do not replace the engineering discipline required to build reliable systems.
Let’s use these tools effectively, not worshipfully.
I disagree with the premise. It made all engineering easier. Bad and good.
I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.
I believe the article exaggerates to make a point. Yes, good engineering can also be assisted with LLM-based agents, but there is a delta.
Good engineering requires that you still pay attention to the result produced by the agent(s).
Bad engineering might skip over that part.
Therefore, via Amdahl's law, LLM-based agents overall provide more acceleration to bad engineering than they do to good engineering.
The connection to Amdahl's law is totally on point. If you're just using LLMs as a faster way to get _your_ ideas down, but still want to ensure you validate and understand the output, you won't get the mythical 10x improvement so many seem to claim they're getting. And if you do want that 10x speedup, you have to forego the validation and understanding.
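A back-of-the-envelope sketch of that point, with invented, purely illustrative fractions and speedup factors:

```python
def overall_speedup(accelerated_fraction: float, factor: float) -> float:
    """Amdahl's law: only the accelerated portion of the work gets faster."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# If producing code is 30% of the job and an LLM makes it 10x faster,
# the overall gain is modest because validation still dominates:
print(round(overall_speedup(0.3, 10.0), 2))  # ~1.37

# Skip the validation and understanding (treat 90% of the job as
# accelerable) and you approach the advertised speedups:
print(round(overall_speedup(0.9, 10.0), 2))  # ~5.26
```

The numbers are made up, but the shape of the curve is the point: the less of the job you insist on doing carefully, the bigger the headline speedup.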
I do agree with you, but don't underestimate the projects where you can actually apply this 10x. For example, I wanted to get some analytics out of my database. What would have been a full weekend project was now done in an hour. So for such things there is a huge speed boost.
But once software becomes bigger and more complex, the LLM starts messing up, and the expert has to come in. That basically means your months-long project cannot be done in a week.
My personal prediction: plugins and systems that support plugins will become important. Because a plugin can be written at 10x speed. The system itself, not so much.
I think there will also be a lot of work in how to modularize month long projects into plugin sized pieces.
Well, it's made bad engineering massively easier and good engineering a little easier.
So much so that many people who were doing good engineering before have opted to move to doing three times as much bad engineering instead of doing 10% more good engineering.
In corporate app development, I would see tests to check that the mocks return the expected values. Like, what are we even doing here?
    # abstract internals for no reason
    func doThing(x: bool)
        if (x)
            return true
        else
            return false

    # make sure our logic works as expected
    assert(doThing(true))
    # ???
    # profit
> It's excellent software engineering because there are tests

You could ask the same thing about tests themselves. And I'm not talking about tests that don't exercise the code in a meaningful manner, like your assertions on mocks(?!)
I'm saying you could make the same argument about useful tests themselves. What is testing that the tests are correct?
Uncle Bob would say the production code is testing the tests but only in the limited, one-time, acceptance case where the programmer who watches the test fail, implements code, and then watches it pass (in the ideal test-driven development scenario.)
But what we do all boils down to acceptance. A human user or stakeholder continues to accept the code as correct equals a job well done.
Of course, this is itself a flawed check because humans are flawed and miss things and they don't know what they want anyhow. The Agile Manifesto and Extreme Programming was all about organizing to make course corrections as cheap as possible to accommodate fickle humanity.
> Like, what are we even doing here?
What ARE we doing? A slapdash job on the whole. And, AI is just making slapdash more acceptable and accepted because it is so clever and the boards of directors are busy running this next latest craze into the dirt. "Baffle 'em with bullsh*t" works in every sector of life and lets people get away with all manner of sins.
I think what we SHOULD be doing is plying our craft. We should be using AI as a thinking tool, and not treat it like a replacement for ourselves and our thinking.
I'm trying to wrap my head around this.
So there are tests that leverage mocks. Those mocks help validate software is performing as desired by enabling tests to see the software behaves as desired in varying contexts.
If the software fails, it is because the mocks exposed that under certain inputs, undesired behavior occurs, an assert fails, and a red line flags the test output.
Validating that the mocks return the desired output.... Maybe there is a desire that the mocks return a stream of random numbers and the mock validation tests asserts said stream adheres to a particular distribution?
Maybe someone in the past pushed a bad mock into prod, that mock validated a test that would have failed given better mock, and a post mortem when the bad software, now pushed into prod, was traced to a bad mock derived a requirement that all mocks must be validated?
Yeah, seems plausible, or it was just "belt and suspenders." Sure made a lot of pretty green checkmarks.
Someone was asked to test untestable code so verifying mock contents was the best they could come up with.
No. Someone was asked to meet an arbitrary code coverage threshold. I'm dealing with this malicious compliance/weaponized incompetence at $current_job
How will you deal with it? I successfully convinced $big_important_group at $day_job to not implement a policy of failing their builds when code coverage dips below their target threshold > 90%. (Insane target, but that's a different conversation.)
I convinced them that if they wanted to treat uncovered lines of code as tech debt, they needed to add an epic of stories to their backlog to write tests. And that artificially setting some high target coverage threshold would produce garbage, because developers will write do-nothing tests in order to get their work done and not trip the alarms. I argued that failing the builds on code coverage would be unfair because the tech debt created by past developers would hinder random current-day devs getting their work done.
Instead, I recommended they pick their current coverage percentage (it was < 10% at the time) and set the threshold to that simply to prevent backsliding as new code was added. Then, as their backlogged, legit tests were implemented, ratchet up the coverage threshold to the new high water mark. This meant all new code would get tests written for them.
And, instead of failing builds, I recommended email blasts to the whole team to indicate there was some recent backsliding in the testing regime and the codebase had grown without accompanying tests. It was not a huge shame event, but a good motivator for the team to keep up the quality. SonarQube was great for long-term tracking of coverage stats.
Finally, I argued the coverage tool needed to have very liberal "ignore" rules that were agreed to by all members of the team (including managers). Anything that did not represent testable logic written by the team: generated code, configurations, tests themselves, should not count against their code coverage percentages.
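The ratcheting scheme above can be sketched in a few lines. This is a hypothetical CI step, not any particular tool's API; the state-file name and message format are assumptions:

```python
import json

def check_coverage_ratchet(current: float, state_file: str = "coverage_ratchet.json"):
    """Return a warning on backsliding; otherwise ratchet the high-water mark up."""
    try:
        with open(state_file) as f:
            high_water = json.load(f)["high_water"]
    except FileNotFoundError:
        high_water = current  # first run: record current coverage as the baseline
    if current < high_water:
        # Don't fail the build; surface a message for the team-wide email blast.
        return (f"Coverage slipped to {current:.1f}% "
                f"(high-water mark: {high_water:.1f}%)")
    with open(state_file, "w") as f:
        json.dump({"high_water": current}, f)
    return None
```

The key design choice is that the threshold only ever moves up as legitimate tests land, so new code gets covered without punishing today's devs for yesterday's debt.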
we use this https://github.com/auchenberg/volkswagen
I think it's easy to forget that the LLM is not a magic oracle. It doesn't give great answers. What you do with the LLM's output determines whether the engineering you produce is good or bad. There are places where you can plonk in the LLM's output as-is and places you can't, or times when you have to keep nudging for a better output, and times when nothing the LLM produces is worth keeping.
It makes bad engineering easier because it's easy to fall into the trap of "if the LLM said so, it must be right".
Even if you agree with the OP, there's a large portion of applications where it simply doesn't matter if the quality of the software is good or terrible as long as it sufficiently works.
Yeah, I've seen this too. I like to call them "single-serving apps". I made a flashcard app to study for interviews and one-shot it with Claude Code. I've had it add some features here and there but haven't really looked at the code.
It's just a small CLI app in 3 TypeScript files.
> Ive known people at every company who add copious null checks rather than understanding things and fixing them properly.
Y'know "defensive programming" is a thing, yeah? Sorry mate, but that statement I'd expect from juniors, who are also often the ones claiming their own technical superiority over others
Adding null checks where they aren't needed means adding branching complexity. It means handling cases that may never need to be handled. Doing all that makes it harder to understand "could this variable ever be null?" If you can't answer that question, it is now harder to write code in the future, often leading to even more unnecessary null checks.
I've seen legacy code bases during code review where someone will ask "should we have a null check there?" and often no-one knows the answer. The solution is to use nullability annotations IMO.
It's easy to just say "oh this is just something a junior would say", but come on, have an actual discussion about it rather than implying anyone who has that opinion is inexperienced.
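For what it's worth, here is a minimal sketch of the annotation approach in Python's type-hint idiom (assuming a checker like mypy; the names and data are invented):

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    # The return type makes nullability explicit: callers must handle None.
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

def greet(user_id: int) -> str:
    name = find_user(user_id)
    if name is None:  # the one null check that is actually required
        return "Hello, guest"
    # A type checker knows name is a str here, so no further checks
    # (and no extra branches) are needed downstream.
    return f"Hello, {name}"
```

The annotation answers "could this variable ever be null?" at the signature, so defensive checks don't have to spread through every caller.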
No, the branching complexity exists anyway. You've just made it clearly visible by adding a null check, or accepted that the computation may fail if the assumption is violated.
You never know what changes are being done in the future, while today the variable may not be nullable in the scenario you're up-to-date on, that doesn't necessarily mean it'll stay like that in the future.
Ultimately, there is a cost associated with null checks everywhere and another by omitting them. The person I responded to just insinuated that people who introduce copious amounts of null checks are inept and lazy.
In response to that I pointed out that that's literally one of the core tenets of defensive programming, and people that make such sweeping statements about other people's capabilities in this way are very often juniors. I stand by this opinion. You can disagree on specific places where a null check may have been placed unnecessarily, but that's always a discussion about a specific field and cannot be generalized like he did there.
Except vibe coding is not "engineering," but more akin to project management. Engineering presupposes a deep and thorough understanding of your code. If you ship code that you’ve never even looked at, you are no longer an engineer.
100%.
There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.
Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.
Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.
There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.
I don't have any idea of what a unit test is, but with AI I can make programs that help me immensely in my real world job.
Snobby programmers would never even return an email offering money for their services.
It's unclear what point you're even trying to make, other than that AI has been helpful to you. But surely you understand that if you don't know what a unit test is you're probably not in a position to comment on the value of unit testing.
> Snobby programmers would never even return an email offering money for their services.
Why would they? I don't respond to the vast majority of emails, and I'm already employed.
Helpful to me and millions of others. Soon to be billions even.
You are employed because somewhere in the pipeline there are paying customers. They don't care about unit tests, they care about having their problems solved. Beware of AI.
Right... I mean, no engineer is going to tell you that customers care about unit tests, so I think you're arguing against a straw man here. What engineers will tell you is that bugs cost money, support costs money, etc, and that unit tests are one of the ways we cheaply reduce those costs in order to solve problems, which is what we're in the business of doing.
We are all very aware of the fact that customers pay us... it seems totally strange to be that you think we wouldn't be aware of this fact. I suspect this is where the disconnect comes in, much to the point of the article - you seem to think that engineers just write tests and code, but the article points out how silly that is, we spend most of our time thinking about our customers, features, user experience, and how to match that to the technology choices that will allow us to build, maintain, and support systems that meet customer expectations.
I think people outside of engineering might be very confused about this, strangely, but engineers do a ton of product work, support work, etc, and our job is to match that to the right technology choices.
Yeeesh. Looking at other posts from that user, they seem to have a serious grudge against software devs, presumably for not responding to their emails. "You should starve" - words taken from another post.
Look, no one wanted to write code for you idk what to tell you. Now you can have AI do that for you. Congrats, best of luck. Whatever weird personal issue you have, I doubt anyone was not working for you out of some whatever this perceived snobbery is and it's just like... we all have jobs?
I don't have a grudge, but there needs to be some balance. Software devs are incredibly well paid compared to other professionals. It is their responsibility to use their talents to benefit themselves, and if they are out-competed then they should work with something else. They don't have a right to a fantastic career.
All other workers have had to go through this when their fields became more automated and efficient. A cargo ship used to have hundreds of crew, now it's a dozen and the amount of cargo on a ship is ten times more.
So I will absolutely not cry for a software dev who has to make changes in the face of AI competition. If they're too precious to adapt or take a different job, then starve.
> Look, no one wanted to write code for you idk what to tell you. Now you can have AI do that for you. Congrats, best of luck.
Me and hundreds of thousands of other organizations who have software needs that were under served by the market. Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.
I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on. If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source. Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.
> If they're too precious to adapt or take a different job, then starve.
Yeah I mean I think everyone is with you except for the "then starve", this is just weirdly combative and lacking in empathy, I find it totally strange.
> Me and hundreds of thousands of other organizations who have software needs that were under served by the market.
And... you blame software developers for that? You blame software devs for a lack of capacity in the field? So weird.
> Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.
Okay, I mean, this has always been an option. I guess it will be more of an option now. There have been consulting agencies or "WYSIWYG" editors like Wix or other "low code/ no code" platforms for ages. No one is going to be upset that you're using them. This hostility is totally one sided lol
> I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on.
We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.
> If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source.
I find this funny because hobbyist/ open source projects are by far the ones that are glamorized by the field, not enterprise software.
> Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.
I have no idea where you get this impression from. Most software devs I've worked with are motivated heavily by solving real world problems. I think you have very, very little insight into what software development actually looks like or what software engineers are motivated by. Frankly, this comes off as very much "I was snubbed and now I'm happy that the people who I perceive as having snubbed me will be replaced by AI", which I think is quite lame. You definitely seem to have a resentful tone to your posts that I find weird.
Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force. Asking yourself "who is going to purchase the products?" when software development is being automated is quite silly. Why didn't they ask that question when thousands of other professions suffered the same?
> And... you blame software developers for that?
I don't blame them. They had more lucrative ventures to tend to. Now that under served market segments can be served with the help of AI, then they shouldn't complain.
You mention making web sites, but this is probably the only field in computing where the market has a lot of offerings to customers from all segments. If I need a website I don't have to use Wix, there is an endless supply of freelancers or small, medium, or big studios that offer their services. The same cannot be said of other bespoke software needs.
BTW, you are hearing a lot more hostility in my comments than is actually there.
> We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.
Yes it is snobbery. Other skilled professionals generally do not have that option, they have to do the boring stuff as well. And if you only like to work on interesting projects, then why are people complaining that AI is taking their jobs?
Regarding hobbyist / open source, I mean that when software devs aren't working on big enterprise style projects as a job, they tend to work on enterprise style projects as open source, or just play with hobbyist projects. Servicing smaller customers with bespoke software seems to be considered a little bit beneath the programmer dignity.
And it's not my personal experience talking. Consider how many studios are offering bespoke software for small businesses, compared to how many studios are offering websites for small businesses. There's a huge gap, that is probably going to be filled in some way pretty soon.
> Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force.
This is silly equivocation. I'm telling you that your statement lacks empathy, and you're making vague, unclear gestures to an entire field.
Anyway, reading your post it's clear that you have a rather pathetic grudge because software devs weren't interested in working with you and now you get to grin gleefully as you see AI take away jobs. You obviously have zero insight into software development as a practice, nor how software devs think - this is glaringly obvious from your claim that software devs giving software away for free is somehow snobbery because they wouldn't work on whatever project you clearly hold a grudge over. Further, your comments from start to finish demonstrate a complete lack of understanding of what the job actually entails.
> BTW, you are hearing a lot more hostility in my comments than is actually there.
Maybe so! I can't tell you what you actually think, but it comes off as really pathetic, so maybe reread your post and consider why I'm hearing it.
Best of luck in your ventures.
What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.
With AI code we might get software that is an ugly mess underneath, but at least we have it. While human programmers are unwilling to provide this software for even a high price.
I could argue that people are better off having nothing to eat rather than having low quality food. But in reality something is better than nothing.
There is a gigantic market and a gigantic need of software in the field between hobbyist and enterprise. And AI code will serve that field. Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.
Otherwise more and more people will do like me and have AI make their own bespoke solutions.
> What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.
You think that good engineering is unrelated to fulfilling needs at a price they can pay? I think you're confused. Software engineers are tasked with exactly this problem - determining how to deliver utility at a price point. That's... the whole job. We consider how much it costs to run a database, how to target hardware constraints, how to build software that will scale accordingly as new features are added or new users onboard, etc. That's the job...
That's sort of the whole point of the article. The job isn't "write code", which is what AI does. The job is "understand the customer, understand technology, figure out how to match customer expectations, business constraints, and technologies together in a way that can be maintained at-cost".
> While human programmers are unwilling to provide this software for even a high price.
Sorry, but this is just you whining that people didn't want to work for you. Software engineers obviously are willing to provide software in exchange for money, hence... idk, everyone's jobs.
> And AI code will serve that field.
That may be true, just as no-code solutions have done in the past.
> Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.
Yes, I agree.
> Otherwise more and more people will do like me and have AI make their own bespoke solutions.
I'm a bit skeptical of this broadly in the long term but it's certainly the case that some portion of the market will be served by this approach.
The end user of a bridge doesn’t care about most things the engineer who designed it does. They care that the bridge spans the gap and is safe to use. The company building the bridge and the group financing its construction care about a few more things like how much it will cost to provide those things to the end user, how long it will last. The engineer cares about a few more things: will the tolerances in the materials used in the struts account for the shearing forces of the most extreme weather in the region?
So it is with software.
You might not need a blueprint if you’re building a shed in your back yard. This is the kind of software that an end user might write or script themselves. If it’s a bit off, nobody is going to get hurt.
In many cities in North America you can’t construct a dwelling with plumbing connections to a sewer and a permanent foundation without an engineer. And you need an engineer and a blueprint to get the license to go ahead with your construction.
Because if you get it wrong you can make people in the dwelling sick and damage the surrounding environment and infrastructure.
Software-wise this is where you’re handling other people’s sensitive data. You probably have more than one component that needs to interact with others. If you get it wrong people could lose money, assets could get damaged, etc.
This is where I think the software industry needs to figure out liability and maybe professionalize a bit. Right now the liability is with the company and the deterrents are basically no worse than a speeding ticket in most cases. It’s more profitable to keep speeding and pay off the ticket than to prevent the harm caused by throwing out sloppy code and seeing what sticks.
Then if you are building a skyscraper… well yeah, that’s the large scale stuff you build at big tech.
There are different degrees of software with different requirements. While not engineers by professional accreditation, in practice I would say most software developers are doing engineering… or trying.
What I agree with in the article is that AI tools make bad engineering easier. That goes for people building houses and skyscrapers, who should be thinking about blueprints: they are working under the assumption that the AI is “smart” and will build skyscrapers for them. They’re not thinking about the things they ought to be, least of all the things that will pass on to the customers: cost and a product that isn’t fit for use.
A bridge that falls down if you drive too fast over it isn’t a useful bridge even though it looks like a bridge.
> Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in in the hopes that the market will respond positively.
As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
But not if the only reward for this would be to be laid off.
So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
The goal has nothing to do with you being employed. Your job security is just collateral damage on the way to the ultimate goal of building AGI, and software development salaries and employment will be affected before we get there. In my opinion, we are already past the SWE peak as far as yearly salary goes. Yes, there are super devs working on AI making a lot of dough, but I consider that a particular specialty. On average, the salary of a new-grad SWE in the US is past its peak if you consider how many new grads can’t get a job.
> if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
I don't really believe this is possible. Or, it's the sort of thing that gets solved at a "product" level. Reality is complicated. People are complicated. The idea that software can exist without complexity feels sort of absurd to me.
That said, to your larger point, I think the goal is basically to eliminate the middle class.
You can work with something else if there's no longer any demand for your current skills. If you refuse you should starve.
> So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
Capitalism is reliant on the underclass (the homeless, those paid below minimum wage) to add pressure to the broader class of workers in a way that makes them take jobs that they ordinarily wouldn't (because they may be, e.g., physically/emotionally unsafe, unethical, demeaning), for less money than they deserve and for more hours than they should. This is done in order to ensure that the price of work for companies is low, and that they can always draw upon a needy, desperate workforce if required. You either comply with company requirements, or you get fired and hope you have enough runway not to starve. This was written about over a hundred years ago, and it's especially true today in its modern form. Programmers as a field have just been materially insulated from the modern realities of "your job is timing your bathroom breaks, tracking how many hours you spend looking at the internet, your boss verbally abuses you for being slow, and you aren't making enough money to eat properly".
This is also why many places do de-facto 'cleansings' of homeless people by destroying their shelters or removing their ability to survive off donations, and why the support that is given to people without the means to survive is not only tedious but almost impossible to get. The majority of workers are supposed to look at that and go "well fuck, glad that's not me!" with a little part of their brain going "if I lost my job and things went badly, that could become me."
This is also why immigration enforcement is a thing — so many modern jobs that nobody else in the western world wants to do are taken by immigrants. The employer won't look too closely at the visa, and in return the person gets work. With the benefit being towards the employer — if the person refuses to do something dangerous to themselves or others, or refuses to produce enough output to sustain the exponential growth at great personal cost, well, then the company can just cut the immigrant loose with no recourse, or outright call the authorities on them so they get deported. Significantly less risky to get people to work in intolerable conditions for illegal wages if there is no hope of them suing you for this.
Back in the 1900s there were international conventions aimed at abolishing passports. Now? Well, immigrants are a convenient underclass for political manoeuvring. Why would you want people to have freedom of movement when your own citizens could just leave if things get bad, and when the benefit is a free workforce that you don't have to obey workers' rights laws for?