AI Made Writing Code Easier. It Made Being an Engineer Harder

2026-03-01 14:09 · www.ivanturkovic.com

Writing code is easier than ever. Being a software engineer is harder than ever. The paradox nobody talks about, and what engineers and leaders should do.

Yes, writing code is easier than ever.

AI assistants autocomplete your functions. Agents scaffold entire features. You can describe what you want in plain English and watch working code appear in seconds. The barrier to producing code has never been lower.

And yet, the day-to-day life of software engineers has gotten more complex, more demanding, and more exhausting than it was two years ago.

This is not a contradiction. It is the reality of what happens when an industry adopts a powerful new tool without pausing to consider the second-order effects on the people using it.

If you are a software engineer reading this and feeling like your job quietly became harder while everyone around you celebrates how easy everything is now, you are not imagining things. The job changed. The expectations changed. And nobody sent a memo.

There is a phenomenon happening right now that most engineers feel but struggle to articulate. The expected output of a software engineer in 2026 is dramatically higher than it was in 2023. Not because anyone held a meeting and announced new targets. Not because your manager sat you down and explained the new rules. The baseline just moved.

It moved because AI tools made certain tasks faster. And when tasks become faster, the assumption follows immediately: you should be doing more. Not in the future. Now.

A February 2026 study published in Harvard Business Review tracked 200 employees at a U.S. tech company over eight months. The researchers found something that will sound familiar to anyone living through this shift. Workers did not use AI to finish earlier and go home. They used it to do more. They took on broader tasks, worked at a faster pace, and extended their hours, often without anyone asking them to. The researchers described a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed. Higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted. And a wider scope further expanded the quantity and density of work.

The numbers tell the rest of the story. Eighty-three percent of workers in the study said AI increased their workload. Burnout was reported by 62 percent of associates and 61 percent of entry-level workers. Among C-suite leaders? Just 38 percent. The people doing the actual work are carrying the intensity. The people setting the expectations are not feeling it the same way.

This gap matters enormously. If leadership believes AI is making everything easier while engineers are drowning in a new kind of complexity, the result is a slow erosion of trust, morale, and eventually talent.

A separate survey of over 600 engineering professionals found that nearly two-thirds of engineers experience burnout despite their organizations using AI in development. Forty-three percent said leadership was out of touch with team challenges. Over a third reported that productivity had actually decreased over the past year, even as their companies invested more in AI tooling.

The baseline moved. The expectations rose. And for many engineers, no one acknowledged that the job they signed up for had fundamentally changed.

Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.

Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.

Now they are being told to stop.

Not explicitly, of course. Nobody walks into a standup and says “stop writing code.” But the message is there, subtle and persistent. Use AI to write it faster. Let the agent handle the implementation. Focus on higher-level tasks. Your value is not in the code you write anymore, it is in how well you direct the systems that write it for you.

For early adopters, this feels exciting. It feels like evolution. For a significant portion of working engineers, it feels like being told that the thing they spent years mastering, the skill that defines their professional identity, is suddenly less important.

One engineer captured this shift perfectly in a widely shared essay describing how AI transformed the engineering role from builder to reviewer: every day felt like being a judge on an assembly line that never stops, just stamping pull requests. The production volume went up. The sense of craftsmanship went down.

This is not a minor adjustment. It is a fundamental shift in professional identity. Engineers who built their careers around deep technical skill are being asked to redefine what they do and who they are, essentially overnight, without any transition period, training, or acknowledgment that something significant was lost in the process.

Having led engineering teams for over two decades, I have seen technology shifts before. New frameworks, new languages, new methodologies. Engineers adapt. They always have. But this is different because it is not asking engineers to learn a new way of doing what they do. It is asking them to stop doing the thing that made them engineers in the first place and become something else entirely.

That is not an upgrade. That is a career identity crisis. And pretending it is not happening does not make it go away.

While engineers are being asked to write less code, they are simultaneously being asked to do more of everything else.

More product thinking. More architectural decision-making. More code review. More context switching. More planning. More testing oversight. More deployment awareness. More risk assessment.

The scope of what it means to be a “software engineer” expanded dramatically in the last two years, and it happened without a pause to catch up.

This is partly a direct consequence of AI acceleration. When code gets produced faster, the bottleneck shifts. It moves from implementation to everything surrounding implementation: requirements clarity, architecture decisions, integration testing, deployment strategy, monitoring, and maintenance. These were always part of the engineering lifecycle, but they were distributed across roles. Product managers handled requirements. QA handled testing. DevOps handled deployment. Senior architects handled system design.

Now, with AI collapsing the implementation phase, organizations are quietly redistributing those responsibilities to the engineers themselves. The Harvard Business Review study documented this exact pattern. Product managers began writing code. Engineers took on product work. Researchers started doing engineering tasks. Roles that once had clear boundaries blurred as workers used AI to handle jobs that previously sat outside their remit.

The industry is openly talking about this as a positive development. Engineers should be “T-shaped” or “full-stack” in a broader sense. Nearly 45 percent of engineering roles now expect proficiency across multiple domains. AI tools augment generalists more effectively, making it easier for one person to handle multiple components of a system.

On paper, this sounds empowering. In practice, it means that a mid-level backend engineer is now expected to understand product strategy, review AI-generated frontend code they did not write, think about deployment infrastructure, consider security implications of code they cannot fully trace, and maintain a big-picture architectural awareness that used to be someone else’s job.

That is not empowerment. That is scope creep without a corresponding increase in compensation, authority, or time.

From my experience building and scaling teams in fintech and high-traffic platforms, I can tell you that role expansion without clear boundaries always leads to the same outcome: people try to do everything, nothing gets done with the depth it requires, and burnout follows. The engineers who survive are the ones who learn to say no, to prioritize ruthlessly, and to push back when the scope of their role quietly doubles without anyone acknowledging it.

There is an irony at the center of the AI-assisted engineering workflow that nobody wants to talk about: reviewing AI-generated code is often harder than writing the code yourself.

When you write code, you carry the context of every decision in your head. You know why you chose this data structure, why you handled this edge case, why you structured the module this way. The code is an expression of your thinking, and reviewing it later is straightforward because the reasoning is already stored in your memory.

When AI writes code, you inherit the output without the reasoning. You see the code, but you do not see the decisions. You do not know what tradeoffs were made, what assumptions were baked in, what edge cases were considered or ignored. You are reviewing someone else’s work, except that someone is not a colleague you can ask questions. It is a statistical model that produces plausible-looking code without any understanding of your system’s specific constraints.

A survey by Harness found that 67 percent of developers reported spending more time debugging AI-generated code, and 68 percent spent more time reviewing it than they did with human-written code. This is not a failure of the tools. It is a structural property of the workflow. Code review without shared context is inherently more demanding than reviewing code you participated in creating.

Yet the expectation from management is that AI should be making everything faster. So engineers find themselves in a bind: they are producing more code than ever, but the quality assurance burden has increased, the context-per-line-of-code has decreased, and the cognitive load of maintaining a system they only partially built is growing with every sprint.

This is the supervision paradox. The faster AI generates code, the more human attention is required to ensure that code actually works in the context of a real system with real users and real business constraints. The production bottleneck did not disappear. It moved from writing to understanding, and understanding is harder to speed up.

What makes all of this especially difficult is the self-reinforcing nature of the cycle.

AI makes certain tasks faster. Faster tasks create the perception of more available capacity. More perceived capacity leads to more work being assigned. More work leads to more AI reliance. More AI reliance leads to more code that needs review, more context that needs to be maintained, more systems that need to be understood, and more cognitive load on engineers who are already stretched thin.

The Harvard Business Review researchers described this as “workload creep.” Workers did not consciously decide to work harder. The expansion happened naturally, almost invisibly. Each individual step felt reasonable. In aggregate, it produced an unsustainable pace.

Before AI, there was a natural ceiling on how much you could produce in a day. That ceiling was set by thinking speed, typing speed, and the time it takes to look things up. It was frustrating sometimes, but it was also a governor. A natural speed limit that prevented you from outrunning your own ability to maintain quality.

AI removed the governor. Now the only limit is your cognitive endurance. And most people do not know their cognitive limits until they have already blown past them.

This is where many engineers find themselves right now. Shipping more code than any quarter in their career. Feeling more drained than any quarter in their career. The two facts are not unrelated.

The trap is that it looks like productivity from the outside. Metrics go up. Velocity charts look great. More features shipped. More pull requests merged. But underneath the numbers, quality is quietly eroding, technical debt is accumulating faster than it can be addressed, and the people doing the work are running on fumes.

If the picture is difficult for experienced engineers, it is even harder for those starting their careers.

Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.

AI is rapidly consuming that training ground. If an agent can handle the routine API hookup, the boilerplate module, the straightforward CRUD endpoint, what is left for a junior engineer to learn from? The expectation is shifting toward needing to contribute at a higher level almost from day one, without the gradual ramp-up that previous generations of engineers relied on.

Entry-level hiring at the 15 largest tech firms fell 25 percent from 2023 to 2024. The HackerRank 2025 Developer Skills Report confirmed that expectations are rising faster than productivity gains, and that early-career hiring remains sluggish compared to senior-level roles. Companies are prioritizing experienced talent, but the pipeline that produces experienced talent is being quietly dismantled.

This is a problem that extends beyond individual career concerns. If junior engineers do not get the opportunity to build foundational skills through hands-on work, the industry will eventually face a shortage of senior engineers who truly understand the systems they oversee. You cannot supervise what you never learned to build.

As I have written before, code is for humans to read. If the next generation of engineers never develops the fluency to read, understand, and reason about code at a deep level, no amount of AI tooling will compensate for that gap.

If you lead engineering teams, the most important thing you can do right now is acknowledge that this transition is genuinely difficult. Not theoretically. Not abstractly. For the actual people on your team.

The career they signed up for changed fast. The skills they were hired for are being repositioned. The expectations they are working under shifted without a clear announcement. Acknowledging this reality is not a sign of weakness. It is a prerequisite for maintaining a team that trusts you.

Start with empathy, but do not stop there.

Give your team real training. Not a lunch-and-learn about prompt engineering. Real investment in the skills that the new engineering landscape actually requires: system design, architectural thinking, product reasoning, security awareness, and the ability to critically evaluate code they did not write. These are not trivial skills. They take time to develop, and your team needs structured support to build them.

Give them space to experiment without the pressure of immediate productivity gains. The engineers who will thrive in this environment are the ones who have room to figure out how AI fits into their workflow without being penalized for the learning curve. Every experienced technologist I know who has successfully integrated AI tools went through an adjustment period where they were less productive before they became more productive. That adjustment period is normal, and it needs to be protected.

Set explicit boundaries around role scope. If you are asking engineers to take on product thinking, planning, and risk assessment in addition to their technical work, name it. Define it. Compensate for it. Do not let it happen silently and then wonder why your team is burned out.

Rethink your metrics. If your engineering success metrics are still centered on velocity, tickets closed, and lines of code, you are measuring the wrong things in an AI-assisted world. System stability, code quality, decision quality, customer outcomes, and team health are better indicators of whether your engineering organization is actually producing value or just producing volume.

Protect the junior pipeline. If you have stopped hiring junior engineers because AI can handle entry-level tasks, you are solving a short-term efficiency problem by creating a long-term talent crisis. The senior engineers you rely on today were junior engineers who learned by doing the work that AI is now consuming. That path still matters.

And finally, keep challenging your team. I have never met a good engineer who did not love a good challenge. The engineers on your team are not fragile. They are capable, intelligent people who signed up for hard problems. They can handle this transition. Just make sure they are set up to meet it.

If you are an engineer navigating this shift, here is what I would tell you based on two decades of watching technology cycles reshape this profession.

First, do not abandon your fundamentals. The pressure to become an “AI-first” engineer is real, but the engineers who will be most valuable in five years are the ones who deeply understand the systems they work on. AI is a tool. Understanding architecture, debugging complex systems, reasoning about performance and security: these skills are not becoming less important. They are becoming more important because someone needs to be the adult in the room when AI-generated code breaks in production at 2 AM.

Second, learn to set boundaries with the acceleration trap. Just because you can produce more does not mean you should. Sustainable pace matters. The engineers who burn out trying to match the theoretical maximum output AI makes possible are not the ones who build lasting careers. The ones who learn to work with AI deliberately, choosing when to use it and when to think independently, are the ones who will still be thriving in this profession a decade from now.

Third, embrace the parts of the expanded role that genuinely interest you. If the engineering role now includes more product thinking, more architectural decision-making, more cross-functional communication, treat that as an opportunity rather than an imposition. These are skills that senior engineers and technical leaders need. You are being given access to a broader set of capabilities earlier in your career than any previous generation of engineers. That is not a burden. It is a head start.

Fourth, talk about what you are experiencing. The isolation of feeling like you are the only one struggling with this transition is one of the most damaging aspects of the current moment. You are not the only one. The data confirms it. Two-thirds of engineers report burnout. The expectation gap between leadership and engineering teams is well documented. Talking openly about these challenges, with your team, with your manager, with your broader network, is not complaining. It is professional honesty.

And fifth, remember that this profession has survived every prediction of its demise. COBOL was supposed to eliminate programmers. Expert systems were supposed to replace them. Fourth-generation languages, CASE tools, visual programming, no-code platforms, outsourcing. Every decade brings a new technology that promises to make software engineers obsolete, and every decade the demand for skilled engineers grows. AI will not be different. The tools change. The fundamentals endure.

AI made writing code easier and made being an engineer harder. Both of these things are true at the same time, and pretending that only the first one matters is how organizations lose their best people.

The engineers who are struggling right now are not struggling because they are bad at their jobs. They are struggling because their jobs changed underneath them while the industry celebrated the part that got easier and ignored the parts that got harder.

Expectations rose without announcement. Roles expanded without boundaries. Output demands increased without corresponding increases in support, training, or acknowledgment. And the engineers who raised concerns were told, implicitly or explicitly, that they just needed to adapt faster.

That is not how you build a sustainable engineering culture. That is how you build a burnout machine.

The industry needs to name this paradox honestly. AI is an incredible tool. It is also placing enormous new demands on the people using it. Both things can be true. Both things need to be addressed.

The organizations that get this right, that invest in their people alongside their tools, that acknowledge the human cost of rapid technological change while still pushing forward, those are the organizations that will attract and retain the best engineering talent in the years ahead.

The ones that do not will discover something that every technology cycle eventually teaches: tools do not build products. People do. And people have limits that no amount of AI can automate away.

If this resonated with you, I would love to hear your perspective. What has changed most about your engineering role in the last year? Drop me a message or connect with me on Twitter (X) or LinkedIn. I write regularly about the intersection of AI, software engineering, and leadership at ivanturkovic.com. Follow along if you want honest, experience-driven perspectives on how technology is actually changing this profession.

If this post made you think, you'll probably like the next one. I write about what's actually changing in software engineering, not what LinkedIn wants you to believe. No spam, unsubscribe anytime.



Comments

  • By Spide_r 2026-03-01 14:25

    It's worth mentioning that this essay shows signs of being either partially AI generated or heavily edited through an LLM. Some of the telltale patterns are there ("It's not X, it's Y"), and the blog going from nearly zero activity between 2015 and 2025 to an explosion of posts and text output since then also raises an eyebrow.

    • By thinkingemote 2026-03-01 14:28

      It's now almost certain that every submission about LLMs will be written (or assisted) by LLMs.

      That this kind of writing puts a great number of us off is not important to many who seek their fortune in this industry.

      I hear the cry: "it's my own words the LLM just assisted me". Yes we have to write prompts.

      • By simonw 2026-03-01 15:57

        My current policy on this is that if text expresses opinions or has "I" pronouns attached to it then it's written by me. I don't let LLMs speak for me in this way.

        I'll let an LLM update code documentation or even write a README for my project but I'll edit that to ensure it doesn't express opinions or say things like "This is designed to help make code easier to maintain" - because that's an expression of a rationale that the LLM just made up.

        I use LLMs to proofread text I publish on my blog. I just shared my current prompt for that here: https://simonwillison.net/guides/agentic-engineering-pattern...

      • By WarmWash 2026-03-01 15:14

        I think it is very fair to say that, in the same way that LLMs have given English majors access to programming, LLMs have also given engineers access to clear communication.

        I'm not shy to admit that LLMs even from 2 years ago could communicate ideas much better than me, especially for a general audience.

        • By wibbily 2026-03-01 15:54

          It’s not “clear communication” though. The prose that comes out of LLMs is awful: long, vapid paragraphs with distracting tropes. You can ask them to be concise, but then they file down all the wrong bits of the sentence and lose meaning. There’s a reason people bother clocking it and complaining about it: it’s *bad*.

          It’s like everything else that AI can do - looks fine at a glance, or to the inexperienced, but collapses under scrutiny. (By your own admission you’re not a great communicator… how can you tell then?)

          • By WarmWash 2026-03-01 16:31

            >By your own admission you’re not a great communicator… how can you tell then?

            Thankfully we don't have to know how to write well to enjoy a well written book.

        • By troad 2026-03-01 15:57

          > LLMs have also given engineers access to clear communication.

          A lot of the time, the inability to express an idea clearly hints at some problem with the underlying idea, or in one's conceptualisation of that idea. Writing is a fantastic way to grapple with those issues, and iron out better and clearer iterations of ideas (or one's understanding thereof).

          An LLM, on the other hand, will happily spit out a coherent piece of writing defending any nonsense idea you throw at it. Nothing is learnt, nothing is gained from such "writing" (for either the author or the audience).

          • By bonoboTP 2026-03-01 17:41

            Recently read a tweet suggesting you ask an LLM to defend a position you know to be false. It's quite eye-opening. It shouldn't be, if you did debate club, or know how lawyers and politicians work. But it's quite revealing how it can piece together a good defense, selectively quoting real facts and imbuing them with undue weight, to make the thesis stand up quite well.

        • By bonoboTP 2026-03-01 17:35

          It often warps the message or "snaps it to grid", taking off the edge, the unique insight. A lack of clear communication is much more a symptom of being unclear about the intended message, audience, prioritization, etc. I don't doubt that you internally have a clear idea, but sharing it requires thinking about the intended audience and the diff between their current state of knowledge and doubt and where you want to move their thinking. This is a much bigger part than knowing eloquent vocab and grammar tricks.

          It doesn't come naturally to the more introverted type of person who cares about the object level problem and not whatever anyone else may know or doubt, I'll admit this. But slapping LLMs on it is not a great solution.

    • By rcvassallo83 2026-03-01 14:53

      As someone who has written a few deeply personal articles with LLM assistance, I see the signs, and I'm almost certain this was generated off a few bullet points. The repetition and cadence strongly resemble LLM output. It's the kind of fluff I remove from a piece, because it lacks humanity and offers little substance.

    • By SecretDreams 2026-03-01 14:30

      I feel like it's such a lack of self respect and respect for others when people write using AI on personal blogs.

      Reading AI code is very pleasant. It's well annotated and consistent - how I like to read code (although not how I write code LOL). Reading language/opinions is not meant to be this way. It becomes repetitive, boring, and feels super derivative. Why would you turn the main way we communicate with each other into a soulless, tedious, chore?

      I think with coding it's because I care about what the robot is doing. But with communication, I care about what the person is thinking in their mind, not the robot's interpretation of it. Even if the person's mind isn't as strong. At least then I can size the person up, which is the other reason understanding each other is important, and ruined when you put a robot in between.

      • By beej71 2026-03-01 16:18

        It's also because we (generally) consider a blog to be human communication and we consider math and programs to be something else.

        If you're talking to someone on the phone and halfway through they identify themselves as a bot, surprising you, there's a profound sense of something like betrayal. A moment ago you were having a human connection, and suddenly that vaporized. You were misled and were just talking to an unfeeling robot.

        And heartfelt writing is similar. We imagine the human at the other side of the screen and we relate. And when we discover it was a bot, no matter how accurate the sentiment, that relationship vanishes.

        But with math and software, it's already sterile from a human connection perspective. It's there for a different purpose. Yes, it can be beautiful, but when we read it we don't tend to build a human connection with the coder.

        An interesting exception is comments. When we read the fast inverse square root code and see the "what the fuck..." comment, we instantly relate to the person writing the software. If we later learned that comment was generated by an LLM, we'd lose that connection, again.

        IMHO. :)

        • By SecretDreams 2026-03-01 17:22

          Totally agree. I'll extend this to email and slacks, too. I cannot stand getting AI written slop from fellow co-workers because they couldn't write the message themselves. Do not even bother to engage with me if you need to put your thoughts through an AI first. It won't go well. People gotta work on themselves a lot more and I think they're using AI to do the opposite.

      • By lelanthran 2026-03-01 19:22

        > I feel like it's such a lack of self respect and respect for others when people write using AI on personal blogs.

        Not so sure about the respect aspect: I have lots of self-respect, but I don't generally broadcast respect for random other people when I write my blogs - the most recent one even called readers stupid, IIRC!

        I feel it's more a matter of expression of contempt: if you can't be bothered to write it, WTF are you expecting people to read it?

        • By SecretDreams 2026-03-01 22:44

          If you're writing an online blog with the intention that people are to read it, there should be some semblance of respect for their time and respect for your craft. IMO.

    • By altmanaltman 2026-03-01 14:29

      It is almost 90 percent AI-generated text. So many paragraphs to say basically nothing at all.

      Like look at this paragraph:

      > Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.

      The first sentence was enough to convey everything you needed to know, but it kept on adding words in that AI cadence. The entire post is filled with this style of writing, which, even if it is not AI, is extremely annoying to read.

      • By m00dy 2026-03-01 14:37

        What would he have written instead?

        • By altmanaltman 2026-03-01 14:46

          My point is that there's nothing to be written there "instead"; it's just unneeded text added to make the piece longer, typical of AI writing that parrots the same points over and over to pad the word count.

          Here's another example from the blog:

          > Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.

          > Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.

          can just be:

          > Most software engineers became engineers because they love writing code. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.

          Clarity is something that is taught in every writing class but AI generated text always seems to have this weird cadance as follows: The sound is loud. Not a whimper, not a roar, a simple sound that is very loud. And that's why... blah blah blah.

          You have to care about your readers if you're writing something seriously. Throwing a bunch of text that all means the same thing into your writing is one of the bigger sins you can commit, and that's why most people hate reading AI writing.

          • By wolletd 2026-03-01 15:15 (1 reply)

            I don't know...

            The part you'd like to remove ("Not managing code...") may not be required to convey the objective meaning of the sentence, but humans have emotions, too. I could have written stuff like that. To build up a bigger emotional picture.

            > The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended.

            This sentence may not be relevant for whatever you experience to be the relevant message of the text. But it still says something the remaining paragraph does not. And also something I can relate to.

            Also, as LLMs are statistical models, one has to assume that they write like this because their training data tells them to. Because humans write like this. Not when they do professional writing maybe, but when they just ramble. Not all blogs are written by professionals. I'd say most aren't. LLM training data consists mostly of humans rambling.

            I also sometimes write long comments on the internet. And while I have no example to check, I feel like I do write such sentences, expanding on details to express more emotional context. Because I'm not a robot and I like writing a lot. I think it's a perfectly human thing to do. I find it sad that "writing more than absolutely needed" is now regarded as a sign of AI writing.

            • By lelanthran 2026-03-01 19:19 (1 reply)

              > Because humans write like this. Not when they do professional writing maybe, but when they just ramble.

              I keep seeing this assertion and I keep responding "Please, point to the volume of writing with this specific cadence that has a date prior to 2024" and I keep getting... crickets!

              You're asserting that this is a common way for humans to write, correct? Should be pretty easy, then, to find a large volume of examples.

              • By wolletd 2026-03-01 21:50 (1 reply)

                Like I said: I think I write like this on some occasions.

                I wouldn't know how I would search for examples. I guess you'd have to search old reddit comment threads or something. But yeah, I have no motivation to do that, tbh. It could be that it's hard to find examples because they are scattered about in countless comment threads and single posts on countless platforms. Things I rarely keep links to, things nobody indexed on a large scale before LLMs.

                It may be that it wasn't a very popular style of writing, because most people don't like writing a lot and keep their texts on the internet short. LLMs exaggerate this style because they generate exaggerated amounts of text in general. The style wasn't particularly annoying in the past because it wasn't that popular. It's annoying now because LLMs flood the internet with it.

                The quoted example in particular didn't appear uncanny to me. And it still doesn't. I can see myself writing like that. I'm sorry I have no example for you. But I'm genuinely unsure whether I'm oblivious to the patterns others see, or whether others see patterns because they want to see them.

                • By lelanthran 2026-03-01 23:19

                  It doesn't have to be your writing you show - the assertion is that LLMs write this way because humans write this way.

                  My counter is that I don't find examples of this type of writing prior to 2024; not with the same frequency of "tells" per paragraph, at any rate.

          • By skydhash 2026-03-01 15:02 (2 replies)

            One of the good books about writing I read was William Zinsser's "On Writing Well". Striving for simplicity and avoiding clutter were the first two principles described in the book. AI writing feels more like rambling than communication.

            • By tayo42 2026-03-01 15:50

              When I've used AI for proofreading, the suggestions it makes are to cut a lot and shorten it. It also gives me examples, though never in my voice or style.

            • By nz 2026-03-01 15:33 (3 replies)

              Out of curiosity, how do you feel about florid and elaborate writing (e.g. Faulkner, Lispector, Mieville, Mossman, Joyce, Austen, etc)?

              • By addaon 2026-03-01 15:42

                I do not think Faulkner would write very good C++ library documentation.

                I would read the hell out of Joyce’s Perl 5 documentation, but only after six or seven beers.

              • By rsynnott 2026-03-01 18:20

                There's an art to it. Most human attempts, and every LLM attempt I've ever seen, are awful, sometimes bordering on unreadable, but, as you say, there are a relatively small number of authors who do it well. That doesn't mean that most people should do it.

              • By skydhash 2026-03-01 16:28

                I'm a French speaker and florid and elaborate writing is something I've grown up with. It can be difficult if you don't know the word or are not used to the style, but it's not boring. AI writing is just repetitive.

      • By polynomial 2026-03-01 16:11

        Classic LLM construction.

        5 sentence paragraph. First sentence is parataxis claim. Followed by 3 examples in sentence fragments, missing verbs, that familiar cadence. Then the final sentence, in this case also missing a verb.

        Pure AI slop.

    • By bonoboTP 2026-03-01 17:26 (1 reply)

      The comments as well. I won't give away the tells, but HN is less and less pleasant to read. Now is the time to cherish your pockets of small-scale, high-quality forums that aren't flooded by this stuff yet.

      • By alex_suzuki 2026-03-01 17:38 (1 reply)

        How do you find those pockets?

        • By bonoboTP 2026-03-01 17:53 (2 replies)

          I guess talking to people and making friends helps. Online, maybe seek out discords and befriend people and they may tell you. Not unlike how you find cool underground clubs.

          • By johnnyanmac 2026-03-02 2:51 (1 reply)

            Can't relate to either of those. I never made a friend online.

            • By bonoboTP 2026-03-02 3:14 (1 reply)

              You can also just kind of hop from blog to forum to subreddit to discord to social media account etc. passively over time based on what interesting people are talking about or mentioning. My main point is that you can't expect to find a list of good pockets of the internet with low effort, because if they are low effort to find, they will be flooded and made not good.

              • By johnnyanmac 2026-03-02 3:24

                I spent days' worth of research some years ago trying to find a reddit alternative (think pre-pandemic, well before consensus started to turn). I more or less found a few substitutes (which includes HN), but nothing that ever truly felt like "community".

                As you implied here, the sad fact is that a lot of the niche groups live within the mainstream. on a Discord server, or a subreddit, or Facebook group. And I've moved away from most of those. Probably need to move away from Discord in time too.

                Either way, still never made a friend that way.

          • By alex_suzuki 2026-03-01 18:49

            I do this but it mainly leads to a lot of 1-to-1 conversations, which is fine, but a wider but still “curated” audience would be interesting.

    • By marginalia_nu 2026-03-01 14:31 (1 reply)

      Even the title has that unmistakable smell of punchy LinkedIn profundity.

      • By nz 2026-03-01 14:56

        Even the linkedin profile has a studio-ghibli-style avatar. People are going to assume that he is just an "analog interface" to an LLM. Which is sad, because he might be a good programmer. In fact, I tend to see a lot of english-as-second-language people embrace LLMs as a kind of "equalizer", not realizing that in 2026 it is the opposite (not saying that it's right either way, just pointing out that it is becoming a kind of anti-marketing, like showing up to a conference without any clothing, and getting banned from the conference permanently).

        We should probably normalize publishing things in our native languages, and expecting the audience to run it through a translator. (I have been toying with the idea of writing everything in Esperanto (not my native language, but a favorite) and just posting links to auto-translated English versions where the translation is good enough).

        EDIT: as someone with friends and family from Eastern Europe, I can tell you that the prevailing attitude is: "everything is bullshit anyway" (which, to be fair, has a lot of truth to it), and so it is no surprise that people would enthusiastically embrace a pocket-sized bullshit factory, hook it up to a fire-hose, and start spraying. We saw it with spam, and we see it now with slop. It won't stop unless the system stops rewarding it.

    • By apt-apt-apt-apt 2026-03-01 17:25 (4 replies)

      Why is this sentiment expressed so often ("It was written/edited by AI")?

      It seems to bother people, perhaps since it may have been low-effort. Doesn't it not matter as long as the content is good? Otherwise, it seems to be no different than a standard low-quality post.

      • By layer8 2026-03-01 18:06

        The formulaic style/cadence/structure/tone is annoying, for one due to its LLM-induced prevalence, but also because it is padded and stretched without adding substance while being dyed in superficialities, and has a weird tendency of meandering through its thematic territory, like the author was slightly distracted or is writing the same thing for the 20th time, or is missing a good editor. Pre-LLM, it might have been an okay-ish, but not great, article. Now it’s just grating and makes you feel like you’re wasting your time reading it.

      • By lelanthran 2026-03-01 19:16

        > Doesn't it not matter as long as the content is good?

        "Why is everyone railing against my spam? Doesn't it not matter as long as the deal I am offering is good?"

        When people don't want the spam, it is irrelevant whether the spammer is offering a good deal or not.

      • By bonoboTP 2026-03-01 17:29

        When I want to read AI writing (which is not never), I chat with it myself, and I prompt it better and get more interesting stuff than this generic insight blogspam.

      • By rsynnott 2026-03-01 18:18

        LLM prose is typically _painful_ to read, overly long, and bullshit-heavy.

    • By RevEng 2026-03-01 16:28 (3 replies)

      LLMs write this way because people write this way. Maybe not everyone, but enough for it to train the models to do it. Much of my writing reads like an LLM wrote it, but that doesn't make me an LLM.

      • By timmytokyo 2026-03-01 18:37 (1 reply)

        Yes and no. LLMs take all the writing on the Internet (good and bad) and average it out. It's similar to the way generative AI images always have an identifiable, artificial "look". They've averaged out the personality and thereby erased the individuality that went into the efforts the original artists used to create them.

        • By RevEng 2026-03-04 5:35

          Yup, I'll agree with that. It certainly does end up with a generic tone that has become ubiquitous. It's like the Continental English accent that actors and news hosts are taught to use - they all sound the same because they have all been taught to sound the same.

      • By lelanthran 2026-03-01 19:14

        > Much of my writing reads like an LLM wrote it,

        I doubt it; share something you wrote prior to, say... 2024.

      • By krackers 2026-03-02 0:56

        No, this statement is not true for anything except a base model. Benchmaxxing during RL phase is how you get the advertisement style "punchy" writing, because even though people don't usually write that way it is eye catching and people will vote for the bullet-point emdash slop. I wonder if some lab will be bold enough to do "anti rlhf", lmarena score be damned.

    • By neogodless 2026-03-01 15:01 (1 reply)

      AI made writing words easier. It made communicating well harder.

    • By brobdingnagians 2026-03-01 14:59 (1 reply)

      AI made writing blog posts easier. It made critical thinking harder.

    • By jmcdl 2026-03-01 14:59

      This was my thought after getting through a few paragraphs as well. At first, I was thinking, this is interesting, maybe worth sharing with colleagues. But then it became too obvious it was AI written or "assisted". Can't take that seriously.

    • By 383toast 2026-03-01 17:55

      Yeah the article is 100% AI generated according to Pangram

    • By dom96 2026-03-01 15:26

      It's funny how seemingly easy it is to tell articles like this have that AI generated whiff to them. The first bit that raised my suspicion was the "The Identity Crisis Nobody Talks About" headline. This "The x nobody talks about" feels like such a GenAI thing.

      I hate it. I couldn't read much more after that.

    • By agentultra 2026-03-01 14:36 (1 reply)

      I couldn’t even finish it. I picked up on it after reading the other one that made it to the front page the other day.

      I don’t think there will be a point in coming to this site if it’s just going to be slop on the front page all the time.

      Maybe mods should consider a tag or flag for AI generated content submissions?

      • By lsc4719 2026-03-01 19:16

        AI writing should be labeled as such

    • By archagon 2026-03-02 1:40

      Why were flags manually removed? It's obvious that the community does not want to read AI slop.

    • By lezojeda 2026-03-01 15:45

      [dead]

    • By jordanekay 2026-03-01 15:48

      [dead]

  • By seethishat 2026-03-01 15:28 (2 replies)

    One problem I have seen IRL is AI deployment mistakes, and IMO Vibe Coders need an IT/Dev Father Figure type to avoid these simple mistakes. Here is one example:

    A surgeon (no coding experience) used Claude to write a web app to track certain things about procedures he had done. He deployed the app on a web hosting provider (PHP LAMP stack). He wanted to share it with other doctors, but wasn't sure if it was 'secure' or not. He asked me to read the code and visit the site and provide my opinion.

    The code was pretty reasonable. The DB schema was good. And it worked as expected. However, he routinely zipped up the entire project and placed the zip files in the web root and he had no index file. So anyone who navigated to the website saw the backups named Jan-2026.backup, etc. and could download them.

    The backups contained the entire DB, all the project secrets, DB connection strings, API credentials, AWS keys, etc.

    He had no idea what an 'index' file was and why that was important. Last I heard he was going to ask Claude how to secure it.
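
    [Ed.: the fix for this class of mistake is simple: keep backups out of the web root and disable directory listings. A minimal sketch, using illustrative /tmp paths rather than the surgeon's actual setup (a real LAMP host would use something like /var/www/html):]

```shell
# Illustrative paths only; substitute your real web root and backup location.
WEBROOT=/tmp/demo-webroot
BACKUPS=/tmp/demo-backups

mkdir -p "$WEBROOT" "$BACKUPS"
touch "$WEBROOT/Jan-2026.backup"   # simulate the exposed backup file

# 1. Move backups out of the web root so they are never URL-addressable.
mv "$WEBROOT"/*.backup "$BACKUPS"/

# 2. Disable Apache directory listings, so a missing index file
#    no longer turns the web root into a public file browser.
echo "Options -Indexes" > "$WEBROOT/.htaccess"
```

    [This assumes Apache with AllowOverride enabled for .htaccess files; on other servers the equivalent is e.g. nginx's default behavior with `autoindex off`.]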

    • By dana321 2026-03-01 15:45

      Claude is crazy good at coding, but it won't hold your hand when it comes to the unknown unknowns that a regular joe like this doesn't know about.

    • By i_think_so 2026-03-02 6:58

      We're probably not far away from the fully automated (no human interaction whatsoever needed) exploitation of such vulnerabilities by state level actors. At that point give it another few months and script kiddies will be doing it at significant scale.

      When this happens one of them is going to use their new toy to swat somebody, resulting in serious injury or death.

      How is that risk/responsibility debate going to go?

  • By oytis 2026-03-01 14:25 (4 replies)

    > you are not imagining things. The job changed. The expectations changed. And nobody sent a memo.

    Looks like something AI would say. Regardless of how it really was written

    • By butILoveLife 2026-03-01 14:57 (1 reply)

      It's really long-winded. The entire thing could have been a couple of bullet points.

      Admittedly it was so long and basic, I stopped halfway.

      • By alex_suzuki 2026-03-01 17:52

        > The entire thing could have been a couple bullet points

        It probably was

    • By rhubarbtree 2026-03-01 14:56 (4 replies)

      Why is AI such a bad writer? Phrasing like this feels like reading Fox News.

      • By gf263 2026-03-01 15:18 (1 reply)

        I saw someone point out something like: AI makes every sentence count. There's no building, no allowing a point to breathe. Every sentence is an axiom to get the meaning across, and it's so grating.

        • By oytis 2026-03-01 15:59 (2 replies)

          It's an interesting way to view it, because what happens in fact is likely the opposite - AI is asked to expand a few bullet points into a blog post

          • By bcooke 2026-03-01 16:13 (1 reply)

            Maybe that's why the writing feels so terrible. The AI is attempting to maximize every sentence while simultaneously expanding on just a few actually meaningful points. And the net result of that dissonance is this rage-inducing vapidity. It's the written equivalent of the Uncanny Valley.

            • By oytis 2026-03-01 16:42

              I think it has gotten past the uncanny valley, really - it does read like a human, just a very attention-seeking one, like your typical LinkedIn salesman.

              That's probably just default settings though - I asked it to rewrite, and most of the tell-tale signs are gone as far as I can see (apart from the em-dash):

              https://chatgpt.com/s/t_69a46b290fb08191ad3bd93066b8cad4

          • By whstl 2026-03-01 16:06

            Making fluff sound grandiose is probably what makes it so grating.

      • By lelanthran 2026-03-01 19:26

        > Why is AI such a bad writer?

        A better question is "Why can't the devs producing code with AI spot the same poor patterns in the code they are generating?"

        Maybe my point is that, to a poor speaker of English, the AI blogpost looks good and reads well. In much the same way, to a poor programmer, the AI produced code looks good and reads well.

        In a nutshell, if it generates poor English, WTF would anyone think it generates anything but poor code?

      • By oytis 2026-03-01 15:30

        To be honest it still feels crazy that AI is a writer at all. But yeah, not a good one

      • By cindyllm 2026-03-01 15:00

        [dead]

    • By 383toast 2026-03-01 17:56

      Yep the article is 100% AI generated according to Pangram

    • By rcvassallo83 2026-03-01 14:34

      Article definitely has an AI writing style

HackerNews