Your Job Isn't Disappearing. It's Shrinking Around You in Real Time

2026-02-04 11:36 · newsletter.jantegze.com

AI isn't taking your job. It's making your expertise worthless while you watch. The three things everyone tries that fail, and the one strategy that actually works.

You open your laptop Monday morning with a question you can’t shake: Will I still have a job that matters in two years?

Not whether you’ll be employed. Whether the work you do will still mean something.

Last week, you spent three hours writing a campaign brief. You saw a colleague generate something 80% as good in four minutes using an AI agent. Maybe 90% as good if you’re being honest.

You still have your job. But you can feel it shrinking around you.

The problem isn’t that the robots are coming. It’s that you don’t know what you’re supposed to be good at anymore. That Excel expertise you built over five years? Automated. Your ability to research competitors and synthesize findings? There’s an agent for that. Your skill at writing clear project updates? Gone.

You’re losing your professional identity faster than you can rebuild it. And nobody’s telling you what comes next.

When you feel your value eroding, you do what seems rational. You adapt. You learn. You try to stay relevant.

Here’s what that looks like for most people:

First, you learn to use the AI tools better. You take courses on prompt engineering. You master ChatGPT, Claude, whatever new platform launches next week. You become the “AI person” on your team. You think: if I can’t beat them, I’ll use them better than anyone else.

This fails because you’re still competing on execution speed. You’re just a faster horse. And execution is exactly what’s being commoditized. Six months from now, the tools will be easier to use. Your “expertise” in prompting becomes worthless the moment the interface improves. You’ve learned to use the shovel better, but the backhoe is coming anyway.

Second, you double down on your existing expertise. The accountant learns more advanced tax code. The designer masters more software. The analyst builds more complex models. You think: I’ll go so deep they can’t replace me.

This fails because depth in a disappearing domain is a trap. You’re building a fortress in a flood zone. Agents aren’t just matching human expertise at the median level anymore. They’re rapidly approaching expert-level performance in narrow domains. Your specialized knowledge becomes a liability because you’ve invested everything in something that’s actively being automated. You’re becoming the world’s best telegraph operator in 1995.

Third, you try to “stay human” through soft skills. You lean into creativity, empathy, relationship building. You go to workshops on emotional intelligence. You focus on being irreplaceably human. You think: they can’t automate what makes us human.

This fails because it’s too vague to be actionable. What does “be creative” actually mean when an AI can generate 100 ideas in 10 seconds? How do you monetize empathy when your job is to produce reports? The advice feels right but provides no compass. You end up doing the same tasks you always did, just with more anxiety and a vaguer sense of purpose.

The real problem with all three approaches: they’re reactions, not redesigns. You’re trying to adapt your old role to a new reality. What actually works is building an entirely new role that didn’t exist before.

But nobody’s teaching you what that looks like.

[Image: Three people on treadmills, representing failed AI adaptation strategies going nowhere]

This isn’t happening to you because you’re failing to adapt. It’s happening because the economic incentive structure is perfectly designed to create this problem.

Here’s the mechanism: Companies profit immediately from adopting AI agents. Every task automated results in cost reduction. The CFO sees the spreadsheet: one AI subscription replaces 40% of a mid-level employee’s work. The math is simple. The decision is obvious.

Many people hate to hear that. But if they owned the company or sat in leadership, they’d do the exact same thing. Companies exist to drive profit, just as employees work to drive higher salaries. That’s how the system has worked for centuries.

But companies don’t profit from retraining you for a higher-order role that doesn’t exist yet.

Why? Because that new role is undefined, unmeasured, and uncertain. You can’t put “figure out what humans should do now” on a quarterly earnings call. You can’t show ROI on “redesign work itself.” Short-term incentives win. Long-term strategy loses.

Nobody invests in the 12-24 month process of discovering what your new role should be because there’s no immediate return on that investment.

We’re in a speed mismatch. Agent capabilities are compounding at 6-12 month cycles. Human adaptation through traditional systems operates on 2-5 year cycles.

Universities can’t redesign curricula fast enough. They’re teaching skills that will be automated before students graduate. Companies can’t retrain fast enough. By the time they identify the new skills needed and build a program, the landscape has shifted again. You can’t pivot fast enough. Career transitions take time. Mortgages don’t wait.

Here’s the deeper issue: we’ve never had to do this before.

Previous automation waves happened in manufacturing. You could see the factory floor. You could watch jobs disappear and new ones emerge. There was geographic and temporal separation.

This is different. Knowledge work is being automated while you’re still at your desk. The old role and new role exist simultaneously in the same person, the same company, the same moment.

And nobody has an economic incentive to solve it. Companies maximize value through cost reduction, not workforce transformation. Educational institutions are too slow and too far removed from real-time market needs. Governments don’t understand the problem yet. You’re too busy trying to keep your current job to redesign your future one.

The system isn’t helping because it isn’t designed for continuous, rapid role evolution; it is designed for stability.

We’re using industrial-era institutions to solve an exponential-era problem. That’s why you feel stuck.

[Image: Two clocks showing the speed mismatch between human adaptation and AI advancement]

Let me tell you a story of my friend, let’s call her Sarah. She was a senior research analyst at a mid-sized consulting firm. Ten years of experience. Her job: client companies would ask questions like “What’s our competitor doing in the Asian market?” and she’d spend 2-3 weeks gathering data, reading reports, interviewing experts, synthesizing findings, creating presentations.

She was good. Clients loved her work. She billed at $250 an hour.

The firm deployed an AI research agent in Q2 2023. Not to replace Sarah. To “augment” her. Management said all the right things about human-AI collaboration.

The agent could do Sarah’s initial research in 90 minutes. It would scan thousands of sources, identify patterns, generate a first-draft report.

Month one: Sarah was relieved. She thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready.

Month three: A partner asked her, “Why does this take you a week now? The AI gives us 80% of what we need in an hour. What’s the other 20% worth?”

Sarah couldn’t answer clearly. Because sometimes the agent’s output only needed light editing. Sometimes her “strategic insights” were things the agent had already identified, just worded differently.

Month six: The firm restructured. They didn’t fire Sarah. They changed her role to “Quality Reviewer.” She now oversaw the AI’s output for 6-8 projects simultaneously instead of owning 2-3 end to end.

Her title stayed the same. Her billing rate dropped to $150 an hour. Her ten years of experience felt worthless.

Sarah tried everything. She took an AI prompt engineering course. She tried to go deeper into specialized research methodologies. She emphasized her client relationships. None of it mattered because the firm had already made the economic calculation.

One AI subscription: $50 a month. Sarah’s salary: $140K a year. The agent didn’t need to be perfect. It just needed to be 70% as good at 5% of the cost.

The part that illustrates the systemic problem: AI vendors keep telling you that, thanks to their tools, people can focus on higher-value work. But press them on what that means specifically, and they go vague. Strategic thinking. Client relationships. Creative problem solving.

Nobody could define what higher-value work actually looked like in practice. Nobody could describe the new role. So they defaulted to the only thing they could measure: cost reduction.

Sarah left six months later. The firm hired two junior analysts at $65K each to do what she did. With the AI, they’re 85% as effective as Sarah was.

Sarah’s still trying to figure out what she’s supposed to be good at. Last anyone heard, she’s thinking about leaving the industry entirely.

[Image: Balance scale showing ten years of experience outweighed by a fifty-dollar AI subscription]

The people who are winning aren’t trying to be better at their current job. They’re building new jobs that combine human judgment with agent capability.

Not becoming prompt engineers. Not becoming AI experts. Becoming orchestrators who use agents to do what was previously impossible at their level.

Marcus was a marketing strategist at a retail company. When AI tools emerged, he didn’t try to write better marketing copy than the AI. He started running 50 campaign variations simultaneously. Something that would’ve required a team of 12 people before.

He’d use agents to generate the variations, test them, analyze results, iterate. His role became: design the testing framework, interpret patterns the agents found, make strategic bets based on data no human could process manually.

Within six months, his campaigns were outperforming competitors by 40%. Not because he was better at any single task. Because he could operate at a scale that was previously impossible.

Here’s the pattern that works:

Find the constraint in your domain that exists because of human limitations. What doesn’t get done because it takes too long? What questions don’t get asked because analysis is too expensive? What experiments don’t get run because you’d need a team of 20?

Then use agents to remove that constraint. Not to do your current tasks faster. To do things that were previously impossible.

Then build expertise in the judgment layer. What experiments should we run? Which patterns matter? What do these results mean for strategy? When should we override the agent’s recommendation?

This isn’t vague strategic thinking. It’s specific: you’re the decision maker orchestrating a capability that didn’t exist before.

You’re not competing with the agent. You’re creating a new capability that requires both you and the agent. You’re not defensible because you’re better at the task. You’re defensible because you’ve built something that only exists with you orchestrating it.

The hard truth: this requires letting go of your identity as “the person who does X.” Marcus doesn’t write copy anymore. That bothered him at first. He liked writing. But he likes being valuable more.

Here’s what you can do this month:

Week one: Identify one thing in your job that you’d do 10x more if it didn’t take so long. Customer research? Competitive analysis? Testing variations? Data modeling?

Week two: Use AI agents to do that thing at 10x volume, even if quality drops to 70%. See what becomes possible.

Week three: Find the patterns. What insights emerge at scale that you’d never see doing it manually? What new questions can you answer?

Week four: Pitch this as a new capability to your boss. Not “I’m more efficient now.” But “We can now do this specific thing we couldn’t do before, which creates this specific business value.”

People who do this aren’t getting squeezed. They’re getting promoted or poached. Because they’ve made themselves the linchpin of a new capability, not the executor of an old task.

One critical caveat: this won’t work forever in its current form. Eventually, agents will get better at orchestration too. But it buys you three to five years. And in that time, you’ll see the next evolution coming.

The meta-skill is this: learning to spot what becomes possible when a constraint disappears, then building your value around that new possibility.

[Image: Person orchestrating fifty simultaneous campaign experiments using AI agents for scale]

Most people currently doing “strategic” knowledge work aren’t actually that strategic.

When agents started handling the execution layer, everyone assumed humans would naturally move up to higher-order thinking. Strategy. Judgment. Vision.

But a different reality is emerging: many senior people with years of experience can’t actually operate at that level. Their expertise was mostly pattern matching and process execution dressed up in strategic language.

The thing nobody says out loud: “We thought Lisa was a strategic thinker because her analyses were thorough. Turns out the thoroughness was the skill. When an agent can be thorough in three minutes, we’re discovering Lisa doesn’t actually have strategic insights to add.”

This isn’t that these people are bad at their jobs. They were excellent at their jobs. The job required diligence, attention to detail, process mastery. They delivered exactly what was asked.

But the industry sold them on the idea that experience equals strategic capability. That putting in the hours would naturally develop judgment. For some people, it did. For many others, they got really good at execution and called it strategy.

Here is what one CEO of a mid-sized company in Canada told me: “We’re discovering that our senior people and our junior people are equally lost when we ask them what we should do, not just how to do it. The seniors are just more articulate about their uncertainty.”

The agent economy isn’t just automating tasks. It’s revealing who was coasting on the appearance of strategic thinking versus who actually possesses it.

And there’s no gentle way to tell someone: you’ve spent 15 years building a career, and we’re just now realizing the thing you were good at wasn’t what we actually needed.

Nobody says this publicly because it suggests the problem isn’t just technological adaptation. It’s that our evaluation systems were broken all along. We promoted people for the wrong reasons. We confused “does the work well” with “thinks strategically about the work.”

Admitting that means admitting we don’t actually know how to identify or develop real strategic capability. We’ve been guessing. Using credentials and years of experience as proxies.

[Image: Person examining detailed work while their shadow reveals the absence of strategic thinking]

You’re not going to solve this by being better at your current job. That job is dissolving under you in real time.

You’re not going to solve it by learning the tools better. The tools will get easier to use without you.

You’re not going to solve it by going deeper into your specialty. That specialty is being automated.

What works: become the person who spots what just became possible and builds your value around that new capability. Use agents to remove constraints that previously limited what you could do. Become the orchestrator of scale that didn’t exist before.

This isn’t a permanent solution. In three to five years, you’ll need to do it again. The meta-skill is learning to continuously spot the next evolution and position yourself at the edge of what’s newly possible.

The uncomfortable truth: this will separate people who were genuinely strategic from people who were just thorough. There’s no way around that. The system that rewarded thoroughness is breaking down. The new system rewards the ability to see what constraints just disappeared and build something new in that space.

You still have time. But not much. The speed mismatch between agent capability and human adaptation is real. The companies won’t save you because they’re optimized for short-term cost reduction, not long-term workforce transformation. The educational system won’t save you because it’s too slow.

You have to save yourself. And the way you do that is to stop defending your current role and start building the role that didn’t exist six months ago.

Monday morning will keep coming. The question is whether you’re still wondering what you’re supposed to be good at, or whether you’ve already built the answer.

Good ideas spread through people, not algorithms. Share if you know someone who’d enjoy this.


Comments

  • By prng2021, 2026-02-04 13:24 (3 replies)

    The author gives this example of the problem and incorrect way to leverage AI:

    "Sarah was relieved. She thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready."

    Then they propose a long winded solution which is essentially the same exact thing but uses the magical term "orchestrate" a few times to make it sound different.

    • By TaupeRanger, 2026-02-04 13:42 (1 reply)

      Well, the article was written by AI, so I wouldn't expect it to make valid arguments through a long article like this.

      • By karmakurtisaani, 2026-02-04 20:32

        Already the headline is classic AI shitty writing. This isn't just x, it's basically the same thing y.

    • By dccoolgai, 2026-02-04 13:31 (2 replies)

      In fairness to the author, I think their point was that you take _several_ agents (not just one) and find a way to have them work like a team of 20 people. In the example, Sarah is trying to do the same job she did before, just marginally better.

      • By prng2021, 2026-02-04 13:47 (1 reply)

        Yea I guess that's accurate but they also explained that AI capabilities advance every 6-12 months and managing a team of agents buys you a few years. So their proposed solution and conclusion that it keeps you safe for years makes no sense right now. Multi agent orchestration, with an agent doing the orchestrating, is all the craze nowadays.

        • By dccoolgai, 2026-02-04 14:10

          They made half the point, in my opinion - that you should be "doing the thing that wasn't possible before" but missed the other half - that maybe the thing you should be doing is owning and creating relationships with customers yourself instead of doing it through a company... Which maybe wasn't possible before but is now.

      • By FrustratedMonky, 2026-02-04 13:50 (1 reply)

        I agree. But the article then seems to suggest, 'you be the one left standing to orchestrate'. It didn't offer much of a suggestion about the other 20 people that would be gone.

        It seemed to come down to the old 'just work better, faster, cheaper', but that is dialed up to 11 now.

        • By dccoolgai, 2026-02-04 14:16

          I read it more as "look for the thing that was _never done_ because no one was going to hire 20 people to do it" and all the examples were pointing out how you _should not_ try to "better, faster, cheaper" AI because you will lose quickly on all those dimensions.

          I realize the irony, of course, that this article is AI-generated but it provoked something close to an epiphany for me even so.

    • By veggieroll, 2026-02-04 13:36

      > add strategic insights

      This claim has always been BS in my experience.

  • By penetrarthur, 2026-02-04 13:13 (8 replies)

    God damn it. Can people write interesting articles in NORMAL writing style nowadays? Why is everyone writing in these stupid short "punchline" sentences?

    • By coffeefirst, 2026-02-04 13:28

      Seriously. This is trash. It presents no evidence, contains no original ideas, it’s just written—excuse me, generated—to be as provocative as possible.

      I think I’ll just start flagging these. They’re just a new kind of spam.

    • By bux93, 2026-02-04 13:19 (3 replies)

      It's exhausting to read.

      I can't quite put my finger on it; obviously the "it's not this. It's that." is part of it, but even without the obvious tells that writing was AI-generated/improved, it's just so tiring to read?

      Maybe a linguist can chime in why all these texts are so samey, cloying and annoying to read? Is it (just) the pacing?

      • By nemomarx, 2026-02-04 13:25

        I wonder if part of it is that we're mentally trying to get the actual meaning and thoughts out of it. It's inflated, like trying to read a bad student's essay that's struggling for word count. I wish people would just post their prompts directly.

      • By Retr0id, 2026-02-04 13:34 (1 reply)

        It reminds me of the "overenthusiastic youtuber" presentation style, with jump cuts etc., just in written form. From its prevalence I can only assume that some audiences prefer it - I'd be more interested to know why that is.

      • By ryandrake, 2026-02-04 18:14

        It's like a LinkedIn post, stretched to 3000 words.

    • By kykat, 2026-02-04 13:14

      Because it's not people doing the writing.

    • By dude250711, 2026-02-04 13:14 (1 reply)

      That's ChatGPT for you:

      > The ... isn’t just ... . It’s ... .

      • By gchamonlive, 2026-02-04 13:21 (1 reply)

        This is just myth and faith. Even if all AI wrote like that, it doesn't follow that all writing in that style is by AI, hence the belief in the style. Focus less on the aesthetics, more on the message. After all, for this article to have been written in the form of a sonnet was just a prompt away.

        • By dude250711, 2026-02-04 14:00

          These people do not put in any effort and go for defaults. I see this with images too.

    • By rawgabbit, 2026-02-04 18:15

      It reads more like a transcript for a podcast. Somewhere along the way, we no longer favor illustrative anecdotes, logical arguments, or dialectical arguments. Everything is a podcast now.

    • By gchamonlive, 2026-02-04 13:30 (3 replies)

      All of a sudden everybody is a writing style critic. The only question that is pertinent is if the message of the post is relevant.

      Think about it. You wouldn't give someone crap for writing in broken English because there are many really smart people that are non English speakers. So why are we giving crap for people using AI to write better posts? If the idea is relevant, what's the point in criticizing the style?

      A fair question would be "is the idea in the post actually the writer's or was it entirely done by AI"? However how can one actually tell if the idea, not the style is original? You can't. So it's pointless to be angry about style. Focus on the message.

      • By coffeefirst, 2026-02-04 13:42 (1 reply)

        No. I have a lot of respect for people who write in a second language. You can often tell because the content is thoughtful and some of the word choices or grammar is quirky.

        This is a bozo who prompted the machine for a viral essay. He did not write anything. He does not know anything.

        • By gchamonlive, 2026-02-04 13:48 (1 reply)

          Do you know this for a fact? How? Honest question.

          • By coffeefirst, 2026-02-04 14:04 (1 reply)

            If I’m wrong I’ll eat my hat.

            But I’ve spent enough time with these tools and coaching people on writing over the years to recognize the extremely low signal to noise ratio and prompted style instructions. I’m equally confident the gentleman in my spam folder is not a Nigerian prince.

      • By Izkata, 2026-02-04 16:33 (1 reply)

        > So why are we giving crap for people using AI to write better posts?

        Pretty sure the point is it's making the post worse, not better.

        • By gchamonlive, 2026-02-04 18:16

          Not talking about this post anymore, because it being flagged is more reason it's just click bait.

          But in all honesty, for some it's extremely hard to write concisely in English. Maybe what you call worse is actually better in comparison to what the person could write by himself, insights notwithstanding.

      • By fwip, 2026-02-04 15:47

        "is the idea in the post actually the writer's or was it entirely done by AI"

        Look. If you don't want your readers to worry that your hard-written article is AI slop - just don't run it through the slopifier. Or at the very least, spend 5 minutes tweaking the output.

        If you can't be assed to do that, then it's very likely that you don't have valuable insights to share.

    • By gedy, 2026-02-04 13:15

      Because they aren't writing, it's vibe blogging or whatever

    • By subpixel, 2026-02-04 13:15

      Because it’s a lead generation machine

  • By lelandfe, 2026-02-04 13:13 (1 reply)

    > You saw a colleague generate something 80% as good in four minutes using an AI agent. Maybe 90% as good if you’re being honest.

    Wish this were realistic - I'd have enjoyed the read more.

    • By coffeefirst, 2026-02-04 13:32

      It’s possible the author is such a bad writer that this is really 90% as good as they get.

      Actually that’s probably the only way anyone would publish this without being embarrassed.

Comments via Hacker News