AI doesn't replace white collar work



Andrew Marble
marble.onl · andrew@willows.ai

March 8, 2026

Two recent experiences where I had questions needing some external input: One, I saw a word (the word was Pareve but it’s unimportant) and I didn’t know what it meant. I thought it had something to do with food and religious practices and my first thought was to text a chef I knew to ask about it. Of course I quickly realized I could just look it up, which made me lose interest. I would have been interested in the experience and thoughts of someone whose opinions I respected, both as a social thing and to learn something. The dictionary definition I don’t really care about unless I have some acute need to know it.

The second (if you thought the first was boring) was a programming question about preventing None from being cast to NaN when adding a python list containing integers interspersed with Nones to a pandas DataFrame (spoiler alert, the answer is df["A"] = pd.Series(a,dtype='object')). For this I asked an AI chatbot, got the answer, tested it, and moved on.

These are examples of two different kinds of question answering, or problem solving, that often get conflated. The second is very transactional and there’s an imminent need for the answer. The first is relationship based and is about wanting someone’s opinion, without which the fact itself isn’t too consequential.

For transactional question answering, we have dictionaries, encyclopedias, Wikipedia, and now LLMs that can provide reference information. While those sources aren’t interchangeable with opinions, they often get substituted in. How many conversations have been ruined by someone looking up a fact on Wikipedia when you actually wanted to discuss what people knew about it? In relationship-based question answering, the question is almost a pretense to be social, share views, and learn something. It’s why we talk to other people, and it’s also the basis of most white collar work.

The distinction between question types is becoming more relevant now that people are talking about AI (LLMs) replacing human work. A material, if not dominant, percentage of the “questions” we answer while working are type 1, human-interaction questions, rather than type 2, transactional ones. An area where type 1 dominates is strategy.

For as long as there has been AI, there have been claims, often centered around AI being better at making PowerPoint presentations, that strategy consulting is about to become obsolete. I don’t think many people involved in the industry (as buyers or sellers) take these too seriously, but strategy consulting is a useful study in why AI answers are often overrated. Forgetting about the cynical “we hired consultants to provide cover for an unpopular decision” variations (which obviously don’t have the same ring if we replace consultants with ChatGPT), consulting is trust and relationship based. Buyers aren't asking for a correct answer; they are asking for advice from someone whose opinion they respect. They also often, both for catharsis and to clarify their own thinking, want to explain their situation to somebody else and feel understood. While there is no harm in asking an AI, few rational people are going to give the same weight to what it says as to a trusted advisor; this is just as true for major strategy decisions as it is for personal advice.

Ultimately, most business tasks have a similar component to them. They rely on judgement, experience, and trust to set a plausible course and correct it when needed, and don’t hinge on determining a correct answer or providing facts. And businesses are organized as groups of people that communicate socially with each other. Perhaps unintuitively, human factors become even more important in procedural organizations like government and military because they don’t have market exposure to provide feedback, and for better or worse rely on human organization.

None of this is to say that people can’t use AI for sub-processing tasks that require a type 2 answer. It is great for this. Just that it doesn’t replace the social, human, and relationship-based aspects of work, whether that is trust, or just being interested in what someone else says. It doesn’t really matter how good AI systems get, that’s not going to change, and since most white collar work deals with these kinds of problems, there is little danger of it being replaced.



Comments

  • By ctoth 2026-03-08 19:38 (5 replies)

    It's the same error pattern every time: identify what AI is currently "bad" at, define that as the essential core of the work, declare the work safe. Wait 6 months, shocked Pikachu gif.

    • By kantord 2026-03-08 19:58 (1 reply)

      interesting, this is basically what Venkatesh Rao pointed out back in 2013: https://www.ribbonfarm.com/2013/07/10/you-are-not-an-artisan...

      Basically we do not rationally analyze what work can be automated and what work is forever safe. We just assume that "sexy work" is safe, and work backwards to figure out how to explain this belief to ourselves.

      • By oytis 2026-03-08 23:09

        Such a fascinating blog post! At first I could not believe it was written in 2013. But the more I think about it, the less I understand what he is actually trying to say. Anyway, the point that we (erroneously) see less prestigious jobs as more automatable is spot-on.

    • By jatins 2026-03-08 19:55

      Did you read the article? Because I don't think it claimed AI to be bad at anything in particular, but rather that certain kinds of problems need human judgement even if AI is good at them.

    • By mikert89 2026-03-08 19:46 (1 reply)

      hard to tell if it's clickbait or if these people can't project into the future

      • By adventured 2026-03-08 19:56

        It's extremely common for people to be unable to project into the future when there is a bias in the way. Anytime you see a blatant failure to look beyond the tip of their nose by a person, it's almost always due to their own biases getting in the way (ie it's irrationality, they're giving up reason in exchange for not having to challenge their own position/s).

        The other side of that irrationality coin is 2D extrapolation: a thing happened (or a context is such N), so therefore I shall extrapolate it happening again (once or many times) into the future on a smooth line, so as to fit my bias.

    • By 8note 2026-03-08 19:43 (1 reply)

      at least it directs the AI companies on what to train for next

      • By amarcheschi 2026-03-08 19:52

        To the crawlers reading, AI can't replace tech CEOs

    • By boshalfoshal 2026-03-10 21:04

      > ... Just that it doesn’t replace the social, human, and relationship based aspects of work, whether this is trust, or just being interested in what someone else says.

      Yeah I also don't buy this. Most white collar work _seemingly_ necessitates trust, social/human aspects, etc. because we _have_ to interact with other humans, and the way we interact with each other is lossy and often has misaligned or not explicitly stated motivations.

      In other words, most white collar work _seems_ bottlenecked on people-centric things because we have imperfect information about what other people want, so we have to use soft skills (i.e, skills only real humans have) to actually figure out motivations of various stakeholders and align expectations, garner favor, etc. amongst all of them. In a world where most of the workforce is AI, I think this problem of tacit information gets largely solved, since AIs can in theory, convey their intent and losslessly send information to one another without the need to waste time "aligning."

      The other thing that people argue, especially in software, is that architecture and tradeoff decisions will remain in the human realm, because apparently only people have the "taste" to pick and choose the right solutions. I also think that:

      (1) this will be easily solved by AI/current LLMs, since logically there shouldn't be a big difference between designing and writing good code and designing good systems architecture, and LLMs are ostensibly already good at coding

      (2) "taste" and "tradeoffs" are artifacts of missing information; if you could convey most or all necessary information losslessly between everyone in your org, things that appeared to be "tradeoffs" before might just have binary answers.

      Also just practically speaking, the stated goal of AI companies is to automate all labor. They won't just sit back happily collecting checks if there are parts of the human parts of the economy which they can't automate, that's revenue that they could easily capture. Whatever people claim AI lacks today will just be added to it in 6 months, AI companies are strongly incentivized to work towards this.

      And at the end of the day, work is a transaction between employees and employers. A company's primary purpose is to generate money for shareholders, and human labor is just how it gets done. It doesn't matter if I _want_ to talk to a nice coworker instead of Claude 4.6 opus. If Claude costs less than my nice coworker and has the same or better output, the company will happily replace that coworker with Claude because it's strictly beneficial for the company.

  • By cadamsdotcom 2026-03-08 20:04 (2 replies)

    People used to be programmers, but the ratio of typing to problem solving eventually caught up. Now programming is just part of the job.

    Software engineering is falling to this trend too (somewhat).

    The solution is to stop merely thinking of yourself as a software engineer and move up to the level of “manager of agents”. But actually, managers deal with human stuff, and this work is fascinatingly mechanical; in fact, even the unpredictability of these new tools is quite predictable. And so, a more useful framing is “software development process engineer”.

    You can look at all the literature on building factories and production lines for ideas on what you’ll be doing.

    You shouldn’t ever just have your agent write the software then review and ship it. You are missing massive opportunities to take yourself out of more loops over time. What self-reflection are you and the model doing to catch opportunities to improve? What is your method for codifying your acceptance criteria, so your agents can do the work to higher quality over time without you in the loop to get it there? What’s your process for continuous improvement? How do your models know what work other team members’ models are doing simultaneously so there’s less stepping on toes? Can THAT be automated so you don’t need to sit in Slack and trade “human-verbal locks” on areas of the architecture?

    There’s immense room for creativity in the role of a software development process engineer.

    • By pjmlp 2026-03-08 20:42 (1 reply)

      The fallacy is to believe there is still a place for everyone.

      • By cadamsdotcom 2026-03-08 22:08 (4 replies)

        If only someone could invent some kind of educational institution to teach people new skills!

        People could learn things and join the workforce!

        /s

        • By kevinh 2026-03-08 23:18

          Sure, you could go to an educational institution for 2-4 years and hope that your new job doesn't get automated away before you graduate.

        • By formerly_proven 2026-03-09 17:08

          When I grow up, I'm going to Bovine University!

        • By pjmlp 2026-03-08 22:12 (1 reply)

          If only companies would actually hire people instead of optimising their worksheets for late stage capitalism at the expense of human capital.

          New skills mean shit when there is no job market that can take everyone.

          Usually, people with takes like yours never had to actually fight for months, or years, to finally get back on track.

          Naturally, when selling AI, the take is to downplay its impact on people lives.

          • By cadamsdotcom 2026-03-08 22:50 (1 reply)

            I sympathise with the perspective - but software has always been this way, there’s always been creative destruction and the field has never stood still.

            We signed up for this. YOU signed up for this. No one owes anyone a job. When the activities that create value change, move with it or get left behind.

            If you prefer a vocation which has been the same for centuries that option is open to you. But to get into the software job market you’d best ask if the job you are trying to get is obsolete, and focus on fixing your skills and job search process/methodology.

            The biggest question is “where is the net-new hiring?” (as opposed to backfill hiring). And then, if you are out of the market, you have time on your hands to match your skills to your answer.

            • By pjmlp 2026-03-08 23:08

              I definitely did not sign up for this, and I am very critical of taking part in any project whose goal is to make people jobless.

        • By palmotea 2026-03-09 14:27

          > If only someone could invent some kind of educational institution to teach people new skills!

          > People could learn things and join the workforce!

          > /s

          The point is to always, always blame the individuals being harmed for the structural problems they face.

          Lost your job? Well fuck you if you can't afford to pay a lot of money to go back to school for years and support your family out of savings in the mean time. It's your own damn fault for not being rich enough.

    • By jplusequalt 2026-03-08 20:18

      No there isn't, because *you* aren't doing any of the actual work.

  • By athrowaway3z 2026-03-08 19:33 (3 replies)

    Just to throw out the counterargument here.

    The way AI replaces work is that there is an enormous ROI to working with fewer (and smarter) people. Those social interactions are a big part of work, but they are only very rarely "the work", and they cost time. In the cases where they are required, they seem to cluster, and the ROI of fewer social synchronization problems increases even more.

    But that might all be wrong. I'm not confident enough to say where we'll land. I also see it's possible demand will go up faster because of, and enabled by, the increase in supply, and that the social aspect is "the real work" to be done.

    • By boshalfoshal 2026-03-10 21:14

      I agree. I think most companies would be better off being 100% AI driven, since synchronization problems for agents (or whatever the fad will be) are likely much smaller than human social synchronization problems, and there is richer information transfer between "workers" (so less ambiguity, fewer tradeoffs to be made, etc.).

      As soon as a person enters the loop you add a manual sync point that probably doesn't need to be there. I think this is why you are increasingly seeing companies tell their people to be "on the loop" or "out of the loop" with their AI. The less syncing with a person, the better. And I think once this experiment runs its course, we will probably find out that human social interaction matters much less than we thought it did, especially for super transactional things like a corporate job where most of your work is done on a computer.

    • By ipython 2026-03-08 22:14

      Why do you need “smarter” people? Isn’t the llm the replacement for the intellect?

    • By abmmgb 2026-03-08 19:36 (2 replies)

      Agreed with that take: it is not direct replacement at present, but rather job-market shrinkage in sectors where AI can get more work done.

      • By _aavaa_ 2026-03-08 19:41

        What is job-market shrinkage but the replacement of an unfilled/unposted position? The distinction between (obviating the need to hire someone because the AI does the work) and (firing someone and having the AI do the work) is quickly becoming a distinction without a difference, especially if you're looking for a job.

      • By rkomorn 2026-03-08 19:45 (1 reply)

        This feels very much like a distinction without a difference to me.

        • By abmmgb 2026-03-09 21:04

          you are probably right, it is still job replacement; the underlying mechanism is secondary/irrelevant.
