"AI-first" is the new Return To Office

2025-04-30 13:38 · www.anildash.com

A blog about making culture. Since 1999.

The latest fad amongst tech CEOs is no longer "founder mode", or taking drugs that they would fire you for taking, or telling everybody to return to the office — it's demanding that all work be AI-first! This is a great idea if you think nobody at your company is great at what they do. It may otherwise be a suboptimal strategy. Let's dive in!

Let's use me as a case study. I'm pretty okay at writing. For example, one time I wrote a fairly technical analysis of Twitter's platform strategy that inspired Will.I.Am of the Black Eyed Peas to start Twitter beef with me two years later when he read the post and took offense to my referring to him as "nobody’s favorite rapper".

This is something your GPTs cannot do, I assure you. An average LLM won't even know that Drake's favorite MIME type is application/pdf. Chalk one up for the greatness of human creativity.

The AI-First Mind Virus

Shopify's CEO Tobi Lütke (personal motto: "what if a Canadian was all the worst things about the United States?") started the "AI-first" trend, with one of those big memos that included, amongst other things, the declaration that "We will add AI usage questions to our performance and peer review questionnaire." This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? I'm actually old enough that I was at different workplaces when they started using spreadsheets and email and the web, and I can tell you, they absolutely didn't have to drive adoption by making people fill out paperwork about how they were definitely using the cool new technology. Isn't that interesting?

Some of the other CEOs talking about the use of AI are a little more reasonable. Duolingo's CEO Luis von Ahn seems to be trying to be somewhat more moderate in his memo, stating plainly that he doesn't see AI replacing his employees. (Though that does immediately raise the "who brought that up?" question...) Yet even in this more even-handed take, we still get the insistence that "AI use will be part of what we evaluate in performance reviews". This is really weird!

The funny thing is, I'm not saying LLMs are without their uses. Let's use me as a case study again. I'm a lousy coder, these days. I haven't had time to keep up my skills, and the area I focused on for most of my dev career (front end web development) changes particularly quickly. So I use some of the modern tools to help me get up to speed and get more done in a limited amount of time, because otherwise I'm woefully unproductive in the short windows I have to code in my free time.

To be explicit: I code on the weekends, not professionally. That means I'm not very good at it. I'm certainly nothing like the incredibly talented developers that I've had the good fortune to work with over the years. I'm just fluent enough to be able to debug the broken code that LLMs generate, or to catch the bugs that they spew out by default. And I'm sure I don't even catch all the bugs that pop up, but fortunately, I'm not making any production systems; I'm just building little toy apps and sites for myself.

This is an important illustration: AI is really good for helping you if you're bad at something, or at least below average. But it's probably not the right tool if you're great at something. So why would these CEOs be saying, almost all using the exact same phrasing, that everyone at their companies should be using these tools? Do they think their employees are all bad at their jobs?

Groupthink and signaling

Big tech CEOs and VCs really love performing for each other. We know they hang out in group chats like high schoolers, preening and sending each other texts, each trying to make sure they're all wearing the latest fashions, whether it's a gold chain or a MAGA hat or just repeating a phrase that they heard from another founder. A key way of showing that they're part of this cohort is to make sure they're having a tantrum and acting out against their workers fairly regularly.

The return to office fad was a big part of this effort, often largely motivated by reacting to the show of worker power in the racial justice activism efforts of 2020. Similarly, being AI-first shows that a company is participating in the AI trend in the "right" way, by imposing it on workers, rather than trusting workers to judge what tools are useful for them to do their jobs.

A more normal policy on AI at a company might be something like this:

Our IT department has evaluated a set of LLM tools and determined that these ones meet our requirements for security, performance, data governance, reliability, manageability and integration with our workflows. We'll be doing a controlled deployment of these tools and you can choose to use them if you think they'll help you with your work; please share your feedback on whether they are helpful, and what might make them more useful for you over time. Here are the ways these AI tools meet our corporate standards for compliance with intellectual property consent, sustainability and environmental goals, and accessibility.

This would not get you invited to the fascist VC group chat, tho!

AI-Second? Third?

How did we get here? What can we do? Maybe it starts by trying to just... be normal about technology.

There's an orthodoxy in tech tycoon circles that's increasingly referred to, ironically, as "tech optimism". I say "ironically", because there's nothing optimistic about it. The culture is one of deep insecurity, reacting defensively, or even lashing out aggressively, when faced with any critical conversation about new technology. That tendency is paired with a desperate and facile cheerleading of startups, ignoring the often equally interesting technology stories that come from academia, or from mature industries, or from noncommercial and open source communities that don't get tons of media coverage, but quietly push forward innovating without the fame and fortune. By contrast, those of us who actually are optimistic about technology (usually because we either create it, or are in communities with those who do) are just happily moving forward, not worrying when people point out the bugs that we all ought to be fixing together.

We don't actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility that they have for us. It's strange to have to say it, but... there are people picking up and adopting AI tools on their own, because they find them useful. This is true, despite the fact that there is so goddamn much AI hype out there, with snake oil salesmen pushing their bullshit religion of magical thinking machines and overpromising that these AI tools can do tasks that they're simply not capable of performing. It's telling that the creators of so many of the AI tools don't even have enough confidence in their offerings to simply let users choose to adopt them, and are instead forcing them into users' faces in every possible corner of their apps and websites.

The strangest part is, the AI pushers don't have to lie about what AI can do! If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them. If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then... you are a bad leader and should step down. Because you've created a broken culture.

But I don't think the audience for these memos is really the people who work at these companies. I think the audience is the other CEOs and investors and VCs in the industry, just as it was for the other fads of the last few years. And I expect that AI will indeed be part of how we evaluate performance in the future, but mostly in that the way CEOs communicate to their teams about technologies like AI will be part of how we all evaluate their performance as leaders.



Comments

  • By JimDabell 2025-04-30 15:24 · 13 replies

    > This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? I'm actually old enough that I was at different workplaces when they started using spreadsheets and email and the web, and I can tell you, they absolutely didn't have to drive adoption by making people fill out paperwork about how they were definitely using the cool new technology.

    I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.

    For instance, if you weren’t around before version control became the norm, then you probably missed the legions of developers who said things like “Ugh, why do I have to use this stupid thing? It just slows me down and gets in my way! Why can’t I just focus on writing code?” Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.

    AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner. If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?

    An awful lot of people are stuck in a rut where they tried an early model, got poor results to begin with, and refused to use it again. These people do need a firm, top-down push, or they will be left behind.

    This has happened before, many times. Contrary to the article’s claims, sometimes top-down pushes have been necessary even for things we now consider near universally good and productive.

    • By mjr00 2025-04-30 15:35 · 2 replies

      > I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.

      There was never any widespread resistance to "the Internet", let's be real here.

      In any case, adoption of all those things was bottom-up rather than top-down. CEOs were not mandating that tech teams use version control or ORMs or automated testing. It was tech leadership, with a lot of support from ICs in their department.

      Tech people in particular are excited about trying new things. I never heard CEOs mandating top-down that teams use Kubernetes and adding people's Kubernetes usage into their performance reviews, yet Kubernetes spread like wildfire--to the point where many software companies which had no business using something as complicated as Kubernetes started using it. Same with other flavor-of-the-month tools and approaches like event sourcing, NoSQL/MongoDB, etc.

      If anything, as a leader you need to slow down adoption of new technology rather than force it upon people. The idea that senior leadership needs to push to get AI used is highly unusual, to say the least.

      • By brummm 2025-04-30 16:05 · 3 replies

        Isn't Amazon's "APIs everywhere" mandate another example of just this, coming right from the top? In some companies CEOs double as the tech lead, no?

        • By mjr00 2025-04-30 16:22 · 1 reply

          The API mandate notably specified what rather than how. "It doesn’t matter what technology [you] use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter." In some ways it's quite the opposite of CEO mandates to use AI, which specify how you must build things (using AI!) rather than what.

          The equivalent of the API mandate for AI would be if CEOs were demanding that all products include a "Summarize Content" button. Or that all code repositories contain a summary of their contents in a README. The use of AI to solve these problems would be an implementation detail.

          • By ryandrake 2025-04-30 17:26

            Or if CEOs were demanding that everything be written in Python. Programming language should also be an implementation detail, not something a CEO would worry about. Just like "using AI."

        • By jimbokun 2025-04-30 18:36 · 1 reply

          My recollection is that AWS was extremely popular among developers very early on.

          • By skwirl 2025-04-30 21:19

            Do you just mean within Amazon? Because outside of Amazon, there was major resistance to AWS/cloud computing in general from older devs highly invested in the status quo. I have spent a significant amount of effort in my career fighting for cloud adoption.

        • By dowager_dan99 2025-04-30 16:44

          to me this was more about guiding towards a desired outcome. An opinionated bet, but not overly prescriptive. "AI first" is saying do everything with AI and then hope you find some efficiencies, almost by accident.

      • By nitwit005 2025-05-01 0:46

        The trendy stuff does get mandated from the top. It just gets mandated when it's already caught on elsewhere, and the CEO wonders why they aren't cool too.

    • By trilobyte 2025-04-30 15:30 · 1 reply

      I see you speak from experience. I feel like I'm watching the same cycle play out over and over again, which is that a new, transformative technology lands, people with a vested interest spend a lot of time denouncing it (your examples mostly land for me), the new technology gets over-hyped and fails to meet some bar and the haters all start crowing about how it's just B.S. and won't ever be useful, etc. etc.

      Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.

      With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes that could come if things keep going the way they are with A.I., but even if the technology hits a wall right now that stops it from advancing things have already changed and it's important to embrace where we are and adapt".

      • By Centigonal 2025-04-30 15:36 · 4 replies

        Such a sane, nuanced take on new technologies. I wish more people were outspoken about holding these types of opinions.

        It feels like the AI discourse is often dominated by irrationally exuberant AI boosters and people with an overwhelming, knee-jerk hatred of the technology, and I often feel like reading tech news is like watching two people who are both wrong argue with one another.

        • By pixl97 2025-04-30 15:53

          Moderates typically have a lot less to say than extremists and don't feel a need to have their passion heard by the world. The discussion ends up being controlled by the haters and hypers.

          New technologies in companies commonly have the same pitfalls that burn out users. The companies have very little ability to tell if a technology is good or bad at the purchasing level. The c-levels that approve the invoices are commonly swayed not by the merits of the technology, but the persuasion of the salespeople or the fears of others in the same industries. This leads to a lot of technology that could/should be good being just absolute crap for the end user.

          Quite often the 'best' or at least most useful technology shows up via shadow IT.

        • By remich 2025-04-30 16:40

          And a subgroup (or cousin?) of the exuberant AI boosters are the people absolutely convinced that LLM research leads to the singularity in the next 18-24 months.

          I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.

          The more experience you have in a given domain, the more narrow your use-cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context mods), but paradoxically the better you will be at using the tools because of your increased ability to spot errors.

          *Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.

        • By throw234234234 2025-05-01 3:23

          Is AI like other technologies though? Most technologies require a learning curve that usually increases as the technology develops and adds features. They become "skills" in themselves. They are tools to be used; not the users of the tools themselves.

          AI seems like the opposite to me. It is the technology that is "the learning curve" in the long term. Its whole point long term is to emulate learning/intelligence - it is trying to be the worker, not the worker's tool (whether it succeeds or not is another story). The industry seems to treat it as another tech/tool/etc which you need experience/training in, which makes me wonder whether that's the right approach long term.

          Many people will be wondering (incl myself) whether learning to use "AI" is really just an accessibility/interface problem. My time is valuable, should I bother if the productivity gains (which may only last a year or so before it changes again) outweigh the learning time/cost of developing tools/wrappers/etc? Everyone will have a different answer to this question based on their current tradeoffs.

          I ask the question: If I don't need it right now (e.g. code is only 10-20% of my job for example), why bother learning it when the future AI will require even less intelligence/learning to use?

        • By ryandrake 2025-04-30 15:42

          Unfortunately, thoughtful, nuanced takes don't make headlines, don't get into Harvard Business Review, and don't end up as memos on the CEO's desk. Breathless advocacy and knee-jerk dismissals get the clicks and those are the takes that end up bubbling to the top and influencing the decision makers.

    • By n4r9 2025-04-30 16:00 · 2 replies

      > Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.

      But why do they have to fill out some paperwork? If the new technology is a genuine productivity boost and any sort of meaningful performance review is undertaken, then it will show up if they're performing sub-par compared to colleagues.

      The real problem is that senior management are lazily passing down mandates in lieu of trusting middle management to do effective performance reviews. Just as it was with Return To Office.

      • By Faark 2025-04-30 20:48 · 1 reply

        Social responsibility?

        I have a few colleagues who like the way they work and would prefer everything to stay the way it is. Such "skilled artisans" might be on the way out, replaced by "Ai factory" mass production.

        Sure, they could just be kicked out and replaced. But they worked with the company, in some cases for a decade plus. Giving them a fair picture of what seems to be down the road is the very least I'd expect of a company treating its workers as more than just replaceable cogwheels.

        • By n4r9 2025-05-01 18:21

          For sure. It's not inherently bad that a company encourages their staff to use AI. But there are better ways than just announcing you are now AI-first in everything and they must tell you how they're using AI in performance reviews. That just creates resentment and incentivises gaming the system.

      • By SpicyLemonZest 2025-04-30 16:14 · 2 replies

        Some people are good enough that they'll do well on performance reviews anyway, and if there's a new technology that's acting as a force multiplier those are exactly the people who the company most wants to adopt it.

        • By n4r9 2025-04-30 16:31

          Fair point about comparing performance reviews. It's also practically impossible to judge someone's performance on novel tasks, or whether they're caring enough about tech debt. Even as I wrote my first post there was a nagging voice in my head saying "Almost no one does performance reviews well enough for that".

          In my (limited) experience, the tasks you want to assign to elite devs are less amenable to AI in the first place.

        • By DrillShopper 2025-04-30 16:30 · 1 reply

          Perhaps then they should focus on getting their 0.1x developers using it to get competent rather than trying to get their 1x developers to 10x using it.

          • By n4r9 2025-05-01 18:24

            Interesting point. If eleven 1x developers multiply their productivity by 10, that's roughly the same as one 10x developer doing the same. Assuming of course that such quantities can be meaningfully applied.

    • By JohnFen 2025-04-30 15:51

      > if you weren’t around before version control became the norm, then you probably missed the legions of developers

      I was around before version control and I don't remember that reaction from more than an insignificant percentage of devs. Most devs reacted to the advent of version control with glee because it eased a real pain point.

    • By closeparen 2025-04-30 16:19 · 2 replies

      It is incredibly early. Copilot and Cursor are both incapable of writing a mapping between two structs with identical fields - some of the most menial coding imaginable - because they either don’t have or won’t use basic coding mechanics like looking up the signature of a thing before writing code about it. This is the technology that should be making me 10x more productive? This honestly feels like an emperor has no clothes situation. Being charitable, maybe the hype is all from people generating code into empty projects with no existing context?

      • By lnenad 2025-05-01 14:29

        I keep running into these sorts of messages in hn but my experience couldn't be more different. Even autocomplete does this automatically for me, let alone using chat/agent in cursor/augment code.

      • By jayGlow 2025-04-30 20:42 · 1 reply

      That's weird; I'm pretty sure I've done that exact thing multiple times with ChatGPT. I've noticed Copilot doesn't always work well, but it's still frequently useful for me.

        • By closeparen 2025-05-05 17:08

          I'm sure if I copy and pasted to setup the problem in a ChatGPT browser window, it would work. But I would expect a $10 billion "AI powered" editor to be capable of using the go-to-definition on my behalf...

    • By dowager_dan99 2025-04-30 16:42

      >> AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try.

      My take-away was this is exactly what the OP is targeting. Management's job is to convince you to try it and to help make it demonstrate value; mandating "thou shalt be AI-first" does neither of these effectively - ironically some of your best developers will require the most evidence to be convinced, fight the hardest, and have the best options to jump ship if you push far enough. It's just weak management when there's an obvious alternative. Dash is a developer relations/evangelist, so it's not surprising he bristles at this approach.

    • By bluefirebrand 2025-04-30 18:27

      > AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner

      I'm not a beginner though. In fact I'm actually very experienced at doing my job

      Which is why I don't need non-technical management and AI consultants to be telling me what tools I should be using

      If I thought AI was going to be a useful tool for me then I would use it

      But so far it hasn't, so I don't

      I'm not investing my time and energy into a "skill" that doesn't seem like it is going to pay off

    • By tbrownaw 2025-04-30 16:44

      > If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?

      This way of phrasing it rejects the possibility that maybe the new thing really does suck, and that this can sometimes be identified pretty quickly.

    • By suddenlybananas 2025-04-30 15:29

      There have also been many fads that were forced on people which fizzled out and didn't amount to much.

    • By gwbas1c 2025-04-30 20:46

      > These people do need a firm, top-down push, or they will be left behind.

      > even for things we now consider near universally

      We aren't at the point where AI tools provide a major productivity boost. Sometimes they help, sometimes they don't, sometimes working with AI has negative productivity.

      Assuming AI improves to the point where employees who use it are significantly more productive... They'll excel relative to their peers. The people who can't figure it out will underperform.

    • By rienbdj 2025-05-01 6:38

      > Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner.

      I think it’s a very easy skill to acquire, which is why we’ve seen such fast user growth for these products.

      Most of them have the same interaction model as talking to someone on iMessage / WhatsApp / Messenger.

    • By bootsmann 2025-04-30 22:04

      I think it would be a worthwhile exercise for yourself to find and replace every mention of AI in your post with blockchain or metaverse. Just because something is new doesn’t mean its useful and if you’re having to force knowledge workers to adopt something supposedly making them more productive then its probably a bad sign.

    • By beaugunderson 2025-05-02 6:05

      > I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody

      Version control is vegetables. You often have to cajole kids to eat their vegetables. AI is being sold as if it's dessert. You don't usually have to cajole kids to eat their dessert, and it's weird to make that part of a performance review. If it really made people more efficient it would spread like wildfire as people sold all of their friends on it. Instead people try it, encounter some small wins and some annoying losses, judge that it's basically "meh", and we get legions of non-technical managers who for some reason feel they need to mandate its use.

  • By andy99 2025-04-30 15:29 · 4 replies

    > did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack?
    
    I see this is already a favorite quote amongst commenters. It's mine too: I had a job ~15 years ago where the company had introduced an internal social network that was obviously trying to ride on the coattails of Facebook et al without understanding why people liked social networks.

    Nobody used it because it was useless, but management evidently was invested in it because your profile and use of that internal site did in fact factor in to performance reviews.

    This didn't last long, maybe only one review cycle, before everyone realized it was irretrievably lost. The parallel with the article is very apt, though. The stick instead of the carrot is basically an indication that a dumb management idea is in its death throes.

    • By alabastervlog 2025-04-30 15:49 · 3 replies

      That "Facebook for business" fad was so fucking stupid and managed to last what, about a decade?

      It's a great example of how executive group-think can drive whole multi-industry initiatives that are very-obviously, to anyone outside that bubble, pure waste.

      • By pimlottc 2025-04-30 16:51

        Nobody wants Facebook at work, what they want is a comprehensive org chart that's actually up-to-date.

      • By rsynnott 2025-04-30 19:46 · 1 reply

        Longer; a lot of the metaverse hype was pretty much a reframing of it.

        • By alabastervlog 2025-04-30 20:18

          I once worked at a company that'd acquired a virtual-office company. I wasn't around for the acquisition, but it really smelled like one rich person bailing out another's failed investment (or someone on the board had a bunch of money in the acquisition target, or something along those lines).

          To justify owning the useless damn thing, they insisted everyone use it, basically like Slack if it ate 3-4x the resources (really saying something, given Electron already eating 5-10x the resources it ought to need for any given task), monopolized a screen when in use, and added all the awkward elements of physical environments to virtual ones for no reason ("is it weird if 'I' 'sit' in this chair 'next to' this other 'person' when there are other chairs available in the room?", or "oh shit where's that meeting room 'physically' located, again? I think I'm lost...") while removing none of the awkwardness of virtual interactions.

          Truly, bizarrely pointless. It was like some shit out of the Silicon Valley TV show, so absurd it was hard to believe it was real. I swear to god, I'm not making this up, they even had in-world presentations, so you could add all the fun of having a bad angle on a screen or being too far away to comfortably read the text to the joy of a Zoom screen-share. Totally nuts. Luckily you could also maximize whatever was being presented, but... hooray, your best feature is that I can ignore all the things that make your dumb crap distinctive? What a win.

          This is what I think of every time I see anyone trying to promote Zuckerberg's weird, bad idea. I assure you, being in VR goggles would not have made the experience either more productive or more pleasant. Nobody who's ever tried to work like this even for one week could possibly think it's a good idea to invest in it.

    • By nkrisc 2025-04-30 15:38

      I would say we might have worked at the same company, but there were so many companies trying that at the time we may very well not have.

      Where I worked, it was an open secret that the CEO had an alter ego he used on the site. I have no idea if he knew that we all knew who it really was (I have to assume he did), but everyone played along.

      By the time I had worked there it had been around for a few years already and once a quarter the head of our group set time aside for everyone to "engage" with it for an hour so that no one would be dinged on their performance review.

    • By romellem 2025-04-30 18:41

      Oh wow, totally forgot about [Yammer][1]. What a waste of time.

      [1]: https://en.wikipedia.org/wiki/Viva_Engage

    • By ramesh31 2025-04-30 15:36 · 2 replies

      A social network never did a week's worth of work for me in 10 minutes. Keep swinging that hammer, John Henry.

      • By skybrian 2025-04-30 15:40 · 1 reply

        If it makes that much of a difference and quality is the same, nobody will care whether you used AI or not. It’s an implementation detail.

      • By h2zizzle 2025-05-01 15:15

        John Henry won, he just died doing it. Another victim of racial healthcare disparities, probs.

        EDIT: Also, it depends on your chronological window. Is "posting a video advertising your services that reaches 30,000 people" a week's worth of phone-calling? Of mailing letters? Of knocking on doors? Productivity increases are incremental and built on a ton of up-front capex, but they're there all the same. AI-pushers, of all people, should understand and be ready to defend this fact.

  • By srveale 2025-04-30 14:03 · 3 replies

    I don't necessarily disagree with the main argument, but

    > did your boss ever have to send you a memo demanding that you use a smartphone

    Yes, there were tons of jobs that required you to have a smartphone, and still do. At my second job, they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them. I know this was true at many companies.

    This isn't the perfect analogy anyway, since one major reason companies did this was to increase security, while forcing AI onto begrudging workers feels like it could have the opposite effect. The commonality is efficiency, or at least the perception of it by upper management.

    One example I can think of where there was worker pushback but it makes total sense is the use of electronic medical records. Doctors/nurses originally didn't want to, and there are certainly a lot of problems with the tech, but I don't think anyone is suggesting now that we should go back to paper.

    You can make the argument that an "AI first" mandate will backfire, but the notion that workers will collectively gravitate towards new tech is not true in general.

    • By Uehreka 2025-04-30 15:28

      > Yes, there were tons of jobs that required you to have a smartphone, and still do. I remember my second job, they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them.

      Anil is referring specifically to the way that people who were told to use a Blackberry would bring an iPhone to work anyway and demand that IT support it because it was so much better. In the late 2000s Blackberries were a top-down mandate that failed because iPhones were a bottom-up revolution that was too successful to ban.

      So look for situations where employees are using their personal AI subscriptions for work and are starting to demand that IT budget for it so they don’t have to pay out of pocket. I’m seeing this right now at my job with GitHub Copilot.

    • By pxx 2025-04-30 15:20 · 1 reply

      I don't think your example is really a counterexample. Work-provided Blackberries allowed you to be more responsive to work messages while communicating over an ostensibly secure medium.

      On the other hand, making sure that people use AI for performance reviews would be akin to measuring the percentage of work days on which you used your Blackberry. That's not something anyone sane ever did.

      Somewhat in the same vein, nobody ever sent a directive saying that all interoffice memoranda must be typed on a Blackberry.

      • By ryandrake 2025-04-30 15:31

        Yea, the point is, if a product or technology is useful, people will want to use it. They'll bang down your door to be allowed to use it. They'll even surreptitiously use it if you don't allow it. If you have to mandate that they use it, what does that really say about the tool?

        A better example is probably source control. It might not have been true in the past, but these days, nobody has to mandate that you use source control. We all know the benefits, and if we're starting a new software business, we're going to use source control by default from day one.

    • By anildash 2025-04-30 15:30 · 2 replies

      Yeah, I think that's fair, but those bosses who made us get Blackberries mostly did it because they wanted to be able to call us and make us work, not because we had to be convinced that smartphones had value, right? We all ended up buying smartphones on our own as well.

      • By srveale 2025-04-30 18:01 · 1 reply

        You may underestimate how many people do not need to be convinced. Again, I'll refrain from making a value judgment, but the hard numbers show that LLMs have been one of the most quickly adopted technologies in the history of mankind, including the time before anyone was forced to use them.

        Not sure if these are the best stats to illustrate the point, but ChatGPT was released November 2022, 2.5 years ago, and they currently claim ~1 billion users [1]

        By comparison, iPhone sales were something like 30 million over the same time period, June 2007 through 2009. [2]

        In other words, what took ChatGPT several months took smartphones several years.

        Of course there are problems with the comparison (iPhones are expensive, many people bought each successive iPhone version so unit sales overstate unique users, Sam Altman is exaggerating, people use LLMs other than ChatGPT, blah blah blah), so maybe let's not concentrate on this particular analogy. The point is: even a very skeptical view of how many people use LLMs day-to-day has to acknowledge they are relatively popular, for better or worse.

        I think we're better served trying to keep the cat from scratching us rather than trying to put it back in the bag. Ham-fisted megalomaniac CEOs forcing a dangerous technology on workers before we all understand the danger is a big problem, that's for sure. To the original point, "AI-first is the new RTO", there's definitely some juice there, but it's not because the general public is anti-AI.

        [1] https://www.forbes.com/sites/martineparis/2025/04/12/chatgpt...

        [2] https://www.globaldata.com/data-insights/technology--media-a...

        • By bluefirebrand 2025-04-30 18:37 · 1 reply

          > In other words, what took ChatGPT several months took smartphones several years

          You are comparing a cheap subscription service to an expensive piece of hardware that would replace hardware that most people already owned

          Of course smartphones were slower to adopt. Everyone had phones already, and they were expensive!

          ChatGPT is *free*.

          • By srveale 2025-04-30 19:59

            Do you have any thoughts on the second half of my comment?

      • By remich 2025-04-30 16:52

        Well, we all ended up buying smartphones eventually. But the delta between when Blackberries were first adopted in corporate environments and when iPhones/Androids became can't-miss technologies wasn't small.

HackerNews