AI adoption and Solow's productivity paradox

2026-02-18 1:40 · fortune.com

In the 1980s, economist Robert Solow made an observation that economists have been reminded of in today’s AI boom: “You can see the computer age everywhere but in the productivity statistics.”

In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age. Following the advent of the transistors, microprocessors, integrated circuits, and memory chips of the 1960s, economists and companies expected these new technologies to disrupt workplaces and produce a surge of productivity. Instead, productivity growth slowed, dropping from an annual average of 2.9% between 1948 and 1973 to 1.1% thereafter.

Newfangled computers were at times actually producing too much information, generating agonizingly detailed reports and printing them on reams of paper. What had promised to be a boon to workplace productivity was, for several years, a bust. The unexpected outcome became known as Solow’s productivity paradox.

“You can see the computer age everywhere but in the productivity statistics,” Solow wrote in a New York Times Book Review article in 1987.

New data on how C-suite executives are (or aren’t) using AI shows history repeating itself, complicating the similar promises economists and Big Tech founders have made about the technology’s impact on the workplace and economy. According to a Financial Times analysis of earnings calls from September 2024 to 2025, 374 companies in the S&P 500 mentioned AI, most of them describing its implementation as entirely positive. Yet those positive adoptions aren’t being reflected in broader productivity gains.

A study published this month by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives at firms that responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted.

Firms’ expectations of AI’s workplace and economic impact nonetheless remained substantial: Executives forecast AI will increase productivity by 1.4% and output by 0.8% over the next three years. And while firms expected a 0.7% cut to employment over this period, individual employees surveyed expected a 0.5% increase.

In 2023, MIT researchers claimed AI implementation could increase a worker’s performance by nearly 40% compared to workers who didn’t use the technology. But emerging data failing to show these promised productivity gains has led economists to wonder when—or if—AI will offer a return on corporate investments, which swelled to more than $250 billion in 2024.

“AI is everywhere except in the incoming macroeconomic data,” Apollo chief economist Torsten Slok wrote in a recent blog post, invoking Solow’s observation from nearly 40 years ago. “Today, you don’t see AI in the employment data, productivity data, or inflation data.”

Slok added that outside of the Magnificent Seven, there are “no signs of AI in profit margins or earnings expectations.”

Slok cited a slew of academic studies on AI and productivity that paint a contradictory picture of the technology’s utility. Last November, the Federal Reserve Bank of St. Louis reported in its State of Generative AI Adoption report that it had observed a 1.9% increase in excess cumulative productivity growth since the late-2022 introduction of ChatGPT. A 2024 MIT study, however, projected a more modest 0.5% increase in productivity over the following decade.

“I don’t think we should belittle 0.5% in 10 years. That’s better than zero,” study author and Nobel laureate Daron Acemoglu said at the time. “But it’s just disappointing relative to the promises that people in the industry and in tech journalism are making.”

Other emerging research can offer reasons why: Workforce solutions firm ManpowerGroup’s 2026 Global Talent Barometer found that across nearly 14,000 workers in 19 countries, workers’ regular AI use increased 13% in 2025, but confidence in the technology’s utility plummeted 18%, indicating persistent distrust.

Nickle LaMoreaux, IBM’s chief human resources officer, said last week the tech giant would triple its number of young hires, suggesting that despite AI’s ability to automate some of the required tasks, displacing entry-level workers would create a dearth of middle managers down the line, endangering the company’s leadership pipeline.

To be sure, this productivity pattern could reverse. The IT boom of the 1970s and ’80s eventually produced a surge of productivity in the 1990s and early 2000s, including a 1.5% increase in productivity growth from 1995 to 2005 following decades of slump.

Erik Brynjolfsson, economist and director of Stanford University’s Digital Economy Lab, noted in a Financial Times op-ed that the trend may already be reversing. He observed that fourth-quarter GDP was tracking up 3.7% even as last week’s jobs report revised job gains down to just 181,000, suggesting a productivity surge. His own analysis indicated a U.S. productivity jump of 2.7% last year, which he attributed to a transition from investing in AI to reaping the benefits of the technology. Former Pimco CEO and economist Mohamed El-Erian also noted that job growth and GDP growth are continuing to decouple, in part as a result of continued AI adoption, a phenomenon similar to what occurred in the 1990s with office automation.

Slok similarly saw the future impact of AI as potentially resembling a “J-curve” of an initial slowdown in performance and results, followed by an exponential surge. He said whether AI’s productivity gains would follow this pattern would depend on the value created by AI. 

So far, AI’s path has already diverged from its IT predecessor’s. Slok noted that in the 1980s, an innovator in the IT space had monopoly pricing power until competitors could create similar products. Today, however, AI tools are readily accessible as a result of “fierce competition” between large language model builders driving down prices.

Therefore, Slok posited, the future of AI productivity would depend on companies’ interest in taking advantage of the technology and continuing to incorporate it into their workplaces. “In other words, from a macro perspective, the value creation is not the product,” Slok said, “but how generative AI is used and implemented in different sectors in the economy.”



Comments

  • By crazygringo 2026-02-18 2:26 (27 replies)

    Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].

    Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970's or 1980's despite all the computerization. It wasn't until the mid-to-late 1990's that information technology finally started to show clear benefit to the economy overall.

    The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.

    And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the people it's helping are currently at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise and costs will come down over time, as we learn to integrate it with best practices.

    [1] https://en.wikipedia.org/wiki/Productivity_paradox

    • By kace91 2026-02-18 2:43 (25 replies)

      The comparison seems flawed in terms of cost.

      A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970’s manual office worker to use an early computer.

      Larger implementations like automating customer service might be more costly, but I think there are enough short term supposed benefits that something should be showing there.

      • By abraxas 2026-02-18 3:05 (21 replies)

        What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.

        • By hattmall 2026-02-18 3:40 (4 replies)

          I find that highly unlikely; coding is AI's best-value use case by far. Right now office workers see marginal benefits, but it's not like it's an order of magnitude difference. AI drafts an email, you have to check and edit it, then send it. In many cases it's a toss-up whether that actually saved time, and even when it did, it's not like the pace of work is breakneck anyway, so the benefit is that some office workers have a bit more idle time at the desk, because you always hit some wall that's out of your control. Maybe AI saves you a Google search or a doc lookup here and there. You still need to check everything, and it can cause mistakes that take longer too. Here's an example from today.

          An assistant is dispatching a courier to get medical records. AI autocompletes to include the address. Normally they wouldn't put the address -- the courier knows who we work with -- but AI added it, so why not. Except it's the wrong address, because it's for a different doctor with the same name. At least they knew to verify it, but still, mistakes like this happening at scale make the other time savings pretty close to a wash.

        • By Aurornis 2026-02-18 4:24 (7 replies)

          > but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.

          That book was very different from what I expected based on all of the internet comment takes about it. The premise was really thin and didn't actually support the idea that the jobs don't generate value. It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society.

          He couldn't even keep that straight, though. There's a part where he argues that open source work is valuable but corporate programmers are doing bullshit work that isn't socially productive because they're connecting disparate things together with glue code? It didn't make sense and you could see that he didn't really understand software, other than how he imagined it fitting into his idealized world where everything anarchist and open source is good and everything corporate and capitalist is bad. Once you see how little he understands about a topic you're familiar with, it's hard to unsee it in his discussions of everything else.

          That said, he still wasn't arguing that the work didn't generate economic value. Jobs that don't provide value for a company are cut, eventually. They exist because the company gets more benefit out of the job existing than it costs to employ those people. The "bullshit jobs" idea was more about feelings and notions of societal impact than economic value.

      • By gruez 2026-02-18 2:52 (4 replies)

        How viable are the $20/month subscriptions for actual work and are they loss making for Anthropic? I've heard both of people needing to get higher tiers to get anything done in Claude Code and also that the subscriptions are (heavily?) subsidized by Anthropic, so the "just another $20 SaaS" argument doesn't sound too good.

      • By vidarh 2026-02-18 11:03

        A $20 Claude subscription lets you scratch the surface. A $20 Claude subscription without training means you have a lot of people spending time figuring out how to use it, and then maybe getting a bit of payback, but earning back that training is going to take time.

        Getting people to figure out how to enter questions is easy. Getting people to a point where they don't burn up all the savings by getting into unproductive conversations with the agent when it gets something wrong, is not so easy.

      • By jstummbillig 2026-02-18 9:15 (1 reply)

        I see no reason to believe that just handing a Claude subscription to everyone in a company simply creates economic benefit. I don't think it's easier than "automating customer service". It's actually very strange.

        I think it could definitely already create economic benefit, after someone clearly instructs people how to use it and how to integrate it into their work. Most people are really not good at figuring that out on their own, in a busy workday, when left to their own devices -- and companies are still finding out where the ball is moving and what to organize around, too.

        So I can totally see a lot of failed experiments and people slowly figuring stuff out, and all of that not translating to measurable surpluses in a corp, in a setup similar to what OP laid out.

      • By 46493168 2026-02-18 3:20 (1 reply)

        >I think there are enough short term supposed benefits that something should be showing there.

        As measured by whom? The same managers who demanded we all return to the office 5 days a week because the only way they can measure productivity is butts in seats?

      • By azuanrb 2026-02-18 3:53 (1 reply)

        $20 is not usable; you need the $100 plan at least for development purposes. That is a lot of money in some countries. In mine, it can be 1/10 of a monthly salary. Hard to get approval on it. It is still too expensive right now.

      • By MSFT_Edging 2026-02-18 12:33

        > A Claude subscription is 20 bucks per worker

        Only until the loans come due. We're still in the "Uber undercutting medallion cabs" part of the game.

      • By meager_wikis 2026-02-18 2:50

        If anything, the 'scariness' of an old computer probably protected the company in many ways. AI's approachability to the average office worker, specifically how it makes it seem easy to deploy/run/triage enterprise software, will continue to pwn.

      • By overgard 2026-02-18 5:31

        I've never looked at enterprise licensing, but regular license wise, a Claude subscription is actually $200 a month. I don't count the $20 or $100 tiers because they're too limited to be useful (especially professionally!)

      • By analog31 2026-02-18 15:46

        I'm not sure about the comparison either, but the cost of operating the LLM should include the worker's wages.

        I read an article yesterday about people working insane hours at companies that have bet heavily on AI. My interpretation is that a worker runs out of juice after a few hours, but the AI has no limit and can work its human tender to death.

      • By steveBK123 2026-02-18 20:45

        I keep seeing this argument thrown around, but API usage on business plans is token spend based and can be orders of magnitude more than this $20/head per month.

        My company is spending 20-50x that much per head easily from the firmwide cost numbers being reported.

        They had to set circuit breakers because some users hit $200/day.

      • By vessenes 2026-02-18 4:23 (1 reply)

        Agreed.

        We do have a way to see the financial impact -- just add up Anthropic's and oAI's reported revenues -> something like $30b in annual run rate. Given (stratospheric) growth rates, it seems reasonable to conclude informed buyers see economic and/or strategic benefit in excess of their spend. I certainly do!

        That puts the benefits to the economy at just around where Mastercard's benefits are, on a dollar basis. But with a lot more growth. Add something in there for MS and GOOG, and we're probably at least another $5b up. There are only like 30 US companies with > $100bn in revenues; at current growth rates, we'll see combined revenues in this range in a year.

        All this is sort of peanuts though against 29 trillion GDP, 0.3%. Well not peanuts, it's boosting the US GDP by 10% of its historical growth rate, but the bull case from singularity folks is like 10%+ GDP growth; if we start seeing that, we'll know it.

        All that said, there is real value being added to the economy today by these companies. And no doubt a lot of time and effort spent figuring out what the hell to do with it as well.

      • By delaminator 2026-02-18 9:14

        You still need to teach a 2020s employee how to use Claude.

        - protect yourself from data loss / secret leaks
        - what it can and can't do
        - trust issues & hallucinations

        Can't just enable Claude for Excel and expect people to become Excel wizards.

      • By TimByte 2026-02-18 10:06

        I think the subscription price is only the visible tip of the iceberg

      • By Zardoz84 2026-02-18 6:57 (1 reply)

        And nobody mentions that the "20 bucks per worker" is selling it at a loss. I'm waiting to see when they set a price that's expected to generate some net income...

      • By dahcryn 2026-02-18 7:36

        Not true at all; onboarding is complex too. E.g. you can't just connect Claude to your Outlook, or have it automate stuff in your CRM. As an office drone, you don't have the admin permissions to set up those connections at all.

        And that's the point here: value is handicapped by the web interface, and we are stuck there for the foreseeable future until the tech teams get their priorities straight and build decent data integration layers, and workflow management platforms.

      • By u1hcw9nx 2026-02-18 11:31

        I think crazygringo misrepresents the Solow paradox. None of the main explanations say it was cost that suppressed the productivity gains.

      • By geraneum 2026-02-18 10:08

        > A Claude subscription is 20 bucks per worker

        Talking about macroeconomics, I don’t think that number is correct.

      • By giantg2 2026-02-18 15:38

        "if using personal accounts"

        InfoSec and Legal would like a word with you...

      • By latchkey 2026-02-18 3:02 (1 reply)

        Like Uber/Airbnb in early days, this is heavily subsidized.

      • By y42 2026-02-18 8:50 (1 reply)

        Problem is that just having a Claude subscription doesn't make you productive. Most of those conversations happen in "tech-ish" environments. Not every business is about coding.

        Real life example: A client came to me asking how to compare orders against order confirmation from the vendor. They come as PDF files. Which made me wonder: Wait, you don't have any kind of API or at least structured data that the vendor gives you?

        Nope.

        And here you are. I am not talking about a niche business; I assume that's a broader problem. Tech could probably have automated everything like this 30 years ago. Still, businesses lack "proper" IT processes, because in the end every company is unique and requires particular measures to be "fully" onboarded to IT-based improvements like that.

      • By 8note 2026-02-19 6:30

        the expensive work is reorganizing organizations to be able to make use of the improved output.

      • By MagicMoonlight 2026-02-18 13:08

        A computer lets you save a fortune in storage rooms, admin staff, delivery fees etc. It lets you reinvent how everything runs.

        ChatGPT just lets you generate slop, that may be helpful. For the vast majority of industries it doesn’t actually offer much. Your meme departments like HR might be able to push out their slop quicker, but that doesn’t improve profitability.

      • By nurettin 2026-02-18 16:34

        > A Claude subscription is 20 bucks per worker

        I mean, sure. If you want to use it for 20 minutes and wait for two hours at a time.

      • By boxingdog 2026-02-18 15:51

        [dead]

    • By _aavaa_ 2026-02-18 3:21 (1 reply)

      For more on this exact topic and an answer to Solow’s Paradox, see the excellent The Dynamo and the Computer by Paul David [0].

      [0]: https://www.almendron.com/tribuna/wp-content/uploads/2018/03...

      • By gsf_emergency_6 2026-02-18 4:43 (1 reply)

        Stanford prof rebuts David's idea[0] that it's difficult to extract productivity from the data

        https://www.nber.org/system/files/working_papers/w25148/w251...

        I don't agree that real GDP measures what he thinks it measures, but he opines

        >Data released this week offers a striking corrective to the narrative that AI has yet to have an impact on the US economy as a whole. While initial reports suggested a year of steady labour expansion in the US, the new figures reveal that total payroll growth was revised downward by approximately 403,000 jobs. Crucially, this downward revision occurred while real GDP remained robust, including a 3.7 per cent growth rate in the fourth quarter. This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth.

        https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419...

        [0] on the basis that IT and AI are not general technologies in the mold of the dynamo, keyword "intangibles", see section 4 p21, A method to measure intangibles

        • By PowerElectronix 2026-02-18 11:58 (1 reply)

          GDP growth measurements have a big bias right now: tariffs-on, tariffs-off, tariffs-on-again policies are wrecking the import and export numbers. Consumer spending is up, too, so I too fail to see that GDP growth while jobs are not up as much as expected is due to AI making us more productive, and not just people spending more after months of increased savings due to tariffs.

    • By whynotminot 2026-02-18 4:45 (10 replies)

      It’s also pretty wild to me how people still don’t really even know how to use it.

      On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.

      The other day in real life I was talking to a friend of mine about ChatGPT. They didn’t know you needed to turn on “thinking” to get higher quality results. This is a technical person who has worked at Amazon.

      You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early.

      • By overgard 2026-02-18 5:23 (5 replies)

        I don't think "results don't match promises" is the same as "not knowing how to use it". I've been using Claude and OpenAI's latest models for the past two weeks now (probably moving at about 1,000 lines of code a day, which is what I can comfortably review), and it makes subtle hard-to-find mistakes all over the place. Or it just misunderstands well-known design patterns, or does something boneheaded. I'm fine with this! But that's because I'm asking it to write code that I could write myself, and I'm actually reading it. This whole "it can build a whole company for me and I don't even look at it!" is overhype.

      • By politelemon 2026-02-18 4:59

        You are assuming that we all work on the same tasks and should have exactly the same experience with it, which is of course far from the truth. It's probably best to start from that base assumption and work out the implications from there.

        As for the last example: for all the money being spent in this area, if someone is expected to perform a workflow based on the kind of question they're supposed to ask, that's a failure in the packaging and discoverability of the product; the leaky abstraction only helps those of us who know why it's there.

      • By harrall 2026-02-18 5:05

        I’ve been helping normal people at work use AI and there’s two groups that are really struggling:

        1. People who only think of using AI in very specific scenarios. They don’t know when you use it outside of the obvious “to write code” situations and they don’t really use AI effectively and get deflated when AI outputs the occasional garbage. They think “isn’t AI supposed to be good at writing code?”

        2. People who let AI do all the thinking. Sometimes they’ll use AI to do everything and you have to tell them to throw it all away because it makes no sense. These people also tend to dump analyses straight from AI into Slack because they lack the tools to verify if a given analysis is correct.

        To be honest, I help them by teaching them fairly rigid workflows like “you can use AI if you are in this specific situation.” I think most people will only pick up tools effectively if there is a clear template. It’s basically on-the-job training.

      • By illiac786 2026-02-18 14:45 (1 reply)

        > On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.

      I am completely flooded with comments and stories about how great LLMs are at coding. I am curious how you get a different picture than this. Can you point me to a thread or a story that supports your view? At the moment, individuals thinking AI cannot generate working code seem almost nonexistent to me.

      • By tstrimple 2026-02-18 5:11 (1 reply)

        > On hacker news, a very tech literate place

        I think this is the prior you should investigate. That may be what HN used to be. But it's been a long time since it has been an active reality. You can still see actual expert opinions on HN, but they are the minority more and more.

      • By 3form 2026-02-19 9:18

        >You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early.

        What makes you think this will ever change? Have you seen how well people know their existing tools?

      • By mrtksn 2026-02-18 5:16 (2 replies)

        In a WhatsApp group full of doctors, managers, journalists, and engineers (including software), aged 30 to 60, I asked if anyone had heard of openclaw. Only 3 people had heard of it from influencers, and none had used it.

        But from my social feed the impression was that it is taking over the world:)

        I asked because I have been building something similar for some time, and I thought it was over, that they had been faster than me. But as it appears, there's no real adoption yet. Maybe there will be some once they release it as part of ChatGPT, but even then it looks too early, as few people actually use the more advanced tools.

        It’s definitely in very early stage. It appears that so far the mainstream success in AI is limited to slop generation and even that is actually small number of people generating huge amounts of slop.

      • By bigbuppo 2026-02-18 6:03 (1 reply)

        And it will get worse once the UX people get ahold of it.

      • By KellyCriterion 2026-02-18 6:32 (1 reply)

        A neighbour of mine has a PhD and works in research at a hospital. He is super smart.

        Last time he said: "yes yes I know about ChatGPT, but I do not use it at work or home."

        Therefore, most people won't even know about Gemini, Grok, or even Claude.

      • By slopinthebag 2026-02-18 5:21 (6 replies)

        > I see people thinking modern AI models can’t generate working code.

        Really? Can you show any examples of someone claiming AI models cannot generate working code? I haven't seen anyone make that claim in years, even from the most skeptical critics.

    • By kamaal 2026-02-18 2:38 (1 reply)

      One part of the system moving fast doesn't change the speed of the system all that much.

      The thing to note is, verifying if something got done is harder and takes time in the same ballpark as doing the work.

      If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red traffic lights.

      • By hawaiianbrah 2026-02-18 5:58

        Really? I disagree that verifying is as hard as doing the work yourself. It’s like P != NP.

    • By __jf__ 2026-02-18 7:03 (1 reply)

      Paul Strassmann wrote a book in 1990 called "Business Value of Computers" that showed that it matters where money on computers is spent. Only firms that spent it on their core business processes showed increased revenues whereas the ones that spent it on peripheral business processes didn't.

      • By Underqualified 2026-02-18 10:09

        This is my feeling about both IT and AI. It enables companies to do a lot of things which don't really bring value. One of the biggest use cases for AI in the company I work for now is Power BI report generation. Fine, but a couple of years ago we didn't even have all these graphs and reports. I'm not sure they bring actual value, since I see decisions still being made mostly on intuition.

    • By masteruvpuppetz 2026-02-18 7:00

      An old office colleague used to tell us there was a time when he'd print a report prepared with Lotus 1-2-3 (ancient Excel) and his boss would verify the calculations on a calculator, saying computers are not reliable. :o

    • By Waterluvian 2026-02-18 4:59 (1 reply)

      Wow I didn’t realize that. But I always thought it. I was bewildered that anyone got any real value out of any of that pre-VisiCalc (or even VisiCalc) computer tech for business. It all looked kinda clumsy.

      • By KellyCriterion 2026-02-18 6:42

        (pre-)VisiCalc: You have to understand that the primary users (accountants etc.) do not care about how a thing looks in their working process: if a tool helps them, they will use it even if it's ugly by frontend aesthetic standards :-)

        (Think about those old black/white or green mainframe screens - horrible looking, but they get the job done)

    • By unkulunkulu 2026-02-18 9:24

      Ok, this article inspired some positivity in my view. Here comes, of course a disclaimer that this is just "wishful thinking", but still.

      So we are in the process of "adapting a technology". Welcome, keep calm, observe, don't be ashamed to feel emotions like fear, excitement, anger and all else.

      While adapting, we learn how to use it better and better. At first, we try "do all the work for me", then "ok, that was bad, plan what you would do, good, adjust, ok do it like this" etc etc.

      A couple of years into the future this knowledge is just "passed on". If productivity grew and we "figured out how to get more out of the universe", then no jobs had to be lost, just readapted. And "investors" get happy not by "replacing workers", but by "reaping win-win rewards" from the universe at large.

      There are dangers of course, like "maybe this is truly a huge win-win, but some loses can be hidden, like ecology", but "I hope there are people really addressing these problems and this win-win will help them be more productive as well".

    • By overgard 2026-02-18 5:19 (1 reply)

      The coding tools are not hard to pick up. Agent chat and autocomplete in IDEs are braindead simple, and even TUIs like Claude Code are extremely easy to pick up (I think it took me a day?). And despite what the vibers like to pretend, learning to prompt them isn't that hard either. Or, let me clarify: if you know how to code, and you know how you want something coded, prompting them isn't that hard. I can't imagine it'll take that long for an impact to be seen, if there is a major impact to be seen.

      I think it's more likely that people "feel" more productive, and/or we're measuring bad things (lines of code is an awful way to measure productivity -- especially considering that these agents duplicate code all the time so bloat is a given unless you actively work to recombine things and create new abstractions)

      • By kylebyte 2026-02-18 5:39 (1 reply)

        It reminds me a lot of Adderall's effect on people without ADHD. A pretty universal feeling that it's making you smarter, paired with no measurable increase in test scores.

    • By TimByte 2026-02-1810:03

      Yet with IT, the bottleneck was largely technical and capital-related, whereas with AI it feels more organizational and cognitive

    • By heresie-dabord 2026-02-184:291 reply

      > It wasn't until the mid-to-late 1990's that information technology finally started to show clear benefit to the economy overall.

      The 1990s boom was in large part due to connectivity -- millions[1] of computers joined the Internet.

      [1] _ In the 1990s. Today, there are billions of devices connected, most of them Android devices.

      • By ericfr11 2026-02-184:54

        Android in the '90s? Not really.

    • By joering2 2026-02-184:15

      > And so we should expect AI to look the same

      Is that assumption somewhat substantiated? I recall learning the history of AI at university in 2001: the initial frameworks were written in the '70s, and the prediction then was that we would reach human-like intelligence by 2000. Just because Sama came up with this somewhat-breakthrough AI, it doesn't mean that equal leaps will happen on a monthly or annual basis going forward. We may well not make more huge leaps, or reach what some call human-level intelligence, even in the next 10 years or so.

    • By JMKH42 2026-02-1815:121 reply

      > And so we should expect AI to look the same

      Maybe! Or it might never pan out, or it may pan out way better. Complicated things like this rarely turn out the way people expect, no matter how smart.

      • By bunderbunder 2026-02-1815:34

        I’m thinking survivorship bias here. “Information Technology” is such a wide term, and we immediately think of the IT we currently use. Many of us can’t even remember all the blind alleys we wasted resources on in the ‘80s, especially those of us who weren’t there. I count myself among that group because I was a kid and didn’t pay much attention to business.

        But I can say that, judging by historical artifacts, a lot of it was along the same broad lines as AI. And we maybe don’t realize how serious people were about it back then. The technology that actually changed the world was so comparatively boring and pragmatic that the stuff that was being hyped back then seems comically overwrought. It’s easy to assume it must have been a joke all along.

    • By matsemann 2026-02-189:43

      Is this like the hotels that first jumped on the wifi bandwagon? They spent lots of money up front on expensive tech. Years later, anyone could buy a cheap router and set it up, so every hotel had wifi. But the original high-end hotels that were first out with wifi, and paid a lot for it, have the worst and oldest wifi and charge guests for it, still trying to recoup the costs.

    • By globular-toast 2026-02-188:29

      If things like computer-aided design and improved supply chain management, for example, make manufactured goods last longer and cause less waste, I would expect IT to cause productivity to go down. I drive a 15 year old car and use a 12 year old PC. It's a good thing that productivity goes down, or stays the same.

    • By ozgrakkurt 2026-02-183:32

      I don’t think LLMs are similar to computers in terms of productivity boost

    • By dang 2026-02-1818:03

      Thanks! We've swapped the baity title with that phrase above.

    • By calvinmorrison 2026-02-183:082 reply

      > it's helping lots of people, but it's also costing an extraordinary amount of money

      Is it fair to say that Wall Street is betting America's collective pensions on AI...

      • By autoexec 2026-02-186:10

        They're betting a lot more than that, but since all their chips are externalities they don't care.

      • By HWR_14 2026-02-183:491 reply

        Very few people have pensions anymore. People now direct their own retirement funds.

    • By pragmatic 2026-02-1814:49

      You’re missing the true value, the network.

      Widespread internet access turned expensive toys (PCs) into useful assets.

    • By bitgeist 2026-02-1815:57

      I like your username.

    • By arisAlexis 2026-02-189:02

      Only it's much more exponential

    • By killingtime74 2026-02-184:161 reply

      Productivity may rise with time, and costs may come down. The money is already spent.

      • By johnnienaked 2026-02-1817:09

        Some of the money is spent. What happens when better models, more efficient cooling techniques, and other technologies hit? Seems like the best strategy at this point isn't dumping your entire FCF into datacenters, but wait and see if there's even a viable business efficiency improvement first.

    • By nz 2026-02-1817:46

      I mean the productivity paradox was only temporarily remedied. Around 2005 we entered a second version of the paradox, and it persists to this day. I'll note that 2005 was when the internet became dominated by walled gardens and social media, _and_ it was the last year that people got to use the internet without smartphones (LG released a smartphone in 2006, and Apple released the iPhone in 2007).

      The combination of attention-draining social media walled gardens, and the high performance pocket-computers (which are really designed for consumption instead of productivity), created a positive feedback loop that helped destroy the productivity that we won by defeating the paradox in the 1990s. And we have been struggling against this new paradox for twenty years, since. AI seems like it should defeat the paradox because it is a kind of hands-free system, perfect for mobile phones -- but this is really just a very expensive solution to a problem that we have created and allowed to fester. We could just shun the walled gardens, and demand to be paid for our attention and data.

      The new productivity paradox (which I do not think AI in its current form can fix[1][2]), is the price that we pay for a prosperous and valuable advertising industry. And as long as the web is seen as an ad-channel, and as long as the web is always vibrating in your pocket, we will keep paying this price. We will eventually end up (metaphorically) lobotomizing our children, and families, and communities, so that the grand-children of ad-executives and tech-bros and frat-bros can grow up healthy, psychologically stable, educated, and comfortably wealthy. (Brain drain: now available literally everywhere).

      [1]: It is telling that most LLMs are centralized, and are most useful as search-engines/information-retrieval-systems. The centralization makes them _spyware_, and their ability to directly answer any question encourages users to actually ask direct questions, instead of stringing search-terms together. This makes the prompts high-signal advertising data (i.e. instead of inferring what you are looking for from the search-string, these companies can see _exactly_ what you are looking for and why -- and with LLMs, they can probably turn these prompts into joint-probability-tables or whatever other kind of serialization they need to figure out which products to sell you (either on the web or directly in the response to your prompt)).

      [2]: As far as copyright infringement goes, LLM outputs may require mass clean-room rewrites (so your productivity, as pathetic as it already is, now gets _halved_ long term) of text, prose, code, and anything else that is produced with them, because of how copyright law works. In legal arts this is called _the fruit of the poisonous tree_, and any short-term productivity gains may become long-term liabilities that need to be replaced due to _legal mandate_ -- so even if LLMs can eventually produce perfect and faultless outputs, the copyright laws _in all 200+ countries_ would have to be torn down and rebuilt (and this will certainly come at great expense).

    • By gsf_emergency_6 2026-02-183:43

      Fwiw, Fortune had another article this week saying this J-curve of "general purpose technology" is showing up in the latest BLS data:

      https://fortune.com/2026/02/15/ai-productivity-liftoff-doubl...

      Source of the Stanford-approved opinion: https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419...

      https://www.apolloacademy.com/waiting-for-the-ai-j-curve/


  • By tabs_or_spaces 2026-02-186:214 reply

    My experience has been

    * If I don't know how to do something, llms can get me started really fast. Basically they compress the time taken to research something down to a small fraction.

    * If I know something well, I find myself trying to guide the llm to make the best decisions. I haven't reached the state of completely letting go and trusting the llm yet, because it doesn't make good long-term decisions.

    * When working alone, I see the biggest productivity boost from ai; that's where I can get things done.

    * When working in a team, llms are not useful at all and can sometimes be a bottleneck. Not everyone uses llms the same way. Sharing context as a team is way harder than it should be. People don't want to collaborate. People can't communicate properly.

    * So for me, solo engineers or really small teams benefit the most from llms. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This currently matches what I'm seeing in posts these days.

    • By TimByte 2026-02-1810:082 reply

      I suspect the real breakthrough for teams won't be better raw models, but better ways to make the "AI-assisted thinking" legible and shareable across the group, instead of trapped in personal prompt histories

      • By datsci_est_2015 2026-02-1812:36

        This seems like a problem simply stated but not simply solved. I think Grokipedia or whatever it was called was a real exercise in “no one cares about cached LLM output”. The ephemeral nature of LLM output is somehow a core property of its utility. Kind of like I never share a Google search with a coworker, I share the link I found.

      • By giancarlostoro 2026-02-1815:17

        I sort of have this indirectly solved with a project I'm working on inspired by Beads. One thing I added is as you have the LLM work on tasks, you can sync them directly to GitHub, I would love to add other ticketing / task backends to it, but I mostly just use GitHub. You can also create them on GitHub and sync them down and claim a task (the tool will post a comment on GitHub that you've claimed the work). I can see people using it to collaborate easier, but for the time being it's just me using it for myself. ;)

        These tasks become your prompt once refined. I basically braindump to Claude, have it make tasks from my brain dump. Then I tell Claude to ask me clarifying questions, it updates the tasks and then I have Claude do market research for some or all tasks to see what the most common path is to solve a given problem and then update the tasks.

        https://github.com/Giancarlos/guardrails

    • By aurareturn 2026-02-186:329 reply

      The future of work is fewer human team members and way more AI assistants.

      I think companies will need fewer engineers but there will be more companies.

      Now: 100 companies that employ 1,000 engineers each.

      What we are transitioning to: 1,000 companies that employ 100 engineers each.

      What will happen in the future: 10,000 companies that employ 10 engineers each.

      Same number of engineers.

      We are about to enter an era of explosive software production, not from big tech but from small companies. I don't think this will only apply to the software industry. I expect this to apply to every industry.

      • By storus 2026-02-1810:091 reply

        It will lead to a hollowing out of substance everywhere. The constant march toward more abstraction and simplicity will inevitably end with AI doing all the work and nobody understanding what is going on underneath, turning technology into magic again. With every single move up the abstraction ladder (machine code -> C -> Java -> JavaScript -> async/await -> ... -> LLM code generation) we have watched people lose touch with how things work, producing generations of devs who are more and more detached from the metal, who live in a vastly simplified landscape, and who don't understand the trade-offs of the abstractions they are using. That leads to unsolvable problems in production, which inevitably arise from the choices those abstractions made for them.

      • By Wilder7977 2026-02-1811:473 reply

        And those companies will do what? Produce products in uber-saturated markets?

        Or will 9,900 more products or markets magically be created, all of them successful?

      • By matwood 2026-02-188:093 reply

        > smaller companies

        And large companies. The first half of my career was spent writing internal software for large companies. I believe it's still the case that the majority of software written is internal software. AI will be a boon for these use cases, as it will make it easier for every company, big and small, to have custom software for its exact use case(s).

      • By itake 2026-02-186:442 reply

        yeah, I agree.

        When Engineering Budget Managers see their AI bills rising, they will fire the bottom 5-10% every 6-12 months and increase the AI assistant budget for the high performers, giving them even more leverage.

      • By mirsadm 2026-02-188:181 reply

        This seems like a bot comment.

      • By lnsru 2026-02-187:191 reply

        That means the system will collapse in the future. Right now, good programmers are made from a bunch of people; the rest go into marketing, sales, agile, or other not-really-technical roles. When that initial crowd is gone, there will be no experienced users of AI. Crappy, inexperienced developers will make more crap, without prior experience or the ability to judge design decisions. Basically: no seniors without juniors.

      • By kilroy123 2026-02-1810:34

        I think we were headed that way before LLMs came onto the scene.

        LLMs just accelerated this trend.

      • By roncesvalles 2026-02-1810:51

        By and large "AI assistant" is not a real thing. Everyone talks about it but no one can point you to one, because it doesn't exist (at least not in a form that any fair non-disingenuous reading of that term would imply). It's one big collective hallucination.

      • By vjk800 2026-02-188:161 reply

        > I think companies will need fewer engineers but there will be more companies.

        This would be strange, because all other technology development in history has taken things in the exact opposite direction: larger companies that can do things at scale and outcompete smaller ones.

    • By stephenr 2026-02-1811:181 reply

      > llms can get me started really fast. Basically it distills the time taken to research something

      > the llm doesn't make good long term decisions

      What could possibly go wrong, using something you know makes bad decisions as the basis for learning something new?

      It's like a dietician who, when a client asks how to cook the kinds of meals that have been recommended, tells them to go watch McDonald's staff.

      • By datsci_est_2015 2026-02-1812:561 reply

        I'm bearish on AI, but I still think this is disingenuous. My grade-school math teachers were probably not well versed in calculus and real analysis, but they helped me learn my times tables just as well.

        AI is great at exposing you to what you don’t even know you don’t know: your personal unknown unknowns, the complexity you’re completely unaware of.

    • By nutjob2 2026-02-1813:25

      To me the biggest benefit of LLMs has always been as a learning tool, be it for general queries or "build this so I can get an idea of how it works and get started quickly". There are so many little things that you need to know when trying anything new.

  • By Herring 2026-02-182:159 reply

    My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast; we just waste time in meetings or waiting for approval, a lot of tasks can't be parallelized, etc. Before upgrading, you need to know whether you're I/O-bound or CPU-bound.
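    The analogy above can be pushed one step further with Amdahl's law: if only part of an engineer's week is the "CPU work" being sped up, the overall gain has a hard ceiling set by the coordination that remains. A toy sketch (the 50% split and the 10x figure are made-up numbers for illustration, not from the comment):

```python
# Amdahl's-law toy model: speeding up only the "coding" fraction of a
# work week, while meetings/approvals (the serial part) stay as slow as ever.

def org_speedup(p_accelerated: float, tool_speedup: float) -> float:
    """Overall speedup when only a fraction p_accelerated of the work
    gets tool_speedup times faster; the rest is unchanged."""
    return 1.0 / ((1.0 - p_accelerated) + p_accelerated / tool_speedup)

# Suppose half the week is coding and a tool makes coding 10x faster:
print(round(org_speedup(0.5, 10.0), 2))  # 1.82 -- nowhere near 10x
# Even an infinitely fast coding tool tops out at 2x overall:
print(round(org_speedup(0.5, 1e9), 2))   # 2.0
```

    In other words, if the org is I/O-bound (coordination-bound), upgrading the CPUs barely moves the throughput.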

    • By al_borland 2026-02-184:342 reply

      When my company first started pushing for devs to use AI, the most senior guy on my team was pretty vocal about coding not being the bottleneck that slowed down work. It was an I/O issue, and maybe a caching issue as well from too many projects going at the same time with no focus… which also makes the I/O issues worse.

      • By noosphr 2026-02-186:361 reply

        Ironically, using AI on records of meetings across an org is amazing. If you can find out what everyone is talking about, you can talk to them.

        Privacy is nonexistent (every word said and every message sent at the office is recorded), but the benefits we saw were amazing.

      • By koiueo 2026-02-188:541 reply

        So how is it going for him? Was he able to prove his point?

    • By kjellsbells 2026-02-182:311 reply

      Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data into the cache, that's not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.

      • By canyp 2026-02-182:55

        L2? I'm hot L1 material, dude.

        But I like your and OP's analogy. Also, the productivity claims are coming from the guys in main memory or even disk, far removed from where the crunching is taking place. At those latency magnitudes, even riding a turtle would appear like a huge productivity gain.

    • By notepad0x90 2026-02-187:121 reply

      In my opinion, you're very wrong. There is typically lots of good communication -- one way. The stuff that doesn't get communicated down to the worker bees is withheld intentionally. "CPUs" aren't all that fast either, unless you make them fast by providing incentives. If you're a well-paid worker who likes their job, I can see why you would think that, but most people aren't that.

      Meetings are work, just as IPC and network calls are work. Just because they're not fun, or not what you like to do, doesn't mean they're any less work.

      I think you're analyzing things from a tactical perspective, without considering strategy. For example, have you considered that it might not be desirable for CPUs to be merely fast, or fast at all? Is CISC faster than RISC? Different architectural choices follow from different strategic goals, right?

      If you're an order picker at an Amazon warehouse, raw speed is important: being able to execute a simpler, more fixed set of instructions (RISC) at greater speed is more desirable. If you're an IT worker, less so. IT is generally a cost center, except at companies that sell IT services or software, and if you're in a cost center you exist for non-profit-related strategic reasons, such as helping the rest of the company work efficiently, be resilient, compete, and be secure. Some people exist in case they're needed some day; others are needed critically but not frequently; yet others are needed frequently but not critically. Being able to execute complex and critical tasks reliably and in short order is more desirable for some workers. Being fast in a human context also means being easily bored, or it can mean lots of bullshit work has to be invented to keep the person busy and happy.

      I'd suggest taking that compsci approach but considering not just the varying tasks and workloads, but also the diversity of goals and use cases of the users (decision makers and managers in companies). There are deeper topics with regard to strategy and decision making, such as the state machines of incentives and punishments, and decision-maker organization (hierarchical, flat, hub-and-spoke, full-mesh, etc.).

      • By zelphirkalt 2026-02-1810:121 reply

        Meetings can be work, but often they are a waste of time. Often they happen only because the company has not found a better way to structure itself, one that is also accepted by the management layer, which often has a profound fear of losing control and likes to micromanage. If you can zone out for most of a meeting and not experience negative effects, then the meeting was a waste of your time.

    • By amrocha 2026-02-182:272 reply

      Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?

    • By TimByte 2026-02-1810:11

      In some cases it might even make the mismatch worse. If one person can produce drafts, specs, or code much faster, you just create more work for reviewers, approvers, and downstream dependencies, which increases queueing.
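      A minimal sketch of that queueing effect, with made-up throughput numbers: once drafts arrive faster than a fixed pool of reviewers can clear them, the review backlog grows week over week instead of staying flat.

```python
# Hypothetical review pipeline: drafts arrive each week, a fixed review
# capacity clears them; anything unreviewed carries over as backlog.

def backlog_after(weeks: int, drafts_per_week: int, reviews_per_week: int) -> int:
    """Backlog size after `weeks`, starting from an empty queue."""
    backlog = 0
    for _ in range(weeks):
        backlog = max(0, backlog + drafts_per_week - reviews_per_week)
    return backlog

# Authors slower than reviewers: the queue stays empty.
print(backlog_after(10, 4, 5))   # 0
# A 3x authoring speedup with the same reviewers: 70 drafts waiting.
print(backlog_after(10, 12, 5))  # 70
```

      The point of the toy model: the individual author got 3x faster, but team throughput is still capped by review capacity, so the "productivity gain" shows up as queue length.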

    • By 8note 2026-02-182:292 reply

      Operationally, I think new startups have a big advantage in setting up to be agent-first. They might not be as good as the old human-first shops, but they'll be much cheaper and more nimble at taking advantage of model improvements.

      • By kamaal 2026-02-182:431 reply

        Startups mostly move fast by skipping the ceremony that large corps are obliged to perform to keep a billion-dollar product from melting down. Startups can do this because they don't have a billion dollars to start with.

        Once you do have a billion-dollar product, protecting it requires spending time, money, and people, because building a new one is a lot more effort than keeping the existing one from melting down.

    • By MrDarcy 2026-02-183:46

      Interesting analogy to explore: a distributed system as compared to organizational dynamics.

    • By hackable_sand 2026-02-188:20

      None of this fits

    • By Haven880 2026-02-182:26

      I think both. Most organizations lack someone like Steve Jobs to prime their product lines; Microsoft is a good example, where most of their products over the years have been meh. Meetings are pervasive, even more so in most companies due to the convenience of MS Teams. But currently they face reduced demand in a softer market compared to 2-3 years ago. If revenue still holds, or at least shows no negative growth, while they lay off many, I would surmise that AI is helping. But in corporate, it only counts if it directly contributes to sales numbers.

HackerNews