Several factors are converging to make this different from previous tech booms.
First, it's just too much too fast. Both the companies that make AI their business, like OpenAI, and the companies bolting AI onto everything have been forceful and abrasive in their pushing. Normally a technology has more time to seep in and organically normalize with people before the pushing begins, but this time the gas pedal was floored shortly after OpenAI shipped a usable MVP.
Second, the value is far from clear to a lot of people, partly because of lazy bolt-on integrations, but also because people don't actually want/need it for many of the tasks it's being sold for and because it's not good/reliable for some tasks.
Third, as noted in the article, the surrounding environment isn't right. Many of your average people feel like the dog in the "this is fine" meme[0] and aren't really in the mood to be sold something that could ultimately further concentrate wealth and make their lives harder. It's like parking an ice cream truck in front of a burning office building and wondering why nobody running out is buying a cone.
I say this as someone who finds AI useful for some things. All of this is pretty plainly visible. Either the big names in the industry are horrifically out of touch or they're pretending to not see it in hopes of faking it until they make it, I'm not sure which.
[0]: https://www.npr.org/2023/01/16/1149232763/this-is-fine-meme-...
Anyone who's paid attention to the last 15+ years of tech and business knows that it's all about capture and extraction. All the feel-good language about "democratizing" tech or "making the world more open and connected" or "don't be evil" is just a smokescreen for people who want to bring about modern feudalism.
It's hard to see AI as anything but the latest accelerant for that.
We don't know if there will be software improvements leaving the AI data centers as stranded assets.
We don't know if software products like the Adobe suite will become irrelevant or be cloned with vibe coding.
The assumption that SOTA inference won't run locally in 5 years is far from certain.
I think things are pretty clear. I don’t know when the markets will agree, sadly.
We do know technological advancements will leave the data centers as stranded assets. There’s not enough money in the most optimistic revenue projections to pay for them, and models are simultaneously getting better and cheaper to operate.
Adobe (and similar companies) will either improve or be replaced by vibe coding. I think the assumption a lot of wall street and management is making is that Adobe can replace itself with vibe coding and vibe customer support, and then not be simultaneously out-innovated by a few dozen companies founded by folks they laid off.
Local inference is 6-12 months behind SOTA. If that holds, you can have 2029 SOTA locally on a Raspberry Pi 8, or 2030 SOTA for $500/month (in 2026 dollars). If 2030 SOTA is qualitatively better at that point, then we’ll be way past AGI, and the economy will be unrecognizable.
It is basically impossible for AI software improvements to devalue the AI compute investments.
It's the other way around, software improvements make the hardware more valuable. Let's say that one unit of compute can generate one unit of value. As the software improves on any of the principal axes (cheaper cost for same quality, or new capabilities that you could previously not get for any price), that same unit of compute will produce more value.
What would threaten those compute investments? Basically order of magnitude improvements in the hardware, but that kind of thing will take longer to happen than the projected lifetime of the hardware. (Or the demand for AI evaporating, but that tends to be an issue of faith that is hard to have a useful discussion on.)
That assumes all existing LLM investment, divided by all existing LLM usage, is net valuable as a baseline. If that isn't yet the case, then software improvements may or may not push those investments over the valuable threshold.
That's an interesting take.
It does assume that more intelligence is both possible and useful -- which is probably a safe assumption.
Exactly, my view is intellectually honest because it's falsifiable. I would love to live in a world where tech largely respects and empowers end-users instead of trapping them in engineered dependency. Tech companies just need to act humanely.
That's just not the world we live in currently.
I'm not sure I see that with the big tech I'm an end user of, the biggest being Google. I get free search, email, youtube. It's provided a lot of value for me and never really caused problems at that end.
I think the downsides of Google are more about driving competitors, and the companies that have to pay them, out of business. The online travel business, for example, has suffered because they need to pay Google a lot to get any customers.
I'm not sure what the answer is. Maybe some monopoly laws that make their service worse so others can compete?
> and because it's not good/reliable for some tasks.
It's even worse than that. I'm not aware of any tasks which it's good at. Even after several years of effort, LLMs suck at coding, the thing they are supposedly best at. Maybe it'll get good, but right now it just isn't.
Meanwhile, according to Jensen Huang, the reason growth isn't on track is all the negative skeptics:
> In Mr. Huang’s view, the critics want regulations that will hamper the A.I. industry and slow it down. Meanwhile, the skeptics are “scaring people from making the investments in A.I.” that would make it better.
What a weak, out of touch statement. This guy is at the helm of the most valuable company in world history, and according to him the thing that's threatening their growth is... negative vibes?
Where are the adults?
... Bloody hell, he actually said that? I mean, even if he actually believed it, it's a bizarre thing to say; makes him look ridiculous.
I wouldn't say it sucks, I'd say it's very far away from what people with vested interest claim it to be.
If you’re not aware of what it’s good at, given what very smart people are saying and doing with it, I think you’re either not paying attention or aren’t being intellectually honest with yourself.
Or those people aren't actually very smart, or they're caught up in the hype, or since they are very smart they exist in a mode where their experience doesn't translate to normal, everyday situations.
It seems that AI coding tools are very sensitive to codebase structure. A monolith with a relatively simple, straightforward structure is the happy path; a bird's nest of microservices is not. If your team has taken the time and effort to structure the codebase in a way that's amenable to AI, and you invest in the tooling, and you keep up that effort over time, then AI does seem to work. Not the "10x productivity gain" they try to sell us, but maybe >1.0x. It's not clear, though, that AI provides any speedup whatsoever for the vast majority of developers. That's the problem: if it only works for the top 5% or whatever, the addressable market is very, very small.
Instead of appealing to authority you could have given direct examples of how it's transformed your ways of working, that could've continued the conversation somewhere.
There are a lot of smart people with eggs in this basket who stand to benefit from boosting AI hard.
I've seen a lot of very rich people* say it's amazing, it's changing my life, it's going to change your lives (it's going to take away all your jobs so we don't have to pay you anymore), we're about to hit the singularity and start a new golden age with it.
I've seen some apparently-smart people say they're using it for all kinds of things and it's doing great for them.
I've seen roughly the same number of apparently-smart people say they've tried it, they've given it a really good shot, but it doesn't work well for them, and in fact, when they tried, it made them less productive.
When I've personally tried it (almost exclusively on local generation), I've found it entertaining, but not reliable enough to use for more than that. And I do not trust any of the hosted models not to take everything I feed them and monetize it, including by selling it to organizations like ICE which I find utterly reprehensible.
So while I'm not bigstrat2003, about me, at least, you're wrong: I am paying attention, and I'm being intellectually honest. I'm also evaluating it for more than just "does this make me more money in the short term?"
* Who just so happen to be heavily invested in AI companies...
Like your third point, I think people have "novelty fatigue". The iPod may have been the last bit of tech people actually enjoyed. Everything else seems to have come with a lot of baggage: a heaping helping of shit (cell phones and the internet included).
Thanks for articulating exactly how I feel.
The way I look at it, from a quality perspective, this is the worst it will ever be. I certainly won't ever go back to coding without AI. If you extrapolate from there, with the general need for tokens compounding with demand, the only direction is up, whether you agree with it or not.
We can debate endlessly whether the horse and buggy is better than the car, or the cell phone will replace the film camera. But at the end of the day, history has shown that none of that matters. We're better off just agreeing to it and working to improve it.
We can debate whether conventional weapons are better than nuclear weapons or not. But none of that matters. We’re better off just agreeing to it and working to improve it…
The problem with your analogies is that there is no path where constant improvement to cars leads to anything but better outcomes for humans.
There is no realistic or likely path where improvements to cellphones leads to anything but better outcomes for humans.
However, if AI keeps getting better to the extent we can imagine, ie Super Intelligence, the outcomes are more likely to be extinction level negative than positive.
I have a much more positive outlook on humanity and I don't share the same doomsday scenario. For me, I made the conscious decision to turn off the news after covid. Something obviously flipped in that period where the negativity just compounded, and it just feeds into itself. I watch my own family sucked into the cycle of "what did trump do now!" kind of stuff and I found it really wasn't healthy for my own mental being. Ignorance is bliss.
Negativity exists outside of your own perception of it, or in your terms, ignorance is bliss.
You can always choose to believe things will be good, but after a certain point it stops being a belief and becomes a mental illness that requires medication. When reality and your perception diverge too much, you can believe anything.
In the current economic systems, AGI would bring about mass starvation, war, and enslavement. We would require a complete destruction of capitalism to make AGI not the worst thing to happen to humanity since the black plague. Now, look at our leaders. How likely is that?
If you're happy because your perception of reality is very far off from reality, yes, that is psychosis. If I see myself in a mansion while I'm actually in a box under the freeway, that is a hallucination.
You're conflating an emotion (happiness) with perception (living somewhere you don't live).
You certainly can give someone medication because they have a negative emotion, but you don't typically medicate positive emotions unless they are affecting their life in a negative way.
> I certainly won't ever go back to coding without AI.
That’s not up to you to decide. Whatever company’s service you are using can and will eventually pull the rug.
That’s why open models and distributed inference are so important. Can’t pull kimi25 from me.
> We can debate endlessly whether the horse and buggy is better than the car, or the cell phone will replace the film camera. But at the end of the day, history has shown that none of that matters. We're better off just agreeing to it and working to improve it.
I don't know why people keep pointing to history to argue adoption is inevitable. Isn't history littered with no-code solutions that no one uses anymore?
I spent a year in high school with this at the top of the chalk board of my history class: "Those who don't study history are destined to repeat it"
The internet has been entwined in my life since 1991, when I got my first email. Before that it was BBS's. The context and parallels that I'm witnessing now very much align with what I've seen before over the last 35 years. I've bet on some history based predictions in this cycle that few else saw, that absolutely have come true.
This isn't a no-code solution, and not even close to that. It is very much of a more code than ever solution.
> I spent a year in high school with this at the top of the chalk board of my history class: "Those who don't study history are destined to repeat it"
I agree. Just make sure you're not cherry-picking your data. Make sure you include the NFT hype cycle in your corpus.
And I suppose Germany in the 1930s taught you to avoid the National Socialist German Workers Party?
Who are the actors working to "improve it" though?
You have big tech oligarchs salivating at the idea of moar profits by firing a bunch of people.
You have elected officials who might mean well but won't be able to react quickly and don't understand the nuance of a lot of tech things.
You have ordinary people trying to figure out how to make use of this stuff without losing their own jobs. But they don't have a ton of influence.
> You have big tech oligarchs salivating at the idea of moar profits by firing a bunch of people.
For big tech to start relying on vibe coding without code reviews etc is a huge risk.
Big tech has so much red tape preventing people from getting stuff done. Security reviews needed, etc. This inertia will hold back even a super intelligence from getting stuff done.
Some nerds in a garage trying to apply vibe coding to a problem won't have this red tape.
Red tape is necessary in big orgs because you can't have 100k people running around shipping half-broken, semi-supported software with security holes. So you establish release processes, approvals, code reviews, etc.
All I'm saying is: big tech is also at risk of being disrupted by AI.
I do agree with you, there is too much grift, but that's to be expected.
I'm one of the actors and I sided with AMD early on.
Probably because at that time, normal people weren't rolling off the back of a massive inflationary period in which they could no longer afford to look forward to having enough money to buy/use/participate in whatever cool new technology arose.
People did not. How quickly everyone forgets.
There was constant sneering at dot-com businesses and venture capitalists. There was FuckedCompany.com [0]. The Pets.com superbowl ad was seen as a cautionary tale.
Startup.com [1] portrayed paying parking tickets online as Sisyphean. People thought the internet was for porn and weirdos. Krugman famously said "By 2005 ... it will become clear that the Internet's impact on the economy has been no greater than the fax machine's." [2]
Clay Shirky: "The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works." [3]
A lot of the above was from the mid-to-late 1990s but, in my opinion, having lived through it, it carried over into the 2000s, with people being highly skeptical and quick to engage in schadenfreude whenever a company didn't live up to the hype.
[0] https://en.wikipedia.org/wiki/Fucked_Company
[1] https://en.wikipedia.org/wiki/Startup.com
[2] https://web.archive.org/web/20030226083257/http://www.redher...
[3] https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirv...
People sneered at dotcoms but they weren't afraid of them. People are afraid of AI. Maybe they shouldn't be, but they are.
The claims of "adopt Internet/AI or be left behind" were similar but for some reason the reactions are different.
You're completely forgetting "all your jobs are going to get outsourced to India". There was panic that internet connectivity would make local talent obsolete.
Microsoft was in full swing with trying to strangle the computing space. "Embrace, extend, extinguish" was a term coined from that era. Ballmer called Linux "a cancer". [0]
People were in a panic about Napster and how the internet would steal billions of dollars.
It does seem like people are much more against AI now than they were against the dot-com boom then, but it all looks and sounds very familiar to me.
[0] https://www.theregister.com/2001/06/02/ballmer_linux_is_a_ca...
> You're completely forgetting "all your jobs are going to get outsourced to India". There was panic that internet connectivity would make local talent obsolete.
That was largely in the latter part of the boom and part of the bust afterward. I recall some words from Carly Fiorina being said (“Forget the engineers”) that seemed to foretell the more extractive future.
> People were in a panic about Napster and how the internet would steal billions of dollars
lol, absolutely not. The music industry was afraid of this, yes. The normies? Couldn't get enough of it.
Depends on how you define "normies". Sure, students happily napstered away, but a lot of adults (even those with no financial stake in the music industry) seriously believed the claims of the music executives that this "piracy" was going to destroy music and needed to be stopped.
So ask the students how they feel about AI?
Right before the Millennium, mainstream media like the NYT were blaming the internet and "violent games like Tribe, Doom and Quake" for the Columbine Massacre [0] and other similar mass shootings in the 90s.
A lot of those reporters are now leadership at major newspapers like the NYT (eg. Applebome who linked Doom with Columbine and is now the Deputy National Editor for the NYT).
A large amount of reporters (both techno-optimists and techno-pessimists) discussing technology today are literally boomers who have been fighting this battle against each other since the 1990s and taking all the airtime away from alternative younger voices on both sides.
[0] - https://www.nytimes.com/1999/05/02/weekinreview/the-nation-a...
Just seconding this…people have a starry eyed view of the dotcom boom but there was a lot of waste and outright fraud. A lot of theoretical improvements to business processes were lost because…the businesses didn't want to change their processes.
I started my CS education in 2000. There were jobs everywhere. Classmates were leaving after a few months, or working part-time. And this was in Sweden. It was not only creating jobs, but reinventing the IT field, creating many more opportunities.
Today, the message is that (Dear leaders,) your workers can be replaced by machines. Not that you together can do more with this new tool, but that you can slim down your operation. Maybe I'm just older, but the optimism I saw then is now divided into opportunity (AI consultants) and skepticism (workers).
This is a narrative the AI industry created, because they want to tap into the huge salary money pool. They tell a story of anti-innovation cost-cutting rather than "do more with these tools."
> no computer network will change the way government works.
Well, they were right on that one.