
You don’t “train a human.”
Last Friday, onstage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from The Indian Express about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history?
“It also takes a lot of energy to train a human,” Altman told a packed pavilion. “It takes, like, 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever to produce you, and then you took whatever, you know, you took.”
He continued: “The fair comparison is, if you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question, versus a human? And probably, AI has already caught up on an energy-efficiency basis, measured that way.”
Altman’s comments are easy to pick apart. For simple queries, the brain uses significantly less energy than even efficient frontier models do, not to mention the laptops and smartphones people use to prompt them. It is true that people have to consume actual sustenance before they “get smart,” though this is also a helpful bit of redirection on Altman’s part—the real concern with AI is not the resources it demands so much as how much it contributes to climate change. Atmospheric carbon dioxide is at levels not seen in millions of years—a rise driven not by the 117 billion people, and all the other creatures, that have ever existed over the course of evolution, but by contemporary human society and combustion turbines akin to those OpenAI is setting up at its Stargate data centers. Other data-center operators, too, are building private, gas-fired power plants—which collectively will likely be capable of generating enough electricity for, and emitting as many greenhouse gases as, dozens of major American cities—or extending the life of coal plants. (OpenAI, which has a corporate partnership with the business side of this magazine, did not respond to a request for comment when I reached out to ask about Altman’s remarks.)
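For a rough sense of the arithmetic, here is a back-of-the-envelope sketch, not a measurement: the wattage and per-query figures below are commonly cited estimates (a roughly 20-watt brain; OpenAI's own ~0.3-watt-hour figure for an average ChatGPT query), and the true values are uncertain and contested.

    # Back-of-the-envelope energy-per-answer comparison.
    # All inputs are rough, commonly cited estimates, not measurements.
    BRAIN_WATTS = 20.0        # human brain draws roughly 20 W, continuously
    THINKING_SECONDS = 10.0   # suppose a person mulls a question for 10 s
    QUERY_WH = 0.3            # OpenAI's estimate for an average ChatGPT query

    brain_wh = BRAIN_WATTS * THINKING_SECONDS / 3600  # J -> Wh (1 Wh = 3600 J)
    print(f"brain:   {brain_wh:.3f} Wh per answer")   # ~0.056 Wh
    print(f"chatbot: {QUERY_WH:.3f} Wh per answer")   # 0.300 Wh
    print(f"ratio:   {QUERY_WH / brain_wh:.1f}x")     # chatbot ~5x the brain

And that 0.3-watt-hour figure excludes the device used to type the prompt, as well as the energy spent training the model in the first place.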
But what’s really significant about Altman’s words is that he thought to compare chatbots to humans at all. Doing so suggests that he views people and machines on equal terms. He didn’t fumble his words; this is a common, calculated position within the AI industry. Altman made an almost identical statement to Forbes India at the same AI summit. And a week ago, Dario Amodei—the CEO of Anthropic, and Altman’s chief rival—made a similar analogy, likening the training of AI models to human evolution and day-to-day learning. The mindset trickles down to product development. Anthropic is studying whether its chatbot, Claude, is conscious or can feel “distress,” and allows Claude to cut off “persistently harmful or abusive” conversations in which there are “risks to model welfare”—explicitly anthropomorphizing a program that does not eat, drink, or have any will of its own.
AI firms are convinced either that their products really are comparable to humans or that this is good marketing. Both options are alarming. A genuine belief that they are building a higher power, perhaps even a god—Altman, in the same appearance, said that he thinks superintelligence is just a few years away—might easily justify treating humans and the planet as collateral damage. Altman also said, in his response to concerns about energy consumption, that the problem is real because “the world is now using so much AI”—and so societies must “move towards nuclear, or wind and solar, very quickly.” Another option would be for the AI industry to wait.
If Altman’s comparison of chatbots and people is purely a PR tactic, it is a deeply misanthropic one. He is speaking to investors. The notion that AI labs are building digital life has always been convenient to their myth, of course, and OpenAI is reportedly in the middle of a fundraising round that would value the company at more than $800 billion—nearly as much as Walmart.
Tech companies may genuinely want to develop AI tools for the benefit of all humanity, to echo OpenAI’s founding mission, and genuinely believe that they need to raise vast amounts of cash to do so. But to liken raising a child—or, for that matter, the evolution of Homo sapiens—to developing algorithmic products makes very clear that the industry has lost touch, if it ever had it, with what it means to be human. To “train a human”—that is, to live a life—is to struggle, to accept the possibility of failure, and to sometimes meander simply in search of wonder and beauty. Generative AI is all about cutting out that process and making any pursuit as instant, efficient, and effortless as possible. These tools may serve us. But to put them on the same plane as organic life is sad.
I have a different perspective. You'll probably hate it.
AI data centers should be operated according to two limiting factors.
1) No energy from the grid. No coal or other fossil-fuel energy sources. Must have a plan to provide excess power TO the grid.
2) No fresh water from municipal supplies or groundwater for cooling. Wastewater is fine. Must transition to providing excess fresh water to the common supply.
No loopholes. Massive penalties for exploiting loopholes or breaking the rules, up to and including complete shutdown of the data center.
Those two limits will spur innovation AND preempt criticism of AI's energy use. These rules would force hard, fast improvements in energy storage and renewables, as well as in other methods of energy production.
Give them five years to reach some useful percentage of compliance. That's plenty of time to come up with a transition plan and show sufficient progress to justify further extensions. Realistically it will take at least 20 years to fully realize this plan.
Don't bring up cost. If you do, let me remind everyone that climate change is real enough to hurt now; there's a very real cost to not pursuing these rules. AI has had plenty of time to bootstrap off the grid. Now it can begin to migrate to something else instead.
Those with experience in energy generation will realize this plan has ridiculously high rewards for those who follow it. "Have your cake and eat it too" definitely applies.
Good rules. How about we apply them to alfalfa farms (that send their water to Saudi Arabia) or football stadiums (I don't like football)?
The point is, I don't see the logic in singling out data centers over anything else.
It’s an article about data centers, so we’re talking about data centers. 100% agree we should be pushing all industries to use their own resources, not those of the commons. Data centers do happen to be easier to mostly close-loop than alfalfa farms, though. Football stadiums, on the other hand, 100% should be.
That would be great if there weren't the easy arguments of "if we don't build it bigger, China will," "it's for national security," etc. Far from having regulation forced on them, they're reaping windfalls of deregulation, all to build a thing that is far from convincingly beneficial to national security or society.
> if we don't build it bigger, China will
Yeah, and?
And they have a minecraft gap.
The intent is good, but it will end up just pricing out everyone but Google, Amazon, and Microsoft (and the start-ups those companies bankroll).
> 1) No energy from grid. Can't use coal or fossil fuel energy sources. Must have plan to provide excess TO grid.
This is easy: the companies will simply build some nuclear power plants near their data centers. Perhaps even nuclear power plants that are vibe-designed by their AIs. :-)
Faster and cheaper than nuclear power would be building a virtual power plant, adding big-ass banks of batteries charged during times of low demand or excess power capacity, and peak-shaving consumption when the rest of us need power.
I'd amend the rules to allow exactly this. There are other ways of storing energy, but battery banks are the obvious one. It works well with shedding excess, too.
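To make the idea concrete, here's a minimal toy sketch of that peak-shaving loop. Every name and number in it is an illustrative assumption (made-up thresholds, capacities, and loads), not any real plant's parameters:

    # Toy peak-shaving dispatch: charge the battery bank when grid demand
    # is low, run the data center off the bank when demand is high.
    # All numbers are illustrative assumptions, not real plant parameters.
    def dispatch(hourly_grid_demand_mw, dc_load_mw=50.0,
                 capacity_mwh=400.0, rate_mw=100.0,
                 low_mw=600.0, high_mw=900.0):
        stored = capacity_mwh / 2  # start half-charged
        plan = []
        for demand in hourly_grid_demand_mw:
            if demand < low_mw:                    # off-peak: soak up cheap power
                charge = min(rate_mw, capacity_mwh - stored)
                stored += charge
                plan.append(("charge", charge))
            elif demand > high_mw:                 # peak: shed grid load
                discharge = min(rate_mw, stored, dc_load_mw)
                stored -= discharge
                plan.append(("discharge", discharge))
            else:
                plan.append(("idle", 0.0))
        return plan

    # A crude daily demand curve (MW), midnight to midnight:
    day = [500] * 6 + [700] * 6 + [950] * 6 + [700] * 6
    for hour, (action, mw) in enumerate(dispatch(day)):
        print(f"{hour:02d}:00  {action:9s} {mw:6.1f} MW")

A real virtual power plant would also bid stored energy back into the grid, which is where rule 1's "provide excess TO grid" requirement comes in.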
They are already doing that.
They are just firing up portable natural-gas generators.
https://www.reuters.com/sustainability/boards-policy-regulat...
I don't hate it, but I suspect the incentives this would generate would not end up producing results that strictly align with the ones you envision and desire.
For one thing, you're essentially mandating data centers to be colocated with power plants and waste water treatment plants, instead of these things each being located independently according to the requirements of their different functions. If that really leads to "ridiculously high reward", why isn't it being done already?
OTOH, then AIs could functionally drink, pee and poop
The trickle-down economy dictates that data centers get first access to electricity and fresh water (and any other resource they need). People get whatever is left over and like it. This is America.
As always, what trickles-down isn't green.
I’m gonna buy me some DRAM wafers for now. No one else done that before. It’s innovative.
mm. Think big and start an ETF.
Sure, if we’re in the business of making arbitrary requests, how about every data center operator has to bring 1 Epstein accused to justice for every data center they’re allowed to build?
The hard part has never been the “what”, it’s always been the “how”.
There is no way that letting these clowns run nuclear power plants is a good idea. Also the percentage of land that is allowed to be used by datacenters should be limited. Let them set them up in deserts or something.
Antarctica is a great place for datacenters. They can power them with any of the 300 national security reactors and easily keep them secret from the taxpayers forever.
To some extent I think people are putting way too much weight on his exact words, and not enough on why he says them.
Altman is a man who is quickly running out of lies, so now he starts slinging random arguments that can't stand up to even the briefest of scrutiny.
OpenAI is burning cash and fuel. There are results, and they are, to some extent, impressive, but not impressive enough to justify the cost, and Altman is no longer able to cover that up.
I think he's also slinging random business deals too.
The "A letter from Sam & Jony" page is one of the weirdest I've seen in a while: https://openai.com/sam-and-jony/
Step after step it seems like a CEO throwing everything at the wall in an insecure manner.
I don't pretend to know if OpenAI is doomed or just absurdly overvalued but still valuable. I pay for and use their product, and it works for my use cases, but watching the business deals and weird quotes from afar makes me less confident.
I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.
> I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.
I am not so sure:
This decision says something important about the priorities of the string-pullers behind the curtain:
They clearly want(ed) to monetize what is there, accepting the risk that OpenAI will make only smaller improvements to its models and thus might get outcompeted by rivals capable of building and running a much better one.
If this is the priority (no matter whether you like or despise Sam Altman), you will likely prefer Sam Altman over Ilya Sutskever.
If, on the other hand, a fast monetization is less important than making further huge leaps towards much better AI models, you will, of course, strongly prefer Ilya Sutskever over Sam Altman.
Thus, I wouldn't say that choosing Sam Altman over Ilya Sutskever is a sign that OpenAI is in trouble, but a very strong sign of where the string-pullers behind the curtain want OpenAI to be. Both Sam Altman and Ilya Sutskever are just marionettes for these string-pullers. When they have served their role, they get put back in the box.
Yes, I agree. Altman was the rational choice if you realise that eventually the huge R&D bill will need to stop for at least a moderate period (<5 years).
You want to ride that out before capitalising on the eventual cheaper training costs once the rug has been pulled.
Altman has already succeeded here, as it seems inference for API and chat is profitable but offset by massive R&D costs.
All your competitors benefit from your training costs. They’ll lose on inference pretty quickly if they stop training new models, no?
I don't think they will lose on inference because that assumes that compute becomes cheap for all evenly.
Their spending today has secured their compute for the near future.
If every GPU, stick of RAM, and SSD is already paid for, who can afford to sell cheaper inference?
Z.ai is trying to deal with this by using domestic silicon (basically Huawei, not Nvidia). And with their state subsidy they will do well.
Anthropic has a 50bn USD plan to build data centres for 2026.
OpenAI similarly has secured extraordinary amounts of other people's money for data centres.
All of these will be sunk costs and "other people's money" while money is easy to get hold of. But they will be a moat when the R&D spending ends.
Once all the models become basically the same, who you go with will be who you're already with (mostly OpenAI) or whoever you end up with (say, people who use Gemini because they have a Google 2TB account).
Some upstart can put themselves into the ground borrowing compute and selling at a loss, but the moment they catch up and need to raise prices, everyone will simply leave.
ChatGPT is what is most likely to remain a sustained frontier model. Maybe Claude jumps further ahead a few times; Gemini will have its moment. But it'll all be a wash, with ChatGPT puttering along as rarely the best but never the worst.
> Once all the models become basically the same who you go with will be who you're already with (mostly OpenAI)
Imho, people are undervaluing the last mile connection to the customer.
The last Western megacorp to bootstrap its way there was Facebook, and control over cloud identity and data was much less centralized circa the late '00s.
The real clock OpenAI is running against is creating a durable consumer last-mile connection (killer app, device, etc).
"Easy to use chat app / coding tool" doesn't even begin to approach the durability of Microsoft, Apple, Google, or Meta. And without it, OpenAI risks any one of them pulling an Apple Maps at any time.
Unless it continually plows money into R&D to maintain the lead and doesn't pull an Intel and miss a beat.
Maybe they do, but that's a lot of coin flips that need to continually come up heads, in perpetuity.
> Altman is a man who is quickly running out of lies
Oh I doubt that! The quality of the lies may deteriorate but he ain't ever gonna run out.
My understanding is that subscription-based inference and API usage is now profitable.
Subscriptions are highly profitable for the typical chat user.
And API is overall net profitable.
What is extremely taxing to their finances is R&D, training and in particular development of frontier models.
My assessment is that when the music stops those who have the most subs will win.
Companies like Apple who had sat out the battle and built niche moats (privacy), and companies like OpenAI and Anthropic who have the market share will be fine.
In 6-12 months, nearly any lead they have will be eaten by distillation.
What will then happen is they will lose subscriptions to services that offer AI as a tack-on, like Gemini with Google's regular cloud subscriptions.
This will continue. Companies like Apple will have deep pockets to move on the businesses that go underwater and then can restart training in a much less congested market.
All this is assuming a relatively graceful collapse but that is what's likely given how aware everyone is that the bubble must pop.
Training costs will fall. Companies like Nvidia and other shovel businesses (i.e. selling GPUs and not using them) mostly have their revenue secured with funding from the present.
What I'd see as confirmation of this pattern is if we stop getting groundbreaking frontier models and then coast for 3-5 years as competition becomes more incremental.
This is an unpopular opinion, but: will OpenAI go bust? No chance. Nor will Anthropic.
It probably doesn't matter that subscriptions are profitable, when some estimates put the number of users in the free tier at 96%.
I sort of agree with you: not that it's necessarily the most subscriptions that will be the deciding factor, but there are going to be some companies better positioned to survive when the free money stops. OpenAI has the brand, so that might help, but mostly I think they'll get absorbed into Microsoft. I don't think they can stand on their own. It doesn't seem like a particularly well-managed company, so to me it makes more sense that they are simply acquired for pennies on the dollar by someone with better leadership.
> when some estimates put the number of users in the free tier at 96%.
It's certainly almost everyone today, but that's because the enshittification has yet to start properly.
The risk to OpenAI is that their free tier gets captured by the tack-on offerings (i.e., Gemini with 2TB of cloud storage).
Otherwise, they will make the free tier more annoying until people just buy the cheap tier, like ChatGPT Go, and then move up from there.
They will open themselves up to being undercut by players like Gemini that can provide a less-shitty free tier and capture their market share.
I seriously doubt, at this moment, that OpenAI can come up with an offering good enough to entice people to pay when there are other free-to-use services around. Google seems well positioned to eat their lunch.
Google does indeed. But that ignores that Microsoft is playing the same game.
For example, the UK's NHS, the world's sixth-largest employer, is now fully committed to Microsoft 365. That's a lot of Copilot money, if Microsoft sees it that way.
And OpenAI is funded via Microsoft. I also have a Microsoft 2TB subscription, and many people have both work-based and personal home subscriptions.
It's a complete mess of a situation. If Microsoft moves away from GPT (it can, since it's advertised under the Copilot brand), OpenAI is dead in the water, of course.
Microsoft has already publicly stated they plan to move away from OpenAI [0].
[0] https://www.ft.com/content/f1ec830c-2f08-4b1a-b70f-7330f2607...
I'm a bit of an Apple optimist for this exact reason. I think the moat is collapsing, and Apple is best positioned to ship its own models in a year on its own widely sold consumer hardware, unless someone has a breakthrough that they can't replicate. Which I don't anticipate.
I'm not really sure what OpenAI's moat is. Anthropic has a chance being so widely accepted by developers, and being a bit better at developing models when it comes to code.
People might not want to hear this but AI is already smarter and more useful than most people ever will be. We are not even talking about by the end of the year or decade anymore.
Read and think about what you wrote. How can an AI, completing specific scoped tasks, be in any way comparable to the scale of a human life? Maybe that's the same thing these execs forgot.
> Read and think about what you wrote.
A lot of software engineers are bad at value judgements, and often feel smart by confidently taking propaganda at face value.
It's kind of mind-boggling that someone who genuinely believes what the GP wrote doesn't immediately follow the statement with "smash it!"
I am comparing competency, not the "scale of a human life" or whatever that is supposed to mean. AI still lacks taste, so it is still hard for it to replace human originality or creativity, but that's almost it when it comes to work that can be done on a computer. It will very clearly surpass everyone in verifiable domains and has already surpassed most people.
We are already at the point where we just don't fully know what to do with what we have and simply haven't fully internalised it. All it will take is one economic shakeup to redistribute human intelligence away from what we are familiar with.
That is the crux of the problem we're facing as a society: many, many leaders have this idea that they are better served by an AI that is 70% (?) or 80% (?) correct when helping them make decisions about their business than by trusting humans - consultants, employees, pundits - whose judgment, biases, and personal goals they don't trust, much less want to pay for.
For those people, an AI better (much better?) than a coin toss is the goal, if it means not relying on people.
Personally, I already deal weekly with people who vehemently antagonize every line of thinking if it isn't what ChatGPT told them before a meeting.
The root issue is epistemological.
If one puts their faith in answers that come out of a black box, then one must justify the black box's omniscience, specifically by prioritizing it above human intellect and deprioritizing attempts to reason through its logic.
You saw it with older people blindly following sat navs because they'd forgotten how to navigate. And those were much less believable-sounding devices!
It's not going to stop until/if the first execs are thrown in jail because the 'I just trusted AI' defense fails.
There’s running out of lies, and then there’s coming across like a complete psychopath who has lost all perspective.
I'd like to point out that this article seems to focus on minutiae like how much energy a human brain takes to solve a problem, when the obvious question is: Which is more worthwhile to us as a species - even assuming the same inputs and energy and planetary damage? Spending that energy on fostering humans who can solve problems? Or on machines which render humans superfluous?
The issue is who is making the decision on where energy resources are spent - on people or machines. And it turns out efficiency or even humanity play a very small role in that decision matrix. Control, on the other hand, is important. Rich and powerful people tend to want to control things, and machines are better at that than people.
This is the conspiratorial mindset. It's not much better than the mindset of people who seek power.
It's wrong, because it assumes that everything is about control.
For example, if I told you that a certain rich and powerful person was spending resources on sending vaccines to poor countries, you might think that was because they wanted to control things. If I said that someone sent books and teachers to a poor country, you might say they were trying to control people.
There's no way to have the conversation, in a conspiratorial mindset, about whether it's better or worse for humans or AI to do this stuff - because no matter what, the conspiratorial mindset will conclude that it's only about power for the humans involved, and always assume the worst. AND YET - there are things people can do which might be for their own self-gratification, but are definitely NOT as bad as some other things they could do. They hold back from doing the worst things.
I know this lens of looking at the world seems like the only smart way to understand things, but looking at the whole world through it prevents you from making the important distinction between OK, BAD, and REALLY FUCKING BAD.
If you told me an individual powerful person did a good thing, then I could believe it. But I'm speaking about the characteristics of groups of people, and a very long history of human civilization teaches us that, as a group (individual exceptions exist, both in people and in actions), the rich and powerful do things primarily to increase their own power, wealth, status, and control. No conspiracy is required. It's just that people in a position to gain power use that power to get more power. The people who do that the most, and are best at it, are disproportionately the ones with the most power. So, as a group, the rich and powerful are much more likely to act in self-serving interest (even at the cost of widespread harm and suffering) than a random group of people. The proportion of sociopaths among CEOs is many times that of the regular population.
Not to sound incredibly pessimistic, but I spend a lot of time in bars, and I can tell you that most of the people I meet who haven't got two cents to rub together are as capable, if not more so, of being dishonest, greedy, and malicious as anyone with a billion dollars. A random group of people, in your opinion, contains fewer sociopaths than a group of people who powered their way or lucked into some money... I guess you're the optimist. I don't think the CEOs are any worse or better than the rest, and I think you kinda nailed it about individuals. We're all individuals. Faced with individual situations, some of us refuse to do harm, some try to do good, some of the time. That's all. That's what I mean by there being no conspiratorial way to frame the world. It's just chaos and a bunch of assholes making bad decisions, occasionally doing something alright.
I find it amusing that you are denigrating people as "conspiratorial" in the year 2026, when we are all finding out how there really was a global elite child-rape cabal operating in plain sight :)
Leave that self-comforting lie in yesteryear. The world is only getting weirder.
> the obvious question is: Which [of having babies vs. building datacenters] is more worthwhile to us as a species
Uh... no, it's not. You're making an argument in the opposite direction as Altman, but recapitulating the same moral flaw.
The point is that viewing human procreation as "worthwhile" or having "value" or being "expensive" is the logic of slavery. People are people, not assets. We don't have babies to make our lives better, we have babies to make THEIR lives better.
Altman’s moral flaw stems partly from an identity failure: he does not see himself as human. He sees himself as The Chosen One. He is transhuman. He can sympathize with humans but never empathize.
It's calling him out on his BS. That might work for mindless investors who want number go up. But he's actively making the world worse for people.
Never mind hoarding 40% of the world's silicone "just because."
It's like Musk's lofty and outright BS claims. "Well I'm rich so I can do and say what I like, give me your money".
When did bald-faced lies become the norm in business?
Edit: I'm keeping silicone in there.
Maybe starting with WorldCom?
The banking sector committed fraud along the way, and soon after Lehman collapsed, observers wondered aloud about the moral hazard of bailing everyone out without making an example of someone.
WorldCom had a different ending. Enron had a different ending. But Wells Fargo left 2008 with attitudes that tolerated widespread fraud.
It was 40% more fun the first way you wrote it ;)
>> When did bald-faced lies become the norm in business?
When capital has nowhere to go, and the middle class becomes gamblers and speculators, and people who work become 'superfluous', lots of bad things start to happen. At least, that was my takeaway from "The Origins of Totalitarianism".
[edit] To be more specific: Lying (and foreign wars, too) become viable business models when the moneyed class has so much to invest that they no longer know what to do with it, and lack any new markets to pry open, or the education or creativity to produce anything new of value that isn't extractive, and even the extractive methods of generating wealth have begun to dry up locally. Call it a bubble, call it fascism; fascism is basically just a way of keeping a bubble from collapsing indefinitely by pirating neighboring peoples' wealth and cannibalizing one's own society. So there's not a great difference between that and the stated vision of the major AI companies ATM.
Yeah, my semi-informed WAG for the core cause of many of the ills of our current situation is that we're wildly over-capitalized. The "trickle down" focus to put more of the "pie" on the investment side (setting aside whether the "trickle down" effect itself was meaningfully real, not relevant to this "take") was probably barely justified at the time, if at all, and we've just kept shoveling more fuel into that fire in the decades since.
You can't have demand without spreading buying power around. If you just keep shifting toward investment, that capital gets up to some really stupid shit because there's not enough demand to provide anything useful for it to do. Build a factory? Why would I do that, the returns are terrible. No, I'll "invest" in bitcoin... or do an NFT startup... or reserve way more memory capacity than I'll plausibly need just to fuck with my competition, in ways and in an environment that's so bubbly that this activity likely won't provoke capacity build-out to compensate...
We really, really needed some capital (by which I just mean investor-money, not real productive assets) destruction in the '08 crisis, and we didn't get it.
Ehhhh, you have some elementary points, but the market is, in theory, self-adjusting, right? So the 'idea' of trickle-down is a real thing.
It's not as dramatic or loud as something like OpenAI. But consider it as access to loans to fund a new business: the idea is you fund a company, it gains some sales and employment, money moves around, everyone's happy. Basic stuff.
You've shot too far and considered everything black and white. You build a factory if demand is high and it can increase supply while making a profit.
> We really, really needed some capital (by which I just mean investor-money, not real productive assets) destruction in the '08 crisis, and we didn't get it.
No, but that wasn't where the majority of money was locked up in poop, so maybe you might get, at best, a '90s dotcom bust. I'm old, and I remember it was apocalyptic. But maybe it's due.
When you’re a psychopath, the choice is clear.