
Scale at all costs.
In a paper published in 2003, the philosopher Nick Bostrom sketched out a thought experiment aimed at illustrating an existential risk that artificial intelligence might eventually pose to humanity. An advanced AI is given, by its human programmers, the objective of optimizing the production of paperclips. The machine sets off in monomaniacal pursuit of the objective, its actions untempered by common sense or ethical considerations. The result, Bostrom wrote, is “a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” It destroys everything, including its programmers, in a mad rush to gather resources for paperclip production.
Bostrom went on to refine his “paperclip maximizer” thought experiment in subsequent writings and interviews, and it soon became a touchstone in debates about AI. Eminences as diverse as Stephen Hawking and Elon Musk would routinely bring it up in discussing the dangers of artificial intelligence. Others were skeptical. They found the story far-fetched, even by thought-experiment standards. It seemed, as The Economist wrote, a little too “silly” to be taken seriously.
I was long in the skeptic camp, but recently I’ve had a change of heart. Bostrom’s story, I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?
“The intelligence of an AI model roughly equals the log of the resources used to train and run it,” OpenAI CEO Sam Altman wrote a year ago. The important word here is “log.” As Donald MacKenzie explains in an insightful article on AI in the London Review of Books:
A logarithmic function, at least of the kind that is relevant here, is characterised by diminishing returns. The more resources you put in, the better the results, but the rate of improvement steadily diminishes.
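The arithmetic here is easy to see with a toy calculation (the numbers are illustrative only, not real training costs): if capability grows with the log of resources, then each fixed gain in capability requires multiplying, not merely adding to, the resources spent.

```python
import math

def capability(resources):
    """Hypothetical capability score: the log (base 10) of resources spent.
    This is a cartoon of Altman's claim, not a real scaling law."""
    return math.log10(resources)

# Each step below spends 1,000x the resources of the last,
# yet capability climbs by only a constant +3 per step.
for r in [1e3, 1e6, 1e9, 1e12]:
    print(f"resources {r:>10.0e} -> capability {capability(r):.0f}")
```

Read the loop from top to bottom: a thousandfold increase in inputs buys the same fixed increment each time. That constant-multiple cost for constant gains is exactly what "diminishing returns" means here.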
To maintain a linear path of improvement in the performance of today’s neural-network-based AI models requires an exponential increase in resources. Ever larger inputs achieve ever smaller gains. But people like Altman remain absolutely committed to making those escalating resource investments, no matter the monetary or social cost. Because they believe that vast winner-take-all rewards will come to whichever company achieves a scale advantage in AI, they will devote all available resources—energy, water, real estate, data, chips, people—to the pursuit of even a tiny scale advantage.
Elon Musk, having abandoned his earlier misgivings about AI, announced last week that he was merging xAI into SpaceX. The combined companies were “scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!” he declared. “In the long term, space-based AI is obviously the only way to scale.” It’s exactly what Bostrom predicted. The monomaniacs will not stop with the resources of the Earth. They’ll extend their plundering to the heavens. Everything is raw material.
This post is an installment in Dead Speech, the New Cartographies series on AI and its cultural and economic consequences.
I'm amazed no one has linked the game version - https://www.decisionproblem.com/paperclips/index2.html
Charlie Stross gives a great talk about Slow AI in which he argues that you don’t actually need a computer to build a paperclip optimiser, and money is already a great paperclip.
>Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence?
No, we are not madly harvesting the world's resources in a monomaniacal attempt to optimize artificial intelligence. We are, however, harvesting the world's resources in an attempt to optimize artificial intelligence.