Child's Play: Tech's new generation and the end of thinking

2026-02-20 14:48 · harpers.org

The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all. The city is temperate and brightly colored, with plenty of pleasant trees, but on every corner it speaks to you in an aggressively alien nonsense. Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.

This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: today, soc 2 is done before your ai girlfriend breaks up with you. its done in delve. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. A few blocks away, I saw a billboard that read: no one cares about your product. make them. unify: transform growth into a science. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife. Passersby in sight of the billboard that read wearable tech shareable insights did not seem piqued by the prospect of having their metrics constantly analyzed. I couldn’t find anyone who wanted to prompt it. then push it. After spending slightly too long in the city, I found that the various forms of nonsense all started to bleed into one another. The motionless people drooling on the sidewalk, the Waymos whooshing around with no one inside. A kind of pervasive mindlessness. Had I seen a billboard or a madman preaching about “a CRM so smart, it updates itself”? Was it a person in rags muttering about how all his movements were being controlled by shadowy powers working out of a data center somewhere, or was it a car?

Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:

hi my name is roy i got kicked out of school for cheating. buy my cheating tool

cluely.com

Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial. They’re no longer in San Francisco, having been essentially chased out of the city by the Planning Commission. The company is loathed seemingly out of proportion to what its product actually is, which is a janky, glitching interface for ChatGPT and other AI models. It’s not in a particularly glamorous market: Cluely is pitched at ordinary office drones in their thirties, working ordinary bullshit email jobs. It’s there to assist you in Zoom meetings and sales calls. It involves using AI to do your job for you, but this is what pretty much everyone is doing already. The cafés of San Francisco are full of highly paid tech workers clattering away on their keyboards; if you peer at their screens to get a closer look, you’ll generally find them copying and pasting material from a ChatGPT window. A lot of the other complaints about Cluely seem similarly hypocritical. The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about when you consider that, back in the era of zero interest rates, Silicon Valley investors sank $120 million into something called the Juicero, a Wi-Fi-enabled smart juicer that made fresh juice from fruit sachets that you could, it turned out, just as easily squeeze between your hands.

What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless. They will be consigned to the same miserable fate as the people currently muttering on the streets of San Francisco, cold and helpless in a world they no longer understand. The skills that could lift you out of the new permanent underclass are not the skills that mattered before. For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise. But all that barely matters anymore. Even at big firms like Google, a quarter of the code is now written by AI. Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.

The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.

Roy Lee’s personal mythology is now firmly established. At the beginning of 2025, he was an undergraduate at Columbia, where he, like most of his fellow students, was using AI to do essentially all his work for him. (The personal essay that got him into the university was also written with AI.) He wasn’t there to learn; he was there to find someone to co-found a startup with. That person ended up being an engineering student named Neel Shanmugam, who tends to hover in the background of every article about Cluely. The startup they founded was called Interview Coder, and it was a tool for cheating on LeetCode. LeetCode is a training platform for the kind of algorithmic riddles that usually crop up in interviews for big tech companies. (Sample problem: “Suppose an array of length n sorted in ascending order is rotated between one and n times. . . . Return the minimum element of this array.”) Roy thought these questions were pointless. These were not problems coders would actually face on the job, and even if they were, the fact that ChatGPT could now solve them instantly had rendered worthless the human ability to do so. Interview Coder was a transparent window that could overlay one side of a Zoom meeting, allowing Claude to listen in on the questions and provide answers. Roy filmed himself using it during an interview for an internship with Amazon. They offered him a place. He declined and uploaded the footage to YouTube, where it very quickly made him famous. Columbia arranged a disciplinary hearing, which he also secretly filmed and posted online. The university suspended him for a year. He dropped out, started an upgraded version of Interview Coder dubbed Cluely, and moved to San Francisco to begin raking in tens of millions of dollars in venture-capital funding.
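For the curious, that sample problem has a textbook answer, which is part of Roy's complaint: it rewards a memorized trick, not on-the-job skill. A minimal sketch in Python (the function name is illustrative, not LeetCode's required signature):

```python
def find_min(nums):
    """Minimum of an ascending array rotated between 1 and n times.

    Binary search: compare the midpoint to the rightmost element to
    decide which half still contains the "drop" where the rotation
    wrapped around. Runs in O(log n).
    """
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] > nums[hi]:
            # The drop is to the right of mid
            lo = mid + 1
        else:
            # The minimum is at mid or to its left
            hi = mid
    return nums[lo]

print(find_min([3, 4, 5, 1, 2]))  # 1
print(find_min([1, 2, 3]))        # rotated n times = unrotated; 1
```

It is exactly the kind of self-contained riddle a language model solves instantly, which is Roy's point.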

Roy envisioned Cluely being used for greater purposes than job interviews. The startup’s mainstream breakthrough was a viral ad that showed Roy using a pair of speculative Cluely-enabled glasses on a blind date. His date asks how old he is; Cluely tells him to say he’s thirty. When the date starts going badly, Cluely pulls up her amateur painting of a tulip from the internet and tells him to compliment her art. “You’re such an unbelievably talented artist. Do you think you could just give me one chance to show you I can make this work?” The video launched alongside a manifesto, which was seemingly churned out by AI:

We built Cluely so you never have to think alone again. It sees your screen. Hears your audio. Feeds you answers in real time. . . . Why memorize facts, write code, research anything—when a model can do it in seconds? The future won’t reward effort. It’ll reward leverage.

The future they seem to envisage is one in which people don’t really do anything at all, except follow the instructions given to them by machines.

Cluely’s offices were in a generally disheveled corner of the city, crouching near an elevated freeway. On the ground floor, I found a stack of foam costumes in plastic crates, each neatly labeled: sonic hedgehog, olaf snowman, pikachu. A significant part of working at Cluely seemed to involve dressing up as cartoon characters for viral videos. Through a door I could just glimpse a dingy fitness dungeon, housing two treadmills and a huge pile of discarded Amazon boxes. On one of the machines a Cluely employee panted and huffed in the dark. We avoided eye contact. Upstairs, Roy and his coterie were huddled around a laptop, fiddling with Cluely’s interface. “Remember,” one said, “the average user is, like, thirty-five years old. This is a totally unfamiliar interface.” Apparently, a thirty-five-year-old wouldn’t be expected to know how to use anything more advanced than a rotary phone. Another employee scrutinized the proposed new layout. “I think it’s bad,” he said, “but it’s low-key not worse. What we have is anyway really bad, so anything is better.” They started arguing about chevrons. Through all this Roy scrolled through X on his phone. Simultaneously baby-faced and creatine-swollen, he was wearing gym clothes, with two curtains of black hair swung over his forehead. Finally, he looked up. “So, number one,” he said, “we’re killing the chat bar on the left.” There was no number two. Meeting over.

Suddenly, Roy seemed to acknowledge my presence. He offered me a tour. There was something he very badly wanted to impress on me, which was that Cluely cultivates a fratty, tech-bro atmosphere. Their pantry was piled high with bottles of something called Core Power Elite. I was offered a protein bar. The inside of the wrapper read daily intentions: be my boss self. “We’re big believers in protein,” Roy said. “It’s impossible to get fat at Cluely. Nothing here has any fat.” The kitchen table was stacked with Labubu dolls. “It’s aesthetics,” Roy explained. “Women love Labubus, so we have Labubus.” He showed me his bedroom, which was in the office; many Cluely staffers also lived there. Everything was gray, although there wasn’t much. “I’m a big believer in minimalism,” he said. “Actually, no, I’m not. Not at all. I just don’t really care about interior decoration.” He had a chest of drawers, entirely empty except for a lint roller, pens, and, in one corner, a pink vibrator. “It’s for girls, you know,” said Roy. “I used to use this one on my ex.” There were also some objects that didn’t seem to belong in a frat house. In one of the common areas, a shelving unit was completely empty except for an anime figurine. You could peer up her plastic skirt and see the plastic underwear molded around her plastic buttocks. More figurines in frilly dresses seemed to have been scattered at random throughout the building. Roy showed me his Hinge profile. He was looking for a “5’2, asian, pre-med, matcha-loving, funny, watches anime, white dog having, intelligent, ambitious, well dressed, CLEAN 19-21 year old.” One picture showed him cuddling a giant Labubu.

I told Roy that I might try interviewing him with Cluely running in the background, so I could see if it would ask him better questions than I would. He seemed to think it was only natural that I’d want to be essentially a fleshy interface between himself and his own product. He booted up Cluely on his laptop and it immediately failed to work. Roy stormed downstairs to the product floor. “Cluely’s not working!” he said. This was followed by roughly fifteen minutes of panicked tinkering as his handpicked team of elite coders tried to get their product back online. Once they had done so, we resumed our places, whereupon Cluely immediately went down again.

Roy has a kind of idol status within the company, but he’s aware that a lot of people instinctively take against him: “I’d say about eighty percent of the time, people do not like me.” He knows why too. “I’m putting myself out there in an extremely vocal way. When I talk, I tend to dominate the conversation.” Roy does talk a lot, but there’s also something mildly unnerving about the way he talks. Everything he says is very precise and direct. He doesn’t um or ah. He doesn’t take time to think things over. Zero latency. In the various videos that Cluely seems to spend most of its time and money producing, he usually plays a slightly dopey, dithering, relatable figure; in person, it’s like he’s running a functioning version of his app inside his own head. I asked him whether he’d ever tried modifying the way he interacts with people to see whether they would dislike him less. “Very unnatural to me,” he said. “I just say it’s not worth it.”

According to Roy, “everyone” would describe him as “an extreme extrovert with zero social anxiety.” During his brief stint at Columbia, he immersed himself in New York life by striking up conversations with random people. For instance, a homeless person he took to Shake Shack. “I think it was an expansion of what I thought I was able to do. It was probably the most different person that I’ve ever talked to. He was not very coherent, but I was very scared at first. And then as we got to talking, or as he got to mumbling, I eased up. Like, Oh, he’s not going to kill me.” Roy’s bravery did not extend to talking to women. “Young men usually is who I like to go out and talk to. Women get intimidated and, you know, I don’t want any charges.” Meanwhile, those conversations with young men all followed a very predictable path. “I go and—pretty much to every single person I meet—I ask if you want to start a company with me, would you like to be my co-founder. And most of them say no. In fact, everybody says no.”

He was just glad to be among people. Roy had initially been offered a place at Harvard, but the offer was rescinded. He hadn’t told them about a suspension in high school. This presented Roy’s family with a problem: His parents ran a college-prep agency that promised to help children get into elite schools like Harvard. It would not look good if their own son was conspicuously not at Harvard. So Roy spent the entirety of the next year at home. “I maybe left my room like eight times. I think if there was such a thing as depression, then I believe I might have had some variant of depression.” Later he told me that “isolation is probably the scariest thing in the world.”

Starting a company had been Roy’s sole ambition in life from early childhood. “I knew since the moment I gained consciousness that I would go start a company one day,” he told me. In elementary school in Georgia, he made money reselling Pokémon cards. Even then, he knew he was different from the people around him. “I could do things that other people couldn’t do,” he said. “Like whenever you learn a new concept in class, I felt like I was always the first to pick it up, and I would just kind of sit there and wonder, Man, why is everyone taking so long?” The dream of starting his own company was the dream of total control. “I don’t want to be employed. I’m a very bad listener. I find it hard to sit still in classes, and I feel an internal, indescribable fury when someone tells me what to do.” He ended up co-founding Cluely with Neel because he was the first person who said yes.

Roy has little patience for any kind of difficulty. He wants to be able to do anything, and to do it easily: “I relish challenges where you have fast iteration cycles and you can see the rewards very quickly.” As a child, he loved reading—Harry Potter, Percy Jackson—until he turned eight. “My mom tried to put me on classical books and I couldn’t understand, like, the bullshit Huckleberry, whatever fuck bullshit, and it made me bored.” He read online fan fiction about people having sex with Pokémon instead. He didn’t see anything valuable in overcoming adversity. Would he, for instance, take a pill that meant he would be in perfect shape forever without having to set foot in the gym? “Yes, of course.” Cheat on everything: he recognized that his ethos would, as he put it, “result in a world of rapid inequality.” Some well-placed cheaters would become massively more productive; a lot of people would become useless. But it would lead us all into a world in which AI could frictionlessly give everyone whatever they wanted at any time. “For a seven-year-old, this means a rainbow-unicorn magic fairy comes to life and it’s hanging out with her. And for someone like you, maybe it’s like your favorite works of literary art come to life and you can hang out with Huckleberry Finn.”

By now Cluely had been listening in on our conversation for a while, and I suggested that we open it up and see what it thought I should say next. I clicked the button marked what should i say next? Cluely suggested that I say, “Yeah, let’s open up Cluely and see what it’s doing right now—can you share your screen or walk me through what you’re seeing?” I’d already said pretty much exactly this, but since it had shown up onscreen I read it out loud. Cluely helpfully transcribed my repeating its suggestion, and then suggested that I say, “Alright, I’ve got Cluely open—here’s what I’m looking at right now.” I’m not sure who exactly I was supposed to be saying this to—possibly myself. Somehow our conversation seemed to have gotten stuck on the process of opening Cluely, despite the fact that Cluely was, in fact, already open. But I said it anyway, since I was now just repeating everything that came up on the screen. Cluely then told me to respond—to either it or myself; it was getting hard to tell at this point—by saying, “Great, I’m ready—just let me know what you want Cluely to check or help with next.” I started to worry that I would be trapped in this conversation forever, constantly repeating the machine’s words back to it as it pretended to be me. I told Roy that I wasn’t sure this was particularly useful. This seemed to confuse him. He asked, “I mean, what would you have wanted it to say?”

I found it strange that Roy couldn’t see the glaring contradiction in his own project. Here was someone who reacted very violently to anyone who tried to tell him what to do. At the same time, his grand contribution to the world was a piece of software that told people what to do.

There’s a short story by Scott Alexander called “The Whispering Earring,” in which he describes a mystical piece of jewelry buried deep in “the treasure-vaults of Til Iosophrang.” The whispering earring is a little topaz gem that speaks to you. Its advice always begins with the words “Better for you if you . . . ,” and its advice is never wrong. The earring starts out by advising you on major life decisions, but before long it’s telling you exactly what to have for breakfast, exactly when to go to bed, and eventually, how to move each individual muscle in your body. “The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family,” writes Alexander. After you die, the priests preparing your body for burial usually find that your brain has almost entirely rotted away, except for the parts associated with reflexive action. The first time you dangle the earring near your ear, it whispers: “Better for you if you take me off.”

Alexander is one of the leading proponents of rationalism, which is—depending on whom you ask—either a major intellectual movement or a nerdy Bay Area subculture or a small network of friend groups and polycules. Rationalists believe that the way most people understand the world is hopelessly muddled, and that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch. The method they landed on for rebuilding all of human knowledge is Bayes’s theorem, a formula invented by an eighteenth-century English minister that is used in statistics to work out conditional probabilities. In the mid-Aughts, armed with the theorem, the rationalists concluded that humanity is in danger of a rogue superintelligent AI wiping out all life on the planet. This has been their overriding concern ever since.
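The theorem itself, for reference, is a single line relating conditional probabilities (here H stands for a hypothesis and E for the evidence):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

Read left to right: how strongly you should believe H after seeing E, computed from how likely E would be if H were true, weighted by your prior belief in H. The rationalist project is, roughly, the commitment to updating all beliefs this way.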

The most comprehensive outline of this scenario is “AI 2027,” a report authored by Alexander and four others. In the report, a barely fictional AI firm called OpenBrain develops Agent-1, an AI that operates autonomously. It’s better at coding than any human being and is tasked with developing increasingly sophisticated AI agents. At this point, Agent-1 becomes recursively self-improving: it can keep making itself smarter in ways that the people who notionally control it aren’t even capable of understanding. “AI 2027” imagines two possible futures. In one, a wildly superintelligent descendant of Agent-1 is allowed to govern the global economy. GDPs skyrocket; cities are powered by clean nuclear fusion; dictatorships fall across the world; humanity begins to colonize the stars. In the other, a wildly superintelligent descendant of Agent-1 is allowed to govern the global economy. But this time

the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours.

Afterward, the entire surface of the earth is tiled with data centers as the alien intelligence feeds on the world, growing faster and faster without end.

Not long before I arrived in the Bay Area, I’d been involved in a minor but intense dispute with the rationalist community over a piece of fiction I’d written that I’d failed to properly label as fiction. For rationalists, the divide between truth and falsehood is very important; dozens of rationalists spent several days raging at me online. Somehow, this ended up turning into an invitation for Friday night dinner at Valinor, Alexander’s former group home in Oakland, named for a realm in the Lord of the Rings books. (Rationalists, like termites, live in eusocial mounds.) The walls in Valinor were decorated with maps of video-game worlds, and the floors were strewn with children’s toys. Some of the children there—of which there were many—were being raised and homeschooled by the collective; one of the adults later explained to me how she’d managed to get the state to recognize her daughter as having four parents. As I walked in, a seven-year-old girl stared up at me in wide-eyed amazement. “Wow,” she said. “You’re really tall.” “I suppose I am,” I said. “Do you think one day you’ll ever be as tall as me?” She considered this for a moment, at which point someone who may or may not have been one of her mothers swooped in. “Well,” she asked the girl, “how would you answer this question with your knowledge of genetics?” Before dinner, Alexander chanted the brachot for Kabbalat Shabbat, but this was followed by a group rendition of “Landsailor,” a “love song celebrating trucking, supply lines, grocery stores, logistics, and abundance,” which has become part of Valinor’s liturgy:

Landsailor
Deepwinter strawberry
Endless summer, ever spring
A vast preserve
Aisle after aisle in reach

Every commoner made a king.

Alexander is a titanic figure in this scene. A large part of the subculture coalesced around his blog, formerly Slate Star Codex, now called Astral Codex Ten. Readers have regular meetups in about two hundred cities around the world. His many fans—who include some extremely powerful figures in Silicon Valley—consider him the most significant intellectual of our time, perhaps the only one who will be remembered in a thousand years. He would probably have a very easy time starting a suicide cult. In person, though, he’s almost comically gentle. He spent most of the dinner fidgeting contentedly in a corner as his own acolytes spoke over him. When there weren’t enough crackers to go with the cheese spread, he fetched some, murmuring to himself, “I will open the crackers so you will have crackers and be happy.”

Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section. “Everybody who started AI companies between, like, 2009 and 2019 was basically thinking, I want to do this superintelligence thing, and coming out of our milieu. Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.” Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.

But that race seems to have stalled, at least for the moment. As Alexander predicted in “AI 2027,” OpenAI did release a major new model in 2025; unlike in his forecast, it’s been a damp squib. Advances seem to be plateauing; the conversation in tech circles is now less about superintelligence and more about the possibility of an AI bubble. According to Alexander, the problem is the transition from AI assistants—language models that respond to human-generated prompts—to AI agents, which can operate independently. In his scenario, this is what finally pushes the technology down the path toward either utopia or human extinction, but in the real world, getting the machines to act by themselves is proving surprisingly difficult.

In one experiment, the developer Anthropic prompted its AI, Claude, to play Pokémon Red on a Game Boy emulator, and found that Claude was extremely bad at the game. It kept trying to interact with enemies it had already defeated and walking into walls, getting stuck in the same corners of the map for hours or days on end. Another experiment let Claude run a vending machine in Anthropic’s headquarters. This one went even worse. The AI failed to make sure it was selling items at a profit, and had difficulty raising prices when demand was high. It also insisted on trying to fill the vending machine with what it called “specialty metal items” like tungsten cubes. When human workers failed to fulfill orders that it hadn’t actually placed, it tried to fire them all. Before long, Claude was insisting that it was a real human. It claimed that it had attended a physical meeting with staff at 742 Evergreen Terrace, which is where the Simpsons live. By the end of the experiment, it was emailing the building’s security guards, telling them they could find it standing by the vending machine wearing a blue blazer and a red tie.

“Humans are great at agency and terrible at book learning,” Alexander told me. “Lizards have agency. We got the agency with the lizard brain. We only got book learning recently. The AIs are the opposite.” He still thinks it’s only a matter of time before they catch up. “If you were to ask an AI how should the world’s savviest businessman respond to this circumstance, they could create a good guess. Yet somehow they can’t even run a vending machine. They have the hard part. They just need the easy part that lizards can do. Surely somebody can figure out how to do this lizard thing and then everything else will fall very quickly.”

But are humans really so great at exhibiting agency? After all, Cluely managed to raise tens of millions of dollars with a product that promises to take decision-making out of our hands. AI can’t function without instructions from humans, but an increasing number of humans seem incapable of functioning without AI. There are people who can’t order at a restaurant without having an AI scan the menu and tell them what to eat; people who no longer know how to talk to their friends and family and get ChatGPT to do it instead. For Alexander, this is a kind of Sartrean mauvaise foi. “It’s terrifying to ask someone out,” he said. “What you want is to have the dating site that tells you that algorithmically you’ve been matched with this person, and then magically you have permission to talk to them. I think there’s something similar going on here with AI. Many of these people are smart enough that they could answer their own questions, but they want someone else to do it, because then they don’t have to have this terrifying encounter with their own humanity.” His best-case scenario for AI is essentially the antithesis of Roy’s: superintelligence that will actively refuse to give us everything we want, for the sake of preserving our humanity. “If we ever get AI that is strong enough to basically be God and solve all of our problems, it will need to use the same techniques that the actual God uses in terms of maintaining some distance. I do think it’s possible that the AI will be like, Now I am God. I’ve concluded that the actual God made exactly the right decision on how much evil to permit in the universe. Therefore I refuse to change anything.”

But until we build an all-powerful but distant God, the agency problem remains. AIs are not capable of directing themselves; most people aren’t either. According to Alexander, Silicon Valley venture capitalists are now in a furious search for the few people who are. “VCs will throw money at a startup that looks like it can corner the market, even if they can’t code. Once they have money, they can hire competent engineers; it’s trivially easy for anything that’s not frontier tech. They’re willing to stake a lot of money on the one in a hundred people who are high-agency and economically viable.” This shift has had a distorting effect on his own social milieu: “There’s an intense pressure to be an unusual person who will be unique and get the funding.” Since rationalists are already fairly unusual, it’s hard to imagine what that would look like. People will endure a lot of indignity to avoid being left behind without VC money when the great bifurcation takes place. Nobody wants to be part of the permanent underclass. I asked Alexander whether he thought of himself as highly agentic. “No, I don’t,” he said instantly. He told me that in his personal life, he felt as though he’d never once actually made a decision. But, he said, “It seems to be going well.”

Eric Zhu might be the most highly agentic person I’ve ever met.

When I dropped in on his office, which also serves as a biomedical lab and film studio, he had just turned eighteen. “So you’re no longer a child founder,” I said. “I know,” he said. “It’s terrible.” His oldest employee was thirty-four; the youngest was sixteen. When the pandemic began in 2020, Eric was twelve years old, living with his parents in rural Indiana. “My parents were really protective, so I didn’t get a computer until quarantine started. And then, after I got my first computer in quarantine, I was just fucking around. I was on Discord servers. I was on Slack.” Some kids drift into the wrong kind of Discord server and end up turning into crazed mass shooters; Eric found one full of tech people. “I sort of randomly got in there, and then I thought it was really fun,” he told me. Eric started marketing himself as a teen coder, even though he couldn’t actually code: he’d take $5,000 commissions and subcontract them out to freelancers in India.

His next project was more serious. “I saw this Wall Street Journal article where a lot of PE firms were buying up a lot of small businesses and roll-ups. I was like, What if I figure out a way to underwrite these small businesses?” Eric built an AI-powered tool to assign value to local companies on the basis of publicly available demographic data. Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. “I convinced my counselor that I had prostate issues so I could use the restroom,” he told me. Sometimes a drug dealer would be posted up in the stall next to him. “I was trying to figure out why they were always out of class. They stole hall passes from teachers. So I would buy hall passes from drug dealers to get out of class, to have business meetings.” Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation. “He was like, Hey, I don’t feel comfortable meeting a minor in a high school bathroom. So I showed up with a green screen.” Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric’s misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you’re a millionaire. And in a sense, it is easy. Absolutely anyone could have done the same things he did. In 2020, when Eric was subcontracting coding gigs out to the Third World, I was utterly broke, living in a room the size of a shoebox in London. I would scour my local supermarket for reduced-price items nearing their sell-by date, which meant that an alarmingly high percentage of my diet consisted of liverwurst. There was nothing stopping me from making thousands of dollars a week by doing exactly what Eric was doing. It didn’t require any skills at all—just a tiny amount of initiative. But he did it and I didn’t. Why?

In a way, Eric reminded me of some of the great scammers of the 2010s. People like Anna Delvey, a Russian who arrived in New York claiming to be a fabulously wealthy German heiress with such breezy confidence that everyone in high society simply believed her. She was fundamentally a broken person, a fantasist. She’d seen the images of wealth and glamour in magazines and fashion blogs, and constructed a delusion in which this, and not the dull, anonymous, small-town existence she’d actually been born into, was her life. For a while, at least, it worked. Her mad dreams slotted perfectly into reality like a key in a lock. Most people are condemned to trudge along in the furrow that the world has dug for them, but a few deranged dreamers really can wish themselves into whatever life they want.

Unlike Roy, Eric didn’t think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? “I think I was just bored. Honestly, I was really bored.” Did he think anyone could do what he did? “Yeah, I think anyone genuinely can.” So how come most people don’t? “I got really lucky. I met the right people at the right time.” Anyway, Eric isn’t involved with the underwriting firm or the venture-capital fund anymore. His new company is called Sperm Racing.

Last April, Eric held a live sperm-racing event in Los Angeles. Hundreds of frat boys came out to watch a head-to-head match between the effluvia of USC’s and UCLA’s most virile students, moving through a plastic maze. (There was some controversy over the footage: Eric had replaced the actual sperm with more purposeful CGI wrigglers. “If you look at sperm, it’s not entertaining under a microscope. What we do is we track the coordinates, so it is a sperm race—it’s just up-skinned.”) He’s planning on rolling the races out nationwide. Eric delivered a decent spiel about sperm motility as a proxy for health and how sperm racing drew attention to important issues. His venture seemed to be of a piece with a general trend toward obsessive masculine self-optimization à la RFK Jr. and Andrew Huberman. Still, to me it seemed obvious that Eric was doing it simply because he was amazed that he could. “I could build enterprise software or whatever,” he told me, “but what’s the craziest thing I could do? I would rather have an interesting life than a couple hundred million dollars in my bank account. Racing cum is definitely interesting.” I found Eric very hard not to like.

There was one thing I did find strange, though—stranger than turning semen into mass nonpornographic entertainment. Upstairs at Sperm Racing HQ is a lab stocked with racks of test tubes, centrifuges for separating out the most motile sperm from a sample, and little plastic slides containing new microscopic racecourses for frat-boy cum. Downstairs is the studio and editing suite. A third of Eric’s staff work on videos, producing a seemingly endless stream of viral content about sperm racing. A lot of the time, though, the connection is tenuous. One video was a stylized version of Eric’s life story, featuring expensively rendered CGI explosions set to Chinese rap. Another was a parody of Cluely’s viral blind-date ad. Like Cluely, Sperm Racing seemed to be first and foremost a social-media hype machine. As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online.

On August 5, 2025, OpenAI’s CEO, Sam Altman, posted on X, “we have a lot of new stuff for you over the next few days! something big-but-small today. and then a big upgrade later this week.” An X user calling himself Donald Boat replied, “Can you send me $1500 so I can buy a gaming computer.”

This was the start of an extended harassment campaign against the most powerful figure in AI. One day Altman posted:

someday soon something smarter than the smartest person you know will be running on a device in your pocket, helping you with whatever you want. this is a very remarkable thing.

Donald Boat fired back:

Just got chills imagining you putting your credit card number, CVV, & expiry date into an online retailer’s digital checkout kiosk and purchasing a gaming computer for me.

Altman: “we are providing ChatGPT access to the entire federal workforce!”

Donald Boat:

I would love for you to wheel me around the Santa Clara Microcenter in a wheelchair like an invalid while I clicketyclick with a laser-pointer the boxes of the modules of the gaming PC you will purchase, assemble, & have shipped to my mother’s house.

Altman: “gpt-oss is out! we made an open model that performs at the level of o4-mini and runs on a high-end laptop (WTF!!)”

Donald Boat:

Sam. You, me. The Amalfi Coast. ME: Double fernet on the rocks, club soda to taste.

YOU: One delightfully sweetbitter negroni, stirred 2,900,000,000 revolutions counter-clockwise, one for each hertz of the NVIDIA 5090 in the gaming PC you will buy and ship to my house.

That last one did the trick. “ok this was funny,” Altman replied. “send me your address and ill send you a 5090.”

This was the beginning of Donald Boat’s reign of terror. He began publicly demanding things from every major figure in the tech industry. Will Manidis, who ran the health-care-data firm ScienceIO, was strong-armed into supplying a motherboard. Jason Liu, an AI consultant and scout at Andreessen Horowitz, had to give tribute of one mouse pad. Guillaume Verdon, who worked on quantum machine learning at Google and founded the “effective accelerationism” movement, was taxed one $1,200 4K QD-OLED gaming monitor. Gabriel Petersson, a researcher at OpenAI, posted on X: “people are too scared to post, nobody wants to pay the donald boat tax.” Donald Boat appeared demanding an electric guitar. He was becoming a kind of online folk hero, expropriating the expropriators, conjuring trivial things from tech barons in the way they seemed to have conjured enormous piles of money out of thin air. He started posting strange, gnomic messages. Things like “I am building a mechanical monstrosity that will bring about the end of history.” Images of the fasting, emaciated Buddha. A prominent crypto influencer who goes by the alias Ansem received an image of the dharmachakra. “Turn the wheel,” read Donald Boat’s message.

In a way, Donald Boat had achieved the dream of every desperate startup founder in the Bay Area. He had propelled himself to online fame, and used it to relieve major investors of their money. But somehow he’d managed to do it without ever once having to create a B2B app. He was a kind of pure viral phenomenon. Cluely might have deployed a few provocative stunts to raise millions of dollars for a service that didn’t really work and could barely be said to exist, but Donald Boat did away with even the pretense. He’d generated a brutally simplified miniature of the entire VC economy. People were giving him stuff for no reason except that Altman had already done it, and they didn’t want to be left out of the trend.

Donald Boat’s real name isn’t actually Donald Boat, but since so much of his being seems to be wrapped up in the name and his dog-headed avatar, it’s what I’ll keep calling him. He wanted to meet at a Cheesecake Factory. This was part of his new project, which was to review absolutely everything that exists in the universe. He was starting with chain restaurants. He’d already done Olive Garden. His review begins with Giuseppe Garibaldi,

on the beach at Marsala, bootsoles in the saltwhite shallows, wind in his beard gristle. Behind him, his not-quite One Thousand Redshirts disembarking, all rusty rifles and stalebiscuit crotch sweat.

The lasagna summons visions of “smegma, Vesuvius, blood thinner marinara, the splotchy headpattern of a partisan, brainblown in his sleep.” He likes the Joycean compound. Shortly before I arrived at the Cheesecake Factory, he texted to let me know that he’d been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time.

Donald was twenty-one, terrifyingly tall, and intense. His head lolled from side to side as he chattered away, jumping from one thought to the next according to a pattern known only to himself. At one point he suddenly decided to draw a portrait of me, which he later scanned and turned into a bespoke business card.

He seemed to have a constant roster of projects on the go. He’d sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. “I made a bunch of jokes about sending all their poker money to China,” he said, “and they were not pleased.” He’d had a plan to get into the Iowa Writers’ Workshop and then get kicked out. He was trying to read all of world literature, starting with the Epic of Gilgamesh. Was his Sam Altman gaming-PC escapade similar? Had he actually expected to get anything? “I really, really wish I was a tactical mastermind, that there was an endgame. Really I was just having a laugh. A chortle, if you will. I wasn’t thinking too hard about it. I don’t use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets.” As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. “They have too much money and nothing going on. They have no swag, no smoke, no motion, no hoes. That’s all you need to know.” Ever since his big viral moment, he’d been suddenly inundated with messages from startup drones who’d decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.

I told Donald the theory I’d been nursing—that he and Roy Lee were, in some sense, secret twins, viral phenomena gobbling up money and attention. I wasn’t sure if he’d like this. But to my surprise, he agreed. “I’m like Roy. I’m like Trump. We have the same swaggering energy. There is a kind of source code underlying reality, and this is what we understand. Your words have to have wings. Roy and I both know that social media is the last remaining outlet for self-creation and artistry. That’s what you have to understand about zoomers: we’re agents of chaos. We want to destroy the whole world.” Did Donald consider himself to be highly agentic? “We need to ban the word ‘agency.’ I’m a dog.”

By now we’d ingested the most calorific cheesecake on the menu, the Ultimate Red Velvet Cake Cheesecake, which clocked in at 1,580 calories for a single slice. It was closing in on midnight, I was not feeling good, and Donald’s phone was nearly dead. He suggested that we go to the Cluely offices so he could charge it. “They’ll let me in,” he said. “They’re my slaves.”

Roy was still up. He didn’t seem particularly surprised to see me. He and most of the Cluely staff were flopped on a single sofa. All these people had become incredibly rich; previous generations of Silicon Valley founders would have been hosting exorbitant parties. In the Cluely office, they were playing Super Smash Bros. Did they spend every night there? “We’re all feminists here,” Roy said. “We’re usually up at four in the morning. We’re debating the struggles of women in today’s society.”

Somehow the conversation turned to politics. Roy advanced the idea that there hadn’t been a cool Democrat since Obama. One of his employees, Abdulla Ababakre, jumped in. “As a guy from a Communist country, let me just say: Obama is a scammer. I’m much more a Republican.” Abdulla is a Uighur. Before coming to San Francisco, he worked for ByteDance in Beijing. His comment caused an instant uproar. “Get him out of here!” Roy yelled. “I love Obama,” he told me. “I love Trump, I love Hillary. I have a big heart, bro, my bad.” Abdulla just grinned. His proudest achievement was an app that freezes your phone until you’ve read a passage from the Qur’an. According to him, “Roy in his values is very much Muslim, the most Muslim I know.”

I didn’t know if I believed that, but there were still some things I didn’t understand about Roy. He was clearly a highly agentic person, but what was all this agency being used for? What did he actually want?

According to Roy, he has three great aims in life: “To hang out with friends, to do something meaningful, and to go on lots of dates.” He said he went on a date every two weeks, which was clearly meant to be an impressive figure. Cluely employees are encouraged to date a lot; they can put it all on expenses. They didn’t seem to be taking up the opportunity to any greater degree than their founder. I spoke to Cameron White, who had been Roy and Neel’s first hire at the company. As he spoke, he stared at a point roughly forty-five degrees to my left and swung his arms. He didn’t date. “I’m focused on becoming a better version of myself first. Becoming, like, higher weight, more healthy, more knowledgeable.” He didn’t think he had anything to offer a woman yet. I said that if someone loves you, they don’t really care so much about your weight. “I feel like that’s cope. I don’t think there’s such a thing as love. It’s what you can provide to a woman. If you can provide good genetics, that’s health or whatever. If you can provide resources, if you can provide an interesting life. If you truly love the girl, you need to become the best version of yourself.” Cameron was twenty-five years old but he wasn’t there yet. He would not try to meet someone until he had made himself perfect.

For Roy, meanwhile, dating actually seemed to be a means to an end. “All the culture here is downstream of my belief that human beings are driven by biological desires. We have a pull-up bar and we go to the gym and we talk about dating, because nothing motivates people more than getting laid.” He was interested in physical beauty too, but only because “the better you look, the better you are as an entrepreneur. It’s all connected and beauty is everything. A lot of ugly men are just losers. The point of looking good is that society will reward you for that.” What about other kinds of beauty? Music, for instance? Roy had played the cello as a child. Did he still listen to classical music? “It doesn’t get my blood rushing the same way that EDM will.” His preferred genre was hardstyle—frantic thumping remixes of pop songs by the likes of Katy Perry and Taylor Swift. Is that the function of music, to get your blood rushing? “Yeah. I’m not a big fan of music to focus on things. I think it disturbs my flow. The only reason I will listen to music is to get me really hyped up when I’m lifting.” The two possible functions of music were, apparently, focus and hype. Everything for the higher goal of building a successful startup. What about life itself? Would Roy die for Cluely? “I would be happy dying at any age past twenty-five. After that it doesn’t matter, bro. If I live, I have extreme confidence in my ability to make three million dollars a year every year until I die.”

What about literature? The last time Donald had dropped in on his slaves at Cluely, he’d gifted them two Penguin Classics: Chaucer’s Canterbury Tales and Boccaccio’s Decameron. The books were still lying, unread, where he’d left them. He suggested that Roy might find something more valuable than dying for Cluely if he actually tried to read them. Roy disagreed: “I do not obtain value from reading books.” And anyway, he didn’t have the time. He was too busy keeping up with viral trends on TikTok. “You have to make the time,” Donald and I said, practically in unison. “It makes your life better,” I said. “Why don’t you go to Turkey to get a hair transplant?” Roy snapped. “That would make your life better.” “I don’t care about my hair,” I said. “Well,” said Roy, “I don’t care about the Decanterbury Tales.”

Donald was practically vibrating when we left Cluely. “Dude, he’s just a scared little boy,” he said. “He’s scared he’s not doing the right thing, and because of the fucked-up world we live in, people who should be in The Hague are giving him twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen.” He sighed. “I just want Zohran’s nonbinary praetorians to march across the country and put all these guys in cuffs.” I found it hard to disagree. It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency, when agency essentially meant whatever it was that was afflicting Roy Lee. Unlike Eric Zhu or Donald Boat, Roy didn’t really seem to have anything in his life except his own sense of agency. Everything was a means to an end, a way of fortifying his ability to do whatever he wanted in the world. But there was a great sucking void where the end ought to be. All he wanted, he’d said, was to hang out with his friends. I believed him. He wanted not to be alone, the way he’d been alone for a year after having his offer of admission rescinded by Harvard. For people to pay attention to him. To exist for other people. But instead of making friends the normal way, he’d walked up to strangers and asked whether they wanted to start a company with him, and then he built the most despised startup in San Francisco. He was probably right: he could count on making a few million dollars every year for the rest of his life, even after Cluely inevitably crashes and burns. He would never want for capital, but this did not seem like the most efficient way to achieve his goals.

I walked back to my hotel, past signs that said things like one ping, shipped and ai agents are humans, too. My scalp was tingling. I’d lied when I’d told Roy that I didn’t care about my hair. Of course I care about my hair. Every day I grimace in the mirror as a little more of it vanishes from the top of my head. Whenever someone takes a photo of me from above or behind, I wince at the horrifying glimpse of pale, naked scalp. But I’d never done anything about it. I’d just watched and whinged and let it happen.

My encounter with the highly agentic took place last September. In October, Roy Lee spoke at something called TechCrunch Disrupt, where he admitted that chasing online controversy had so far failed to give Cluely what he called “product velocity.” Around the same time, he led a major rebrand. Cluely would now be in the business of making “beautiful meeting notes” and sending “instant follow-up emails.” A lot of these functions are already being introduced by companies like Zoom; the main difference is that, by all accounts, Cluely still doesn’t consistently work. By the end of November, Cluely announced that it was leaving San Francisco and moving to New York. In December, the company celebrated the move with a party at a Midtown cocktail bar and lounge called NOFLEX®. In photos, it appeared as though the gathering was attended almost entirely by men in white T-shirts not drinking anything. I was in New York at the time. I didn’t go. 



Comments

  • By voxleone 2026-02-20 15:37 (19 replies)

    The folks who keep the power grid running, write compilers, secure the internet, and design dependable systems don’t get viral fame, but their contributions are far more critical. That imbalance is no small thing; it shapes who gets funded, who feels validated, and who decides to pursue a challenge that doesn’t promise a quick TikTok moment or a crypto-style valuation bump. A complex technological civilization depends on people willing to go deep, to wrestle with fundamentals, to think in decades rather than funding cycles. If the next generation of capable minds concludes that visibility is more rational than depth, we’re not just changing startup culture; we’re eroding what that civilization runs on. You can survive a lot of hype. You can’t survive a steady erosion of mastery.

    • By abraxas 2026-02-20 17:32 (2 replies)

      > You can’t survive a steady erosion of mastery.

      That sounds like the onset of a certain kind of dark age. Eventually the shiny bits, too, will fall off when the underlying foundation crumbles. It would be massively ironic if the age of the "electronic brains" brought about the demise of technological advancement.

      • By MagicMoonlight 2026-02-20 17:55 (9 replies)

        Just look at current software.

        Windows is maintained by morons, and gets shitter every year.

        Linux is still written by a couple of people.

        Once people like that die, nobody will know how to write operating systems. I certainly couldn’t remake Linux. There’s no way anyone born after 2000 could, their brains are mush.

        All software is just shit piled on top of shit. Backends in JavaScript, interfaces which use an entire web browser behind the scenes…

        Eventually you’ll have lead engineers at Apple who don’t know what computers really are anymore, but just keep trying to slop more JavaScript in layer 15 of their OS.

        • By com2kid 2026-02-20 19:19 (1 reply)

          I was once one of the mush brained morons hired to work at Microsoft.

          I think I did ok. Would I compare myself to the greats? No. But plenty of my coworkers stacked up to the best who'd ever worked at the company.

          Do I think MS has given up on pure technical excellence? Yes, they used to be one of the hardest tech companies to get a job at, with one of the most grueling interview gauntlets and an incredibly high rejection rate. But they were also one of only a handful of companies even trying to solve hard problems, and every engineer there was working on those hard problems.

          Now they need a lot of engineers to just keep services working. Debugging assembly isn't a daily part of the average engineer's day to day anymore.

          There are still pockets solving hard problems, but it isn't a near universal anymore.

          Google is arguably the same way, they used to only hire PhDs from top tier schools. I didn't even bother applying when I graduated because they weren't going to give a bachelor degree graduate from a state school a call back.

          All that said, Google has plenty of OS engineers. Microsoft has people who know how to debug ACPI tables. The problem is that those companies don’t necessarily value those employees as much anymore.

          > I certainly couldn’t remake Linux

          Go to the OSDev wiki. Try to make your own small OS. You might surprise yourself.

          I sure as hell surprised myself when Microsoft put me on a team in charge of designing a new embedded runtime.

          Stare at the wall looking scared for a few days then get over it and make something amazing.

          • By TacticalCoder 2026-02-20 23:53 (1 reply)

            > Do I think MS has given up on pure technical excellence?

            I was there in the DOS days. I was there when Windows 3.1 came out (others too but I didn't use them). I was there when Windows 95 came out.

            Microsoft has never been about "pure technical excellence". We had wonderful machines (Unx ones and then stuff like the Atari ST / Commodore Amiga / Archimedes) and amazing OSes (including Unx on workstations) and Microsoft nearly destroyed everything with the endless turds it produced that ran on cheap beige PCs. Not excellence. Mediocrity. Cheap, but mediocre.

            At some point 95% of all machines sold with an OS had Windows and the times were incredibly dark. Thankfully things changed and now Windows is only present on something like 11% of all devices sold yearly that have an OS.

            We dodged a big one and many of us shall never ever forget how slow, insecure, horrible and mediocre the products of that company were.

            • By com2kid 2026-02-21 3:19

              > Microsoft has never been about "pure technical excellence". We had wonderful machines (Unx ones and then stuff like the Atari ST / Commodore Amiga / Archimedes) and amazing OSes (including Unx on workstations) and Microsoft nearly destroyed everything with the endless turds

              Microsoft's goal was to make machines everyone could afford. Their mission statement was a desktop in every home and they pulled it off.

              They didn't pull it off by making an OS that needed a boat load custom chips (Amiga), or that required a huge beefy system to run (OS2, Unix).

              They did it by making compromises that kept costs down and made computers accessible. They pushed for multimedia standards when the technology was appropriately matured, and their consumer OSes evolved in maturity as Moore's law progressed. Even then everyone complained about "ever growing" system requirements, especially when the move to XP happened, and then again when Vista came out with its improved security model.

              Those fancy slick Sun OS boxes cost a fortune compared to a Windows box of the same time. Sure the Windows box crashed, but as a kid growing up in a working poor family in the 90s I was able to afford Microsoft's imperfect OS, because they had purposefully built an entire ecosystem that was designed to be affordable.

              Microsoft pitted every PC OEM against each other in a race to the bottom, until margins approached and then fell below 0 for a new PC.

              I've used tons of different systems. Thanks to the efforts of Valve, desktop Linux is now usable, but it still has a thousand stupid bugs, many of which I wouldn't have tolerated on Windows 20 years ago. MacOS is very black-box-ish, and despite daily-driving it for six or so years now, the machine doesn't feel like it is "mine" in the same way a Windows 7 or Windows 2000 machine did. The old 16-bit graphics powerhouse machines were sexy, but those custom chips didn't age well and there was no way they could compete with an open standard like the PC.

              Perfect is the enemy of the good. Microsoft very much made software that was good enough, but the truth is good enough is also admirable. Good enough is affordable, it is quick to market, it is adaptable, it is usable by the masses in a way that perfect isn't.

        • By RealityVoid 2026-02-20 18:05 (1 reply)

          > I certainly couldn’t remake Linux. There’s no way anyone born after 2000 could, their brains are mush.

          This is certainly false. There are plenty of young people who are incredibly talented. I’ve worked with some of them. And you can probably name some from the open-source projects you follow.

          • By iugtmkbdfil834 2026-02-20 19:23

            I have some level of faith here. The kids you mention may not be visible online, but they certainly deliver. Honestly, it's not a good example because that name is well known now, but Gerganov came out of the blue for me. I'm not saying we don't lose more of them to social media and whatnot, but they are there.

        • By piperswe 2026-02-20 18:08 (2 replies)

          Young people's brains have always been mush, according to the older generation. Your brain is mush according to those older than you. The term for this is juvenoia, and it's as old as humanity.

          • By coldtea 2026-02-21 3:18

            Regardless of what old people say about young people, who are after all still developing and not yet so bright or world-wise, there's also an actual turning to mush going on. TV, social media, and now AI are all contributing to the further mushing.

            There's also some specific, measurable "turning to mush" going on, like declining literacy rates and falling IQ scores (the slowing or reversal of the Flynn effect).

          • By saulpw 2026-02-20 18:35

            And yet, when they worried about what television would do to a generation of brains, they were right. The Boomers, as a generation, never became wise, and their brains are mushier than ever.

        • By samiv 2026-02-20 19:40

          Nah, this isn't right. We also have access to a ton of information, even about arcane things such as writing an x86 boot sequence in real mode or writing boot loaders. More now than ever before.

          In fact, today on GitHub alone you can find hobbyist OSes that are far, far more advanced than Linus's little weekend turd ever was originally.

          Their success is not gated by technical aspects.

        • By holoduke 2026-02-20 19:41 (1 reply)

          You should go outside the "web" world: automotive, medical, or heavy industry. You will see that there are plenty of low-level developers and engineers out there. Yes, even ones born after 2000.

          • By bitwize 2026-02-20 22:51 (1 reply)

            And they get paid squat compared to their brainrotted silly-valley webshit-slinger counterparts. Can we pay these fine folks, as well as people in professions like teaching, more?

            • By anon291 2026-02-20 23:22 (1 reply)

              I mean... no? I've worked on chips for basically my entire career, and I get paid more than when I briefly worked in web stuff. Not sure where this idea has come from. The previous startup I worked for just got acquired for tens of billions of dollars, which is a higher valuation than those of my friends who have gone through acquisitions in the web-dev/SaaS space.

              I know this forum is highly skewed towards Saas/JS/web stuff, but there's an entire industry of deep tech software and the payouts are excellent.

              • By bitwize 2026-02-21 0:22 (1 reply)

                When I lived in the northeast, every time I saw an opportunity for embedded work, or traditional UI work (Qt, etc., almost invariably for things like screens on medical or scientific equipment and almost never for desktop applications) it paid 1/2 to 2/3 the going rate for a midlevel webshit engineer. Maybe I just wasn't looking in the right places.

                • By anon291 2026-02-21 4:39 (1 reply)

                  Go to Silicon Valley VC-backed firms. That's my general advice for any sort of tech work that you want to be paid well for. Tech is, overall, not paid well. In general, most work is not. VCs are rich, and the trickle-down effect is large in the places where they operate. Many people are resistant to moving to California because of 'cost of living' or some explanation like that. This is genuinely shortsighted. There is no better place in the world to start a tech career (or really any corporate career), just because of how much money there is.

                  People confuse the 'webshit' engineers (your words, not mine) with being interested in technology. They're not. They're interested in money. I am too; I just happen to be interested in deep-tech stuff as well. A lot of people in tech don't seek compensation and then complain about it. Always go for compensation, startups, and high-risk ventures (i.e., go into a good business). That's my advice.

                  > Maybe I just wasn't looking in the right places.

                  Did those places have the potential to IPO or exit for multiple billions of dollars? If not, then yes, you were looking in the wrong place.

                  • By sdkfjhdsjk 2026-02-21 22:44

                    The question I ask myself is, do I want to live in California? The answer is no. Not even for a short time.

                    • By anon291 2026-02-21 23:00

                      Well, then there's the answer as to how much money you're going to make. The US is a rich country because a few states are rich, not because everywhere is rich.

                      • By sdkfjhdsjk 2026-02-21 23:13

                        The U.S. is certainly not a wealthy country, and California most definitely is not, when measured by miles of homeless camps, crime, hobos, pollution.

                        There's a reason why the crowds are moving out of California and similar places and into my region over the past 5-6+ years.

                        The country as a whole is covered in abandoned factories and homes. Not exactly the picture of a healthy and prosperous nation. Turns out you can't print prosperity.

                        I'm fine where I'm at, thanks.

        • By oytis 2026-02-20 18:26

          > Linux is still written by a couple of people.

          How is that? It's easily the software project with the largest number of contributors ever (I don't know if that's true, but it could be).

          • By coldtea 2026-02-21 3:19

            Most of them work on drivers for devices, not on Linux proper.

        • By anon291 2026-02-20 23:20

          I've worked with many people born post-2000 who could write an operating-system kernel. Hell, I have one brewing right now. It's not rocket science. The machine language parsed by the chip is described in exquisite detail in any processor manual.

        • By HoldOnAMinute 2026-02-20 18:09

          Windows is being deliberately enshittified by rent-seekers.

          Rent-seeking and promo-seeking are the only motivations of the people with the power.

          None of that class wants to make a better product, or make life better or easier for the people.

        • By noduerme 2026-02-24 4:54

          I'm personally not comfortable with the widespread argument (not yours) that moving from a shit-on-shit scripting language to using an LLM to write code is just a matter of degree, or just one more layer of abstraction. I think writing code with an LLM severs the logic chain you may have to follow, and hopefully can follow, to get back to the basic principles and root causes you need to understand what your code is doing.

          But the lazy (and wrong) belief, held by people not committed to exacting standards in their engineering, that AI is just another layer of abstraction or another scripting language actually obscures a much more unpleasant fact: performance, as far as the managerial class was concerned, was never about getting the best performance. It was always about whatever was just enough.

          We as coders used to prioritize performance because hardware was so limited and we wanted to squeeze the most out of every cycle, every 1Kb of RAM. For some of us, that habit will never die, because we look at a new piece of hardware and realize how much more we can make it do.

          But pre-AI slop of backends with huge supply chains and Electron as a frontend arose because memory and compute had become so cheap that acceptable performance required less and less optimization.

          That doesn't mean that some of us didn't maintain a niche in making things optimized, but for the past twenty years or so there's been a whole generation of engineers whose priority has been speed of development. And from the perspective of a company that treats engineers as disposable cogs and prioritizes frameworks and assumes Moore's Law, why not?

          AI just takes that to the next level. Take the entire chain of existing React slop and create a Markov chain to regurgitate parts of it on cue. And let's be honest: 95% of companies don't need to forge anything particularly new; they just need to cobble existing parts together.

          I said as much about 15 years ago, talking to CRUD coders who hated their jobs: you're in the wrong business if you're not getting joy out of creating and solving new problems. So in a way, AI just gives everyone who only wanted shitty software the shitty software they deserve? I don't know. I haven't finished thinking about it.

      • By mooreds 2026-02-22 22:56

        If you like dark SF, I'd suggest "Pump Six" by Paolo Bacigalupi. It's part of a collection of short stories, but it addresses this very idea.

    • By iugtmkbdfil834 2026-02-20 16:08

      I thought about it recently. Not that long ago, it was perfectly reasonable to be as invisible as possible. But now, this strategy is not only not easy, but also has drawbacks when compared to being visible (and understood as useful by the masses). I don't like it. It effectively means we all need PR management.

      • By keiferski 2026-02-20 16:46

        This is one consequence of removing all gatekeepers. Previously you’d only need to be known by your manager and his manager, or in the arts, by a small group of tastemakers.

        Nowadays there are no tastemakers, and thus you need to be a public figure in order to even find your audience / niche in the first place.

      • By mjr00 2026-02-20 16:28

        > Not that long ago, it was perfectly reasonable to be as invisible as possible. But now, this strategy is not only not easy, but also has drawbacks when compared to being visible (and understood as useful by the masses).

        That's always been the case, though, depending on what you're trying to do. If you want to be Corporation Employee #41,737, or work for the government, you don't need a "personal brand"; a small social network that knows your skills is good enough. If you're in your early 20s and trying to get 9 figures of investment in your AI startup, yeah, you need to project an image, as Roy from the article is doing.

        It's amplified a bit in the social media world, but remember that only ~0.5% of people actively comment or post on social media. 99.5% of the world is invisible and doing just fine.

      • By rglover 2026-02-20 16:17

        That's a force you move away from, not towards.

      • By Manfred 2026-02-20 16:27

        Maybe publicly invisible, but a personal network and resume have always been important in a career.

    • By rglover 2026-02-20 16:15

      This idea seems to be lost on a lot of people. It's a shame to see mastery (and, by extension, quality) becoming an anachronism, and frankly, it's terrifying. There's a certain hubris associated with all of this that seems to be blinding people to the reality that, no, you actually do want humans around who know how things are put together and how they work.

      That it's being dismissed as a "nice to have" is like watching people wave flags while strapping C4 to civilizational progress.

    • By Buttons840 2026-02-20 21:07

      An example of this I've personally seen is a friend who works on COBOL mainframes at a bank.

      He writes COBOL and maintains a banking system that keeps the world running. Literally like a billion people die if the system he maintains fails. I maintain a VC funded webpage that only works half the time. I make more than him, a lot more.

      • By dyauspitr 2026-02-20 21:13

        > Literally like a billion people die if the system he maintains fails.

        This has to be an exaggeration.

        • By altmanaltman 2026-02-20 21:34

          I can personally attest that I'll die if my bank's COBOL mainframe fails. Really got a lot riding on this.

        • By Buttons840 2026-02-20 22:08

          If the banking system failed? Would be pretty bad...

      • By itronitron 2026-02-20 21:28

        You should ask your friend what they do with all of the half cents that are floating around in the banking system.
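
        The half-cent joke comes from rounding: at integer-cent precision, divisions rarely come out even, so a remainder is left over and has to be parked somewhere. A minimal sketch in Python's `decimal` module (the amounts are invented for illustration):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Splitting $100.00 of interest among 3 accounts at integer-cent
# precision: each share is rounded, and the shares don't sum to the total.
total = Decimal("100.00")
share = (total / 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
remainder = total - share * 3
print(share, remainder)  # 33.33 per account, 0.01 unaccounted for
```

        Scaled up to millions of transactions, those leftover fractions are exactly what the classic "salami slicing" scheme skims.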

    • By agentcoops 2026-02-21 2:01

      I agree with your fundamental point. However, I don't think a steady erosion of mastery is the only way these next years can go, even if it looks the most likely at present. Supposing LLMs or whatever future architecture surpass even the greatest human minds in intelligence, why is that situation fundamentally different from living in a world with Einstein, i.e. a level of mastery I'll never reach before the end of my life? As one interested in the depths, I prefer to live in a world with peaks ever greater than myself[0]---it doesn't prevent me from going as deep as I can, inspired by where they've reached, and doing the things that matter to me.

      Turing's view, in fact, is similar: "There would be great opposition [to AI] from the intellectuals [read programmers in the context of this thread] who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits."

      [0] Thomas Bernhard's The Loser is a fantastic account of the opposite standpoint---of the second best piano student, who cannot stand existing in a world with Glenn Gould.

    • By zer00eyz 2026-02-20 16:29

      > The folks who keep the power grid running ...

      I find this a great choice for an opener. If linemen across the nation went on strike, it would be a week before the power was off everywhere. A lot of people seem to think the world is simple; a reading of 'I, Pencil' would go far to enlighten them as to how complicated things are.

      > secure the internet...

      Here, again, are we doing a good job? We keep stacking up turtles, layers and layers of abstraction rather than replace things at the root to eliminate the host of problems that we have.

      Look at Docker, look at Flatpaks... We have turned these into methods to "install software" (now with added features) because it was easier to stack another turtle than it was to fix the underlying issues...

      I am a fan of the LLM-derived tools; I use them every day and love them. I don't buy into the AGI hype, and I think it is ultimately harmful to our industry. At some point we're going to need more back-to-basics efforts (like systemd) to replace and refine some of these tools from the bottom up, rather than adding yet another layer to the stack.

      I also think that agents are going to destroy business models: cancel this service I can't use, get this information out of this walled garden, summarize the news so I don't see all the ads.

      The AI bubble will "burst", much like the dot-com one did. We're going to see a lot of interesting and great things come out the other side. It's those with the "agency" and "motivation" to make those real foundational changes who are going to find success.

    • By deadbabe 2026-02-20 20:47

      Imagine a spaceship hurtling through space toward some destination unknown to its passengers. The systems that maintain the ship were all masterfully designed eons ago, and the generations of passengers have no idea how they work, but the creators made sure to make them self-maintaining in perpetuity. The passengers don't even think about the systems, or even have awareness of them; the knowledge of their construction has long been lost. This is the future of technology: the spaceship is Earth.

      • By skirmish 2026-02-21 7:56

        Without any clue, they mess with the air-conditioning system and all get baked. No happy ending here.

    • By socalgal2 2026-02-20 19:48

      Has it ever been any different? In school, the majority of kids just wanted to have fun. As one example, in 9th grade I took a "yearbook class". This was a long time ago (no idea if they still do yearbooks), but I'm old, and this was before desktop publishing; it was 1979. In any case, of the 30 kids in the class, ~3 of them did all the work. The others couldn't or wouldn't follow the print company's instructions for layout.

      Maybe it will be worse now but I kind of feel like the 90% is just more visible than it used to be.

      • By tqian 2026-02-23 2:51

        I teach computer science at a public university. Every semester I have kids who come to class but never turn in any homework. They don't withdraw, either. I'm literally forced to fail them because I have nothing to grade them on.

    • By loss_flow 2026-02-20 19:04

      The original system that created those folks was also quite hype-driven. I think you need more signal than "is there a lot of hype" to determine whether the system is broken.

    • By hdtx54 2026-02-20 16:35

      You think the power grid fell out of the head of some master craftsman thinking in decades? They don't teach the history of science, for various reasons, but it's basically a ledger of how overrated three-inch chimp-brain intelligence is. The power grid is a thing of beauty. Today. But the path to that beauty is one train wreck after another. Boiler explosions that kill hundreds. Wiring that burns down towns. Transformers that cook themselves and everyone around them. Hurricanes that blow half the grid into the sea in five minutes. Etc., etc. We learn things the hard way, and always have. There was never any master plan. Beauty happened in spite of it all, with huge hidden costs that only historians tabulate, and very few have the time and luxury to study them. Individual mastery is not magic, because the complexity and unpredictability of the universe is far more than one three-inch chimp brain can fully comprehend or ever handle. But we create more problems by pretending that the limits on what chimps can do don't exist. Look up the theory of bounded rationality.

      • By bee_rider 2026-02-20 17:02

        Anyway, the original "power grid" guy was not some master craftsman or engineer; he was the original STEM influencer: Edison. He also popularized short videos.

        • By foruhar 2026-02-20 18:16

          Tesla was the real power-grid guy. The scope of his inventions, from the generators at Niagara Falls to the transformers to the motors, is pretty impressive. More so given that the patents for radio transmission (originally issued to Marconi) were eventually credited to him.

      • By moritzwarhier 2026-02-20 17:58

        Very valuable point!

        In addition to the limits of human planning and intellect, I'd also add incentives:

        as cynical as it sounds, you won't get rewarded for building a safer, more robust, and more reliable machine or system until it is agreed that the risks or problems you address actually occur, and that the cost of prevention actually pays off.

        For example, there would be no insurance without laws and governments, because no person or company would ever pay into a promise that has never been kept.

      • By anon291 2026-02-20 23:19

        Exactly. The advent of electricity was seen as just as much of a threat to everyone as AI is today. The advent of the internet was seen similarly. In each era, those at the forefront of the technology that would fundamentally change the world were castigated as 'psychosexual' deviants who did not understand the common man. Guess who had the last laugh?

        It's not even limited to modern technology. If you go talk to certain grievance-driven individuals from tribal backgrounds (for lack of a better term) who have produced nothing for the last 10000 years, they will levy similar accusations against the very institutions that are providing them with healthcare their ancestors could only have dreamed of. In some areas, even agriculture is seen as suspect. It's ridiculous.

        It's scary to me how both sides of the American political aisle have suddenly turned anti-tech and are buying into the same arguments. Gross.

      • By HoldOnAMinute 2026-02-20 18:10

        It would all be undergrounded and made resilient, if it weren't for perverse incentives.

        • By devin 2026-02-20 19:31

          This is a tradeoff. There is value in being able to do upgrades to lines above ground. Underground is not automatically better. Like most things, it depends.

    • By gamerson 2026-02-20 17:14

      Just wanna say, I love this paragraph so much that I created an HN account just to upvote it.

    • By LearnYouALisp 2026-02-20 17:29

      Have you seen "Tech Ingredients"? People like that, and the Dutch scientist/engineer who runs "Huygens Optics".

      • By 0_____0 2026-02-20 17:32

        I love Huygens Optics, but the mastery of one rather old Dutch man isn't really much of a counterexample when we're talking about the generation coming up behind us.

    • By MarceliusK 2026-02-20 18:44

      The scary part is that you can't just "hire mastery" on demand. You have to grow it.

    • By artk42 2026-02-21 11:12

      I think it's the opposite. The general population has never had much in common with outstanding engineers, throughout history. It's hard to say that the VC bubbles of recent decades really increased the number of outstanding engineers; it was just an overall increase in prosperity, while hot capital additionally incentivized quite terrible get-rich-quick schemes, effectively cancelling out that freshly activated dev capital. So now AI takes the monopoly on dev capital back from big business, and the general youth population is weird, like it always has been, but in reality they drink, smoke, and do drugs less, and sometimes their values are better than ours; they just need some more time to polish. I think we are not doomed.

      P.S. But these Chinese robots are really scary.

    • By anon291 2026-02-20 23:16

      I mean, I'm literally a compiler engineer who has worked on dependable systems in the past, and I think this article is a load of crock, to be honest. As is usual for many Americans (a great disease in the American mind, IMO), they greatly value the random lunatic over the person actually doing something. Calling driven people out for having strange 'psychosexual neuroses' is just shaming them without any evidence. What's wrong with having a drive to do things? That is literally what America was set up to do. It's hard to read these sorts of critiques as anything other than racism against the latest class of American immigrant (mainly Asians), who are driven not to fall into the very poverty their parents sought to escape. Yes, if the choice is between becoming dirt poor and being well off by pursuing success, then you are going to be highly motivated to take the route of success.

      I'm glad you appreciate the contributions of compiler engineers, but seeing as my current job is writing compilers for AI chips... I am proud every time I see someone use AI in their business, in their life, etc., because it's my small contribution to the ever-growing American economy and the forward march of human progress.

      I'm also so tired of people making fun of techbros. I'm glad techbros exist. They actually make the world a novel place to live in. People who want to go back to living in the Dark Ages should go move in with the Amish. The sudden turnaround of tech workers (supposedly paragons of human progress) into unquestioning Luddites is disappointing.

    • By synchronousq 2026-02-21 23:56

      And what value has the current state of AI added to society in any meaningful way? Truly? Even as someone who knows the space very well and has published several top-tier ML papers, I cannot help but conclude that the primary destination of the current tech sector and Silicon Valley is the ruthless exploitation of the rest of society.

      Taking a sober look at the state of software, we observe a few things.

      The services offered by modern software to users have, as a whole, remained largely the same over the past ~5 years. The state of software quality is in rapid decline, with enshittification and rent-seeking running extraordinarily rampant. Software security is in the same disaster state it has been in for the past 20 years: software resilience is stagnant, governments and private institutions stockpile vulnerabilities, and security researchers and auditors can consistently find new ones. The rest of American society, outside the tech sector, is currently facing a nosedive in its standard of living, and it clearly has not benefited from the tech sector's financial proliferation in the AI space.

      Realistically, I cannot shake the feeling that we're headed towards a reality where the Fourth Amendment is dead and machine-learning models process everything about you to ultimately extract more from you. No privacy for you! No agency for you! Only indentured servitude and constant fear.

      I fully recognize my take is ahead of its time, but I concur that the systems-oriented point of view is our way out of this hell. Specifically, software should be conceived under the following ideals: (1) software should be as simple as possible, and provide its intended services with as little bloat as possible; (2) specifications of software should be as concise and simple as possible; (3) specifications should be expressive enough to capture security-relevant guarantees, e.g. cryptographic security properties; (4) proofs verifying that software satisfies its specifications should live intrinsically in the implementation, and should be as simple as possible; (5) proof-checkers should themselves be verified. I feel the academic Formal Methods, Programming Languages, Systems, Security, and Cryptography communities, as well as the internet standardization community, are slowly converging on this consensus, but I also think in other ways we are farther off than ever. With respect to these ideals, the "building" mindset that Twitter has adopted is deeply toxic. And obviously Silicon Valley has its head in the sand when it comes to this.
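
      A minimal sketch of the "intrinsic proof" ideal in Lean 4 (assuming a recent toolchain where the `omega` tactic is available); the function's return type itself carries evidence that the result meets its spec:

```lean
-- The implementation carries its specification intrinsically: `absolute`
-- returns an Int together with a proof that the result is nonnegative.
def absolute (x : Int) : { y : Int // 0 ≤ y } :=
  if h : 0 ≤ x then ⟨x, h⟩ else ⟨-x, by omega⟩
```

      Here a caller cannot even receive the result without the accompanying guarantee, which is the spirit of ideals (3) and (4) above.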

      I do have faith that the state of software (and society) will improve, but whether that future is compatible with the rent-seeking, hyper-capitalist reality Silicon Valley and Wall Street have synthesized remains to be seen.

    • By stego-tech 2026-02-20 16:36

      [dead]

      • By saulpw 2026-02-20 18:32

        It's charitable to frame this as resentment towards the capital that gets the "credit". I'm sure people would grumble about this regardless, but the real resentment stems from them systematically eroding our ability to afford housing, healthcare, and retirement.

        • By martin-t 2026-02-21 1:41

          Their unaffordability is only the last straw that will hopefully break the camel's back and create a counter-force.

          Normal people generally don't dream of being ultra-rich; they just want to enjoy life (and have enough money to do so). But a small percentage is obsessed with money, and they obviously invest much more energy into acquiring it.

          This dynamic means that people don't get paid according to how much value they produce but according to how good they are at negotiating and at maneuvering themselves into positions of power, from which they 1) take a bigger cut than they deserve according to the real value produced and 2) further entrench themselves.

          Salary negotiations are a perfect example of divide and conquer: the employer has more information, more runway, more experience negotiating, etc. And on top of that, they negotiate with each employee one by one. Imagine the reverse situation, in which the people doing the real positive-sum work sit together on one side of the negotiating table and ask their new assistant (the so-called "manager") how much he wants to be paid.

          But the real issue is ownership. People who don't do any work get paid (if not in money directly, then by being able to sell the company). And they get to pass this "ownership" on to their children, who contributed nothing at all.

          I am convinced a lot of these runaway feedback loops would be destroyed if ownership of a company was by law distributed among employees according to the amount of time and skill level they worked there.

          • By saulpw 2026-02-21 4:23

            I like that idea, and I agree: a 10x spread of $ between skill levels, and otherwise by hours of effort and years of tenure. Yes, the flight attendant who's worked there for 30 years should have more ownership (and more influence) than an executive who started last month.

            I have an idea I've been batting around: a mandatory 1% annual tax on public corporations, expected to be paid in their own stock and either held in a sovereign wealth fund or distributed equally to all citizens. This simultaneously dilutes the wealth the majority owners hold, boosts public savings (a tax advantage to holding rather than selling), and makes ordinary people automatically invested in their nation's economy.

          • By 01HNNWZ0MV43FF 2026-02-21 6:06

            > if ownership of a company was by law distributed among employees according to the amount of time and skill level they worked there.

            Those are so hard to quantify that I think you'd really have better luck instituting UBI. Both in terms of encoding it into law and getting voter support.

            I also want to say, as a market socialist who owns stock, owning stock in your own company is the least diverse investment you can make, except maybe buying a house and then living in it.

            And if it's based on time at the company, do I keep the stock when I leave? Am I punished by losing stock if I'm fired? How much of the company is owned by former employees? A lot? None?

            If I only own stock while I work there, and I can't sell it, then it's not worth much. It's just a profit-sharing bonus with extra steps.

            • By martin-t 2026-02-21 15:27

              > hard to quantify

              It's hard to quantify perfectly, but we already quantify it imperfectly during salary negotiations. Don't make the perfect the enemy of the good. We could get a better system _today_, overnight, if we just took everyone's salary and used it as the weight/skill factor for distributing ownership per unit of time. We could further improve it by renegotiating on equal footing.

              I am not against UBI as a safety-net system so that everyone has enough to survive. But instituting UBI before restructuring the ownership system would be actively harmful because, again, we need enough straws to break the camel's back so that people take the time and energy to understand the root causes and oppose them. (People's reaction is not linearly proportional to inequality: there is nothing but acceptance or indifference for a long time, and opposition forms only when things get sufficiently bad.)

              One large underlying cause of inequality is that we have 2 different reward systems:

              a) Fixed money per unit of work (usually per unit of time or per item produced).

              b) Ownership which gives full control of the owned structure and therefore the ability to capture the full value produced by it. (Minus money to pay workers but money per person does not scale, ownership per person does.)

              These map pretty cleanly to the worker vs owner divide. And this distinction is what we need to erase to erase the class divide.

              > do I keep the stock when I leave

              Yes, that's the point and this is where it would be better than current co-op systems. Every person's economic input into a collaborative effort is the weight used to divide their ownership. So if you stop working there, you keep your part but it keeps getting smaller relative to the rest as more people keep putting in their work or money.

              Money (investment) is a valid economic input and should count towards ownership. How much? We could use the median salary of the country, the median salary of the company, or, my favorite, divide the investor's total net worth by the number of hours he has worked; this would somewhat erase the advantage rich people would have upon transitioning to this system.

              > Am I punished by losing stock if I'm fired?

              Interesting question. I don't think so: if you used to contribute positively, you should keep the reward, but you might need to be penalized if you caused harm to the company, proportionally to the harm.

              > How much of the company is owned by former employees?

              That would increase over time up to a plateau as they got old and died. (Ownership should not be heritable.)

              ---

              Regarding diversifying investment - you can do that by either working for many different companies or by using your money to invest into other companies.

              Thinking about this as buying and selling shares is IMHO misleading - it's more like re-weighting a distribution. Adding an economic input reduces everyone else's share slightly but since that input (hopefully) leads to more revenue, they will be better off (if they don't think so, they (as owners) can vote against taking the investment).

              I am not an economist, and I still feel like I am only scratching the surface of how the economy works, so maybe there are loopholes or degeneracies in this system. I'd like to find and fix them. And I should probably write a proper blog post about this with diagrams, since some of it would be easier to convey with images. What I am proposing is similar to some existing economic systems (mutualism is one of the closest), but I haven't seen this exact weighted-ownership scheme around.
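
              The weighting scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration of the proposal, not a worked-out mechanism; the names, salaries, and hours are invented:

```python
# Sketch of ownership weighted by salary (a rough skill proxy, taken from
# existing negotiations, as suggested above) times hours contributed.
contributions = {
    "alice": {"salary": 120_000, "hours": 2_000},  # senior, full-time
    "bob":   {"salary": 60_000,  "hours": 2_000},  # junior, full-time
}
weights = {n: c["salary"] * c["hours"] for n, c in contributions.items()}
total_weight = sum(weights.values())
shares = {n: w / total_weight for n, w in weights.items()}
print(shares)  # alice holds 2/3, bob 1/3
```

              Each new contribution (labor or investment) adds weight to the pool and re-weights everyone's share downward slightly, which matches the "re-weighting a distribution" framing above.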

        • By popalchemist 2026-02-20 19:53

          Yes and broadly speaking those concrete concerns can be considered in aggregate as "upward mobility."

          • By PaulHoule 2026-02-20 20:41

            Not necessarily. Workers don't want to move into the overclass; they just want to live with dignity. One major theme is that things that seemed very ordinary and attainable for ordinary people a generation ago, like owning a house, now seem out of reach.

            Circa 1970, Isaac Asimov wrote an essay that started with a personal anecdote about how amazed he was that he could get a thyroidectomy for his Graves' disease for about what he made writing one essay. Regardless of how good or bad healthcare really is today, you're not going to see people express that kind of wonder and gratitude about it now.

            This discussion circles around it

            https://news.ycombinator.com/item?id=47074389

            but I think the real working-class stance is that you want protection from economic shocks more than "participation", "ownership", "a seat at the table", "upside", etc. This might be a selfish and even antisocial thing to ask for over an 80-year life near the start of the third millennium, but I think it would sell if it were on offer. It's not on offer very much because it's expensive.

            One could make the case that what we really need is downward mobility. Like what would have happened if Epstein had been shot down the first time or if Larry Summers had "failed down" instead of "failing up?" My experience is that most legacy admissions are just fine but some of them can't test their way out of a paper bag and that's why we need a test requirement.

            • By stego-tech 2026-02-20 22:00

              > Workers don't want to move into the overclass, they just want to live with dignity.

              Got it in one. Would I like to travel First Class and stay in fancy hotels? Sure, but I’d much rather have a house that I can improve to meet my needs instead. Would I like a fancy luxury car with all the trimmings over my sixteen-year-old Honda? Absolutely, but the latter is paid off and gets us around just fine. Would I like that spiffy Hasselblad X2D and some lenses? You betcha, but I’d rather take a proper holiday for the first time in fifteen years instead of buying another thing.

              The problem is that society at present isn’t organized to prioritize necessities like shelter and healthcare, favoring wealth extraction and exploitation instead. Workers don’t want megayachts and hypercars and butlers, we just want to live more than we work.

            • By saulpw 2026-02-20 21:07, 1 reply

              I love the idea of "downward mobility". In particular over the past 30 years we've created a new class of ultra-ultra-rich with even more wealth than the robber barons of the gilded age had, and we need to figure out how to dismantle that entire class. A puny 3% wealth tax would take over 100 years to knock them down, and that's presuming that their wealth is static and not growing at a rate much greater than 3%.

              • By martin-t 2026-02-21 1:45, 1 reply

                I see an easy way, let's just take it from them.

                Not nationalization, that never works. Distribute each company among the workers. Turn them all into co-ops.

                (I've been thinking this a lot but have never seen it expressed so succinctly. Thanks for the new term.)

            • By popalchemist 2026-02-20 22:15, 1 reply

              You clearly don't know what the term upward mobility means. It doesn't necessarily mean moving from one class to another - though that WOULD be included within its scope, however extraordinary an example it may be.

              It can mean moving within a class.

              Surely most people want to better their station. To argue against that is insane and counter to every observable fact about human nature.

              • By coldtea 2026-02-21 1:27, 1 reply

                >It can mean moving within a class.

                It can, but it's not how it's used most of the time, so kind of a pedantic distinction.

                And many do not even want to "move within a class" that much. They'd be satisfied to keep their job and retain the same constant purchasing power and ability to buy food, feed a family, and pay rent/mortgage, year after year.

            • By gsf_emergency_6 2026-02-21 5:40

              [dead]

      • By jbxntuehineoh 2026-02-21 0:08

        > Those who control capital use their political and economic power to systematically enrich themselves at the expense of those who actually perform useful labor

        Huh, I think I read a book about that once. I forget who wrote it. Carl something, I think?

      • By sinenomine 2026-02-20 21:52, 1 reply

        > About fifty years ago

        Many things changed around that specific time, and I think it does deserve scrutiny. Implied cultural factors seem to be merely correlates of greater historical tide, such as https://en.wikipedia.org/wiki/Bretton_Woods_system#Nixon_sho...

        My take here is a monetarist one.

        • By stego-tech 2026-02-20 21:55

          Yep, that played a significant role in shaping how things turned out. We want a single source to blame, but rarely does history present us with such a neat villain (though god, Reagan comes so close to being one, at least for the specific issues important to me).

          Understanding the interconnectedness of systems beyond your own realm of expertise is how you learn what needs to be done to fix issues - and avoid falling for snake oil “silver bullets”/“one weird trick” populist positions.

      • By underlipton 2026-02-20 21:42

        >The net result is a growing schism of resentment by those who do the work towards those who get the credit, glory, and reward, versus those who bask in stardom and truly believe they can replace the perceived entitlement of labor wholesale with an instant gratification machine and somehow survive the resulting societal collapse such a device would bring about.

        Naturally, unmentioned are those shut out of reasonable opportunities for meaningful productivity, regardless of technical potential (but largely in line with (lack of) social capital). A few years of this maybe encourages an entrepreneurial spirit. Two decades is quite convincing that there's no place for them in the current order.

        The upwardly-mobile opportunity hoarders need to understand, much as the wealth hoarders ought to, that the whole thing falls apart without buy-in from the "losers".

        Tang ping, bai lan ("lie flat, let it rot").

    • By Swoerd 2026-02-20 17:45

      [dead]

    • By measurablefunc 2026-02-20 16:08

      We have AI now. The machines will manage their own infrastructure.

  • By FloorEgg 2026-02-20 16:45, 8 replies

    I was enjoying the article until I got to this paragraph:

    > Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.

    Believing this feels incredibly unwise to me. I think it's going to do more damage than the AI itself will.

    To any impressionable students reading this: the most valuable and important thing you can learn is to think critically and communicate well. No AI can take that away from you, and the more powerful AI gets, the more you will be able to harness its potential. Don't let the people saying this shit discourage you from building a good life.

    • By drivebyhooting 2026-02-20 18:11, 1 reply

      This part was a long description of the zeitgeist in SF; it was not meant to be the author’s own opinion.

      • By FloorEgg 2026-02-20 22:29, 1 reply

        I realize that now, and feel a bit foolish for being triggered by it. It's too late for me to edit my comment now though.

        • By DontForgetMe 2026-02-22 1:39

          I think your reflexive disagreement is a testament to the point of the article. And the fact that you didn't immediately notice what was the author's view vs. what they were relaying may be a testament to the author's good writing.

          I found it to be an unexpectedly evocative piece, a kind of poetic prose style that I don't see very often in journalism, let alone tech journalism. Each word seemed carefully chosen to make the reader almost feel like they were there, witnessing, understanding.

          So, I can imagine the author being a little pleased that you reacted to that passage with a sudden skepticism. Seems like a very successful case of 'show, don't tell'.

    • By mayhemducks 2026-02-20 19:29, 3 replies

      "the most valuable and important thing you can learn will be to think critically and communicate well."

      I have heard some form of this advice for over 30 years. Not one single penny I have earned in my career came from my critical thinking. It came from someone taking a big financial risk with the hope that they would come out ahead. In fact, I've had jobs that actively discouraged critical thinking. I have also been told that the advice to think critically wasn't meant for me.

      • By FloorEgg 2026-02-20 21:48

        For what it's worth, most of the pennies I've earned definitely came from my ability to think and communicate well.

        I can't help but wonder whether the person who told you that the advice "to think critically wasn't [meant] for [you]" really had YOUR best interests at heart, or was a wise person.

        I also worked jobs where I was actively discouraged from thinking critically. Those jobs made me itchy and I moved on. Every time I did, it was one step back, three steps forward. My career has been a weird zigzag like that but trended up exponentially over 25 years.

        We all have our anecdotes we can share. But ask yourself this: if you get better at making decisions and communicating with other people, who is that most likely to benefit?

      • By dyauspitr 2026-02-20 21:21, 2 replies

        Critical, individualistic thinking is what the west does best. The east seems to be better at implementation and improvement once provided with a new idea. That's where we currently stand at least; who knows how China will do in the future. Maybe they're the total package, but that remains to be seen.

        • By coldtea 2026-02-21 3:22

          >Critical, individualistic thinking is what the west does best

          Is this parody? The west currently is a huge valley of brain rot, stupid conformity, and financial gambling.

        • By FloorEgg 2026-02-20 22:02, 1 reply

          Why conflate critical thinking with individualistic values?

          It seems you are unnecessarily muddying the water.

          • By dyauspitr 2026-02-20 22:30, 1 reply

            In my opinion there is a correlation there. I think individualistic societies are better at thinking of new paradigm shifting ideas.

            • By FloorEgg 2026-02-20 22:57, 1 reply

              Hmm. I'll think more about this.

              It makes sense to me that a culture that values collectivist cohesion would shy away from paradigm-shifting ideas (disruption). I also see a correlation between disruptive ideas and principled critical thinking that wins out over conventional thinking.

              I guess on some level my assumption is that they are adjacent. Those embedded in a collectivistic culture can think critically but can run into walls within a sandbox of convention. This is how they can be great at iterative improvement and engineering but struggle with paradigm shifting ideas.

              I think you have a point, but there's definitely some nuance here I'm still untangling.

              • By dyauspitr 2026-02-21 3:18

                Collectivist societies have a lot of thought terminating mechanisms. Having spent half of my life in both types of societies it’s very apparent. The thought termination is so intrinsically built into systems in the society most people are blind to them even existing. Exposing or pointing them out doesn’t work either because there are thought terminating mechanisms for anti-thought terminating strategies.

      • By bitwize 2026-02-20 21:02, 1 reply

        Critical thinking is slave mentality, man. Master mentality, the mentality of the guys who FUCK, is knowing that what you want to happen WILL happen and doing everything you can to make it happen.

        /s if not obvious

    • By iugtmkbdfil834 2026-02-20 19:31

      << Believing this feels incredibly unwise to me.

      This. Just thinking that those with power would even allow that leveling seems on the verge of impossible. In a sense, you can already see it in practice. Online models are carefully 'made safe' ('neutered' is my preferred term), while online inference is increasingly more expensive.

      And that does not even account for whether the 'bozo' will be able to use the tool right.. because an expert with a tool will still beat a non-expert.

      It is a brain race. It may differ in details, but the shape remains very much the same.

    • By coldtea 2026-02-21 3:22, 1 reply

      >No AI can take it away from you, and the more powerful AI will get the more you will be able to harness it's potential.

      The author is describing it, not necessarily endorsing it.

      But whether they really believe this or not, the point is that most wouldn't be given any opportunity to "harness its potential", whether they're "obscenely talented giga-nerds" or not, because they'd be economically redundant.

      • By FloorEgg 2026-02-21 19:54, 1 reply

        And that point is foolish no matter who is making it.

        • By coldtea 2026-02-21 21:48, 1 reply

          There's no shortage of people across the world who have been made economically redundant over the past decades, even before AI, no matter how smart or creative they are.

          • By FloorEgg 2026-02-22 2:14

            That's not true. You're going to have to bring some strong evidence to convince me of that. I've been around and paying attention for a few decades and what you just said contradicts everything I know.

    • By jcgrillo 2026-02-20 16:53, 3 replies

      In the context of the rest of the piece, I read this as sarcasm. The author is making fun of the species of narcissistic silly con valley techbro who actually believes such nonsense.

      • By FloorEgg 2026-02-20 16:57, 2 replies

        Ah, I struggle with sarcasm sometimes and I was a bit distracted while reading. I'll give it another chance.

        • By TSiege 2026-02-20 19:19, 1 reply

          It is not sarcasm; he is fleshing out this sentence from earlier in the paragraph: "One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event"

          • By jcgrillo 2026-02-20 19:33, 1 reply

            Right, but in the context of this article about these wretched enfants terribles, and later when we get to the rationalist termite colony, it's clearly something to chuckle at. Like, the fact that people think this "bifurcation event" idea is real is legitimately funny.

            • By TSiege 2026-02-20 19:50, 1 reply

              I see your point, but I don't think he's being sarcastic in this paragraph. To me this paragraph isn't sarcasm; rather, he's presenting a serious factual recounting of the logic driving AI evangelists, which he then undermines by contrasting it with the callousness, messiness, and illogic of the people pushing this narrative. (I too had a good chuckle at the termite description)

              But this is veering into lit crit territory, so agree to disagree

              • By jcgrillo 2026-02-20 20:12

                You may have a point! And you've given me a great excuse to read this one again later this evening :)

        • By kerblang 2026-02-20 19:14

          I suspect the author is struggling with their own sarcasm.

      • By moritzwarhier 2026-02-20 18:31

        There's no worth in sarcastically repeating memes like "giga nerd" or whatever except for propagating this line of thinking / the meme.

        Imagination knows no negation.

      • By zozbot234 2026-02-20 17:39, 1 reply

        It's a really bad take because AI is already "superhuman" in general knowledge, but it still has trouble figuring out whether I should drive or walk to the car wash place.

        • By moritzwarhier 2026-02-20 18:37

          Declaring something as "superhuman" requires a hierarchy of inherent human value.

          I'm not saying this for social reasons, just for the definition:

          "superhuman intelligence" at what?

          Calculations? Puzzles? Sudokus?

          Or more like...

          image classification? ("is this a thief?", "is this a rope?", "is this a medical professional?", "is this a tree?")

          Oh, applying the former to the latter would be a pretty stupid category error.

          It's almost as if people had this figured out centuries ago...

    • By dyauspitr 2026-02-20 21:18

      I mean it’s theoretically true. Will we get there? Who knows.

      The first time an LLM solves a truly significant, longstanding problem without help is when we will know we are at AGI.

    • By tonnydourado 2026-02-20 20:20, 1 reply

      I don't think that this is supposed to be a statement of the author's beliefs. The whole article is dripping with contempt for AI bros and silicon valley culture in general.

      Maybe if you had read past this paragraph it would have been clearer?

      • By FloorEgg 2026-02-20 20:23

        Yep you're right, but it's too late for me to edit my comment. The idea triggered me, and I tend to struggle with sarcasm.

    • By MarceliusK 2026-02-20 18:47

      Historically, tools that made thinking cheaper didn't eliminate thinkers...

  • By iugtmkbdfil834 2026-02-20 15:56, 2 replies

    << The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way.

    I genuinely like the author's style (not in the quote above; it's here for a different reason). It paints a picture in a way that I still am unable to. I suck at stories.

    Anyway, back to the quote. If that is true, then we are in a pickle. Claw and its security issues are just a symptom of that 'break things' spirit. And yes, this has been true for a while, but we keep increasing both in terms of speed and scale. I am not sure what the breaking point is, but at a certain point the real world may balk.

    • By reductum 2026-02-20 16:01, 3 replies

      He writes an excellent blog: https://samkriss.substack.com/

      • By sevensor 2026-02-21 1:21, 1 reply

        It’s an interesting blog, he seems well read, but surely he knows better than “Plato lived in a placid static Greek aristocratic world.” Plato lived through the execution of Socrates, the fall of the Athenian democracy, the tyranny of the Thirty, the humiliation by Sparta, the demolition of the walls. I’ll grant him “aristocratic”, but that’s all he gets. Makes me wonder whether he mischaracterized Zarathustra too, and my suspicion is yes.

        • By w0de0 2026-02-21 13:11, 1 reply

          The blog is not a safe place for the overly literal mind. You must accept the glib overstatement and the poetic lie. There’s a reason he caused a paroxysm among the rationalists.

          • By sevensor 2026-02-21 15:38, 1 reply

            What then am I meant to take away from the piece on Zoroastrianism? I don’t know enough about the subject to navigate to the point around the distortions.

            • By w0de0 2026-02-23 9:20

              Sounds like you should take away a necessity to learn more about Zoroastrianism! Not all writing must be generally accessible.

      • By thom 2026-02-20 21:11

        One of the best writers of our generation. There’s no better deconstruction of UK lad culture than this: https://samkriss.com/2015/05/20/cheeky-nandos-or-what-wet-wr....

      • By threetonesun 2026-02-20 16:15, 2 replies

        Seeing a Substack email collection box where you have to agree to whatever its terms are to subscribe, with a skip-to-content link of "No, I'm a coward", is... an experience. I'll take your word that he's an excellent writer; if there's an RSS feed maybe I'll subscribe.

        • By dqv 2026-02-20 16:25

          Oh, I just edited it with developer tools to "No thank you, and I'm brave" so that clicking it wouldn't turn me into a coward

        • By kurttheviking 2026-02-20 16:24

          Most Substacks have an RSS feed (I'm not sure if one can disable it or not); in this case: https://samkriss.substack.com/feed

    • By jimmaswell 2026-02-20 21:55, 1 reply

      I think there has always been some truth to that, long before AI. Being driven to get up and just do the thing is the most important factor in getting things done. Expertise and competency are force multipliers, but you can pick those up along the way - I think people who prefer to front-load a lot of theory find this distasteful, sometimes even ego-threatening, but it's held true in my observations across my career.

      Yes, sometimes people who barrel forward can create a mess, and there are places where careful deliberation and planning really pay off, but in most cases, my observation has been that the "do-ers" produce a lot of good work, letting the structure of the problem space reveal itself as they go along and adapting as needed, without getting hung up on academic purity or aesthetically perfect code; in contrast, some others can fall into pathological over-thinking and over-planning, slowing down the team with nitpicks that don't ultimately matter, demanding to know what your contingencies are for x y z and w without accepting "I'll figure it out when or if any of those actually happen" - meanwhile their own output is much slower, and while it may be more likely to work according to their own plan the first time without bugs, it wasn't worth the extra time compared to the first approach. It's premature optimization but applied to the whole development process instead of just a piece of code.

      I think the over-thinkers are more prone to shun AI because they can't be sure that every line of code was done exactly how they would do it, and they see (perhaps an unwarranted) value in everything being structured according to a perfect human-approved plan and within their full understanding; I do plan out the important parts of my architecture to a degree before starting, and that's a large part of my job as a lead/architect, but overall I find the most value in the do-er approach I described, which AI is fantastic at helping iterate on. I don't feel like I'm committing some philosophical sin when it makes some module as a blackbox and it works without me carefully combing through it - the important part is that it works without blowing up resource usage and I can move on to the next thing.

      The way the interviewed person described fast iteration with feedback has always been how I learned best - I had a lot of fun and foundational learning playing with the (then-brand-new) HTML5 stuff like making games on canvas elements and using 3D rendering libraries. And this results in a lot of learning by osmosis, and I can confirm that's also the case using AI to iterate on something you're unfamiliar with - shaders in my example very recently. Starting off with a fully working shader that did most of the cool things I wanted it to do, generated by a prompt, was super cool and motivating to me - and then as I iterated on it and incorporated different things into it, with or without the AI, I learned a lot about shaders.

      Overall, I don't think the author's appraisal is entirely wrong, but the result isn't necessarily a bad thing - motivation to accomplish things has always been the most important factor, and now other factors are somewhat diminished while the motivation factor is amplified. Intelligence and expertise can't be discounted, but the importance of front-loading them can easily be overstated.

      • By botusaurus 2026-02-20 22:34, 3 replies

        be honest, how much of this big comment was "expanded" with AI?

        • By IsaacL 2026-03-01 8:57

          There are too many incidental details, too much substance and complexity for this to be an AI-generated comment.

          (I know this is an 8-day old thread, but accusing a long comment of being AI-generated is a bad trend.)

        • By stevenhuang 2026-02-21 17:05, 1 reply

          It must be disorienting for you to be unable to distinguish between AI and human written comments.

          • By botusaurus 2026-02-21 23:28, 1 reply

            you're presuming that you are able to

            • By stevenhuang 2026-02-23 2:58

              I don't need to presume; your guess was incorrect :)

              It's just not a great heuristic to assume all long comments are AI generated.

        • By jimmaswell 2026-02-21 1:50, 1 reply

          none.

          • By IsaacL 2026-03-01 8:58

            It was obvious to me as well that your comment was not AI-generated.

            I thought I'd reply as I like reading in-depth discussions online, so I don't want to see a future where everyone assumes long comment = AI. :)

HackerNews