
Tech strikes back
A new tech ideology is ascendant online. “Introducing effective accelerationism,” the pseudonymous user Beff Jezos tweeted, rather grandly, in May 2022. “E/acc” — pronounced ee-ack — “is a direct product [of the] tech Twitter schizosphere,” he wrote. “We hope you join us in this new endeavour.”
The reaction from Jezos’s peers was a mix of praise, criticism, and bafflement. “What the f*** is e/acc,” posted multiple users. “Accelerationism is unfortunately now just a buzzword,” sighed political scientist Samo Burja, referring to a related concept popularized around 2017. “I guess unavoidable for Twitter subcultures?” “These [people] are absolutely bonkers,” grumbled Timnit Gebru, an artificial intelligence researcher and activist who frequently criticizes the tech industry. “Their fanaticism + god complex is exhausting.”
Despite the criticism, e/acc persists, and is growing, in the tech hive mind. E/acc’s founders believe that the tech world has become captive to a monoculture. If tech becomes paralyzed by a fear of the future, they argue, it will never produce meaningful benefits. Instead, e/acc encourages more ideas, more growth, more competition, more action. “Whether you’re building a family, a startup, a spaceship, a robot, or better energy policy, just build,” writes one anonymous poster. “Do something hard. Do it for everyone who comes next. That’s it. Existence will take care of the rest.”
Jezos admits that his initial announcement was a bit of a “shitpost.” And e/acc’s perceived lack of clarity — what, exactly, are they advocating for? — makes it tempting to dismiss it. Nevertheless, it has since caught the attention of a growing number of tech figures: venture capitalist Marc Andreessen, Y Combinator president Garry Tan, Notion co-founder Chris Prucha, Intercom co-founder Ciaran Lee, and science investor and former U.S. state official Jim O’Neill, some of whom add “e/acc” to their online display names and bios as a sign of solidarity.

If e/acc isn’t entirely sure yet what it is for, it does know what it’s against. One of its chief adversaries is AI safety, a field concerned with the risks of developing AI too quickly. Another, closely related, adversary is effective altruism, a utilitarian movement that aims to “do the most good” in the world. Even e/acc’s name is a jab against EA. Whereas AI safetyists follow the precautionary principle in their thinking about AI — emphasizing risk avoidance over action — effective accelerationists preach blazing forward into the unknown, arguing that we will only learn by doing.
But these debates don’t explain why e/acc became suddenly popular among a wider set of technologists, many of whom aren’t involved with artificial intelligence. E/acc’s significance lies in its counterbalancing force not just to AI safety, but to widespread public concerns about the risks posed by the tech industry at large. Effective accelerationists worry that these concerns have become so entrenched that they threaten to extinguish the light of tech itself.
In March 2023, the Future of Life Institute published an open letter calling for artificial intelligence labs, such as OpenAI and Google AI, to immediately pause the development of technology more powerful than GPT-4 for at least six months. “If such a pause cannot be enacted quickly,” the letter stated, “governments should step in and institute a moratorium.” The letter also called for new systems of auditing, certification, monitoring, and regulation as AI develops.
The letter was initially signed by over a thousand people, many of whom were not just concerned citizens but AI technologists themselves, including Yoshua Bengio, a pioneer of artificial intelligence research; Elon Musk, one of OpenAI’s original co-founders (though he has not been involved since 2018); and Emad Mostaque, founder of Stability AI. Another statement, warning of the “risk of extinction from AI,” was published by the Center for AI Safety in May. It was signed by AI leaders at OpenAI, Anthropic, and DeepMind, as well as senior executives from Google and Microsoft.
Artificial intelligence is a rare domain where technologists themselves are being proactively cautious about their own power before any demonstrable harm has been done. The moral panic now comes from within — a stark deviation from how technological revolutions historically influence society, as scholar Carlota Perez documents in her seminal 2002 book Technological Revolutions and Financial Capital.
Perez looked at major technological innovations in the industrial age — including steam and railway, steel and electricity, oil and automobiles, and digital communication — and noticed that they follow a pattern. There is the initial “big bang” event that signifies a major technical breakthrough, which goes largely unnoticed at the time. Technologists, however, recognize its potential and build upon this breakthrough for decades, until it attracts wider adoption and interest. Financial capital then takes over from technologists to continue driving its development, which creates a speculative public frenzy, a financial bubble, and collapse, followed by a period of reflection and reconciliation as the technology is finally stabilized and absorbed into our social norms.
While Perez suggests that this cycle occurs roughly every fifty years, if we squint a bit, we can see it play out with several more-recent technological breakthroughs, such as cloud computing and smartphones — which together drove the Web 2.0 startup boom — and cryptocurrency. In each case, technologists are generally advocates, not detractors, of the opportunities they see before the rest of the world catches on.
Even when financial capital is not so involved, technologists have historically been fiercely protective of their right to build and distribute their work. Open-source programmers in the late 1990s and early 2000s were adamant that we should not restrict how their free and public code could be used, nor hold programmers liable for users’ decisions. Cryptographic code was once considered a form of munitions in the United States, subject to tightly controlled export laws; it was technologists who advocated for the right to share this code across national borders. When it comes to artificial intelligence, on the other hand, some technologists are skipping ahead to reflecting on its harms before the impact is clear.
It’s not that tech has never had a community concerned about the risks and ethics of artificial intelligence; they just weren’t the ones driving its development. AI safety has been a cottage research field for decades, championed by rationalists — an Internet community that prioritizes logical thought and eliminating cognitive bias — and effective altruists. The Machine Intelligence Research Institute, co-founded by AI researcher Eliezer Yudkowsky, began focusing on these risks in 2005. Overcoming Bias, a blog started by Yudkowsky and economist Robin Hanson the following year, was the predecessor to the forum LessWrong, which became a watering hole for people concerned about AI’s impact on society.
Such efforts were widely recognized and even supported by those in tech, but seemingly more from a place of symbolic goodwill — the way a prominent financier might support the arts — than serious engagement. AI safety research, and its underlying rationalist community, was a comforting reminder to wealthy entrepreneurs that tech was not just a shallow way to make money through startups, but a thoughtful, intellectual place with its own philosophical culture.
Since late 2022, however, when the explosive success of OpenAI’s ChatGPT made the potential of artificial intelligence suddenly visible to the public, what was once a niche subculture became an unlikely public face of the controversy over AI development. Parties previously unaffiliated with the AI safetyists suddenly became their bedfellows, such as Tristan Harris, a former Google employee and self-described “tech ethicist” who had made a name for himself by raising concerns about the harms of social media. These cautionary positions have now spread to a wider set of startup founders and engineers.
Why are so many technologists suddenly circumspect about AI development?
For OpenAI and other large corporate labs, this position may be partly strategic: some onlookers grumble that these labs are working to achieve regulatory capture, courting favor with government. If nothing else, their leaders have likely learned from the wave of backlash against social media executives that it’s better to preemptively establish relationships with policymakers than to be brash. Sam Altman, the CEO of OpenAI, testified before the Senate last spring that “we want to work with the government to prevent [problems] from happening.”
Among a wider set of technologists, the appeal of AI safety might reflect a cognitive tendency to overly formalize social problems. AI’s risks are especially frightening for those who find comfort in predicting human behavior with math rather than lived experience. Calculating these risks and designing theoretical governance structures to avoid them is itself a form of nerd sniping, visible elsewhere among the likes of rationalists and crypto-governance enthusiasts.
Or, perhaps, wanting to be regulated is a subconscious way for tech to reassure itself about its central importance in the world, which distracts from an otherwise uneasy lull in the industry. AI is the crown jewel of the tech industry at a time when the golden age of software startups has passed. Venture capital funding crashed from its frothy peak in 2022, marking a new and grueling era for startup founders. AI is a critical morale boost to rally tech’s spirits; being important enough to warrant regulatory action means its relevance won’t fade anytime soon.
Regardless of motivations, it is this anomalous spectacle of tech leaders worrying about their own work that effective accelerationists are reacting to. They find it strange and disheartening to see their fellow technologists so timid and docile in the face of a major new development.
Instead, e/accs exhort their peers to pluck up the courage to embrace unfamiliar territory — a stance that is often interpreted by others as reckless. Grimes, the futuristic-chic musician who also traverses tech circles, cautioned e/acc’s founders to be more thoughtful about how they express themselves. “You are freaking people out,” she stated in a Twitter Space discussion last March. “[It’s] causing people to panic at non-profits and agitate for government regulation…. I think that’s the kind of thing you need to be careful about and mindful of. Powerful people are watching this.”
While effective accelerationists can be juvenile, letting their love of memes overshadow their substance, the movement’s founders, in their serious moments, don’t advocate for willful ignorance of risks. Both Beff Jezos and Bayeslord, another pseudonymous founder, have repeatedly stated that they support efforts to align AI with human welfare — they just don’t want fear to be the primary guide of technological innovation. “From now on I don’t want to see a single tweet about how e/acc doesn’t want people to work on alignment,” Bayeslord declared. “On the contrary, we think reliability engineering is valuable work. However, we also want the apocalyptoids and the power hungry to lose.”
Effective accelerationists don’t want us all to stop caring. Just the opposite: they wish that everyone cared more. They don’t want to be known for bleating platitudes about “a better future,” then airily waving away the details. Rather, their moral vision is one where more people — including and especially those who consider themselves hands-off today — actively engage with emerging technology and identify concrete plans for its development and stewardship, rather than reflexively backing away from what they don’t understand. Discussing the risks and opportunities in front of us intelligently, e/accs believe, is a sign of a flourishing civil society.
E/acc’s nemesis is not the political left, with its proclivity toward regulation. One Twitter poll asking effective accelerationists how they self-identified politically — an informal poll, but perhaps a meaningful one, given how close-knit and online the movement is so far — was split roughly evenly between left and right. “The previous culture war was between left and right,” Jezos says. Now “acceleration is the only option, degrowth is death.”
Rather than use well-trod political labels, e/acc wants to be the alternative to what they call decels, for “decelerationists”: proponents of a passive, overly cautious approach to the future. Back when “disruption” was a buzzword, tech was united against this timid mindset, which it saw as the default outsider position, of those trapped in their miserable 9-to-5 cubicle jobs who could only think in terms of red tape and bureaucracy. As effective accelerationists see it, in recent years the decel attitude has crept into tech itself, and nowhere more clearly than in debates about artificial intelligence.
If decels had a face in tech, it would be Big Tech employees, the defanged, corporatized version of the hackers and outcasts who once inspired the tech industry, now imprisoned on motivational posters aimed at self-loathing managers. This, perhaps, is something that e/accs and some tech critics can agree upon: Settling for a middling Big Tech job — chasing lavish promotions instead of making history — represents a failure to realize what tech promised and sought to accomplish, a spirit that those on the frontier now want to recapture.
It was the tech backlash of the 2010s that tore a hole through the image tech previously had of itself: a burgeoning industry composed of startups and their financiers, whose members would grind away writing code on their MacBooks and attending Y Combinator’s demo days, whose hardest decision every year was whether to go to Burning Man. Though a founder’s life was filled with highs and lows, the cycle of tech seemed stable and predictable. Most importantly, tech was beloved by the outside world, which gleefully consumed stories of young founders and their mythical overnight successes.
What went wrong? While tech believed it was changing the world for the better, PayPal co-founder and venture capitalist Peter Thiel warned in his 2014 book Zero to One that this era was marked by “indefinite optimism.” In contrast to the definite optimist, who not only articulates a vision for the future but “plans and works to make it better,” the indefinite optimist believes “the future will be better, but he doesn’t know how exactly…. He expects to profit from the future but sees no reason to design it concretely.”
Published at the height of tech’s infatuation with startup disruption, Thiel’s comments proved to be several steps ahead of the rest of the tech industry. While Thiel generally approved of startups as a vehicle for definite optimists to realize their goals, he criticized the incrementalist attitude that he saw creeping into Silicon Valley. Founders from this era busily copied the methodology from Eric Ries’s 2011 book The Lean Startup that emphasized iterating upon customer feedback to find the right fit between product and market. They adopted Y Combinator’s “Make something people want” motto as canon, and pitched their companies to investors within the familiar framework of “I’m building X for Y”: “eBay for space” (Airbnb), “Uber for trucking” (Convoy) and so on. Thiel was skeptical of this trend. He challenged founders to build solutions based on what they wanted to happen, rather than hamfistedly plastering a layer of software onto every problem in sight.
Silicon Valley’s indefinite optimism was equally foreshadowed by Marc Andreessen’s 2011 article “Why Software Is Eating the World.” While the phrase is remembered as a prediction of how founders from this era would become enormously wealthy, it also captures how their success was due to riding the wave of digitization, rather than leading with definite optimism. Tech’s love of digital disruption became a running joke: whether health care, local business, or agriculture, it seemed every outmoded industry could be magically transformed by the pixie dust of software. The subsequent graveyard of startups from this period reveals a lack of understanding by founders of how these industries actually worked, and of the complex social and political forces that shape them.
Beyond startups, tech’s sunny naïveté extended to its relationship to society, creating resentment. There was the story of Patrick McConlogue, a software programmer living in New York, who attracted ire in 2013 when he published a Medium post explaining that he had offered Leo, a homeless man, a choice between either $100 in cash or three JavaScript books and a laptop, as well as an hour of McConlogue’s time every day to teach him how to code. Leo chose the latter, and McConlogue kept his promise and taught him how to code. They even built a ridesharing app together. But two years later, Leo was still homeless, overwhelmed by the attention he’d received, and no longer coding. Though McConlogue’s attempts were earnest, he had applied a one-size-fits-all solution and hoped for the best. When he did not succeed, there was no backup plan.
It was tech’s indefinite optimism that prevented it from heeding the signs of discontent swirling about, clinging to the industry like San Francisco’s fog. There were the rising rents; the commuter protests; the murmurs about how tech’s biggest accomplishments were creating a crisis of attention, loneliness, and political strife — all easily dismissed by tech workers, whose experience had taught them that even against long odds they could only ever win, until they hit a snag in the fall of 2016.
The outcome of the U.S. presidential election not only shocked legislators, who set about searching for answers from tech companies they believed were partly to blame for Donald Trump’s victory; it also marked the turnover of the Obama administration, which had enthusiastically supported tech’s optimism. The world had called tech’s bluff, and it turned out that behind the cheerful promises of a better future there was no real vision that would assuage the betrayal that many felt.
Those in tech burrowed underground to lick their wounds. As the smoke cleared, they began having private conversations with trusted peers and friends. They struggled to reconcile the sudden surge of anger from outsiders with their own internal sense of self. Over the next few years, tech experienced a profound loss of identity, followed by a search for meaning.
Founders and engineers are not typically known for their innate interest in “non-technical” topics like philosophy and history, but they turned now to these pursuits to make sense of their predicament. Snapchat founder and CEO Evan Spiegel, who had previously made headlines for a series of boorish emails he had sent as an undergraduate to his Stanford fraternity brothers, now stood in front of the French-American Foundation in 2017 and spoke about Alexis de Tocqueville, who, as Spiegel put it, believed that “the newspaper was one of the most important tools for helping democracy overcome individualism.”
Ultimately, everyone in tech was trying to answer the same question, posed for millennia by those unlucky souls who find themselves suddenly in exile: How, now, shall we live? If the brash, unchecked ambition that had defined tech for decades was no longer something to be proud of, how should technologists find meaning in their work today?
The varying answers to this question scattered tech workers into different factions. Many were alarmed and remorseful about the harm they had caused — to America’s social fabric, teenage mental health, and the future of the country. They now faced difficult questions from friends and family about the transgressions of their employers, and resolved to shepherd what they saw as a more responsible approach to technology, examining the effects of misinformation, algorithmic bias, and lack of content moderation. Tristan Harris, the ex-Google tech ethicist, became a vocal advocate for addressing the negative effects of social media, featuring prominently in the 2020 Netflix documentary The Social Dilemma.
Others blamed the American managerial class, whose appendages were tangled in a secular monoculture that had lost its moral and spiritual purpose. They found solace in the philosophical writings of Nick Land and a pseudonymous blogger named Mencius Moldbug, who expounded upon the idea that progressive liberal democracy is becoming an all-powerful bureaucratic oligarchy, the solution to which is a return to monarchy and traditional values. This was the beginning of the neoreactionary movement, which grew into the New Right today, and whose members, and associated figures in tech like Peter Thiel, backed anti-establishment candidates like J. D. Vance and Blake Masters in the 2022 elections.
Still others felt that the problem was that America no longer valued technological progress and innovation. They read economist Tyler Cowen’s 2011 book The Great Stagnation, which warned that the American economy had begun to plateau. In 2019, Cowen and Stripe co-founder Patrick Collison published an Atlantic article titled “We Need a New Science of Progress,” by which they meant a study of “the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries.” Their efforts fostered a movement, sometimes called “progress studies” or just “progress,” as well as its cousin movement “abundance,” whose proponents focus on removing institutional roadblocks to innovation — particularly through an economic and policy lens — such as the development of clean energy, building more housing and public works, and funding scientific research.
Others saw poor governance of San Francisco — which had attracted media attention for its crime, dirtiness, and political infighting — as a sign of how tech had underutilized its influence, falling out of step with its environment. The San Francisco School Board and district attorney elections became hotly contested battles as tech workers organized to support candidates who aligned with their values. Garry Tan, the Y Combinator president who grew up in a working-class Bay Area immigrant family, devoted himself to improving San Francisco’s living conditions, organizing recall campaigns, and supporting the launch of GrowSF, a political action committee founded by tech workers that focuses on reducing homelessness and improving public safety and services.
While each of these tribes has a different underlying philosophy and agenda, they are united by an embrace of definite, rather than indefinite, approaches. Instead of the incrementalist, “software eating the world” approach to disruption, these tech tribes are now focused on bringing about tangible, meaningful change. They also tend to have greater humility toward what they realize they don’t know. They are more likely to consult and build relationships with subject matter experts, policymakers, and researchers to inform their understanding of industries beyond their own. They are no longer afraid to get their hands dirty learning about climate science or housing materials, or approaching politicians, in order to achieve their goals.
For example, in tech’s indefinite era, Google’s parent company Alphabet launched Sidewalk Labs as an “urban innovation” company, which touted a project in Toronto featuring buzzy smart-city technology including sensors and touchscreens to help residents visualize energy usage and traffic patterns. The project never materialized. In today’s definite era, wealthy tech patrons want to back San Francisco’s GrowSF, which prioritizes “outcomes over ideologies” and whose plans for the city read more like a sensible agenda than a sci-fi novel: build homeless shelters, construct subway tunnels and bike lanes, and elect politicians who can get it all done.
Another example of the shift — or the need for it — can be seen in the work of Jennifer Pahlka. As U.S. Deputy Chief Technology Officer in the Obama administration, founder of Code for America, and creator of the United States Digital Service, Pahlka was one of the most visible faces of tech’s relationship to government in the 2010s. Code for America connects software engineers and designers with city governments to build web applications; the United States Digital Service is a government tech unit that consults federal agencies on how to improve their digital services, founded partly as a response to the disastrous technical rollout of Healthcare.gov in 2013.
But digital tech alone was never a solution. In her 2023 book, Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better, Pahlka argues that people in government chronically undervalue the challenge of implementing new digital projects, simply outsourcing them to software contractors and expecting the contractors to mindlessly follow orders rather than take part in the design process itself. “Although government must adapt to an increasingly digital world,” Pahlka writes, “the heart of the adaptation isn’t mobile apps, cloud computing, or even artificial intelligence.” Instead, “we have to examine and challenge the underlying structures, assumptions, and values that drive the larger system of government in which the bureaucracy of implementation operates.”
There was a time when indefinite optimists believed in “disruption” and “move fast and break things.” Now definite optimists believe in “progress” and “acceleration.” In the abstract, the two sets of concepts seem indistinguishable from one another. The difference is in strategy and tactics, and it is a more grounded focus on building for the future that effective accelerationists preach.
Definite optimists strive for concrete goals, but e/acc is not, at least in its current form, a prescriptive movement with an agenda. Instead, e/acc is best understood as a cultural mirror, or what Jezos calls a “meta philosophy.” It reflects a shift in sentiment across many subcultures of builders, all of whom think tech has more to offer the world than overfunded software startups filled with free snacks and ping pong tables.
Jezos himself was, he says, a theoretical physicist with a graduate degree who joined a Big Tech company to work on machine learning. After three years, he quit, disgusted with the lifestyle. “So much of what is broken with big tech is because it selects for pure status-seekers rather than builders,” he tweeted. Jezos sold his car, moved back in with his parents to save money, and eventually took out a personal loan to start a company, buying $100,000 worth of GPUs and building an army of workstations in his home.
It was while Jezos was working in Big Tech that he created a pseudonymous Twitter account and found others to talk to. In the early days of the Covid pandemic, confined to their homes, he and a handful of other accounts began having late-night discussions on Twitter Spaces — a voice-only conversation app — about where the world was going. Like Jezos, many of them also worked in Big Tech and were frustrated by the milquetoast ambitions of their organizations. And like Jezos, they tweeted as a way to vent, while using pseudonyms to avoid repercussions at work. (Against his will, Jezos’s real identity was revealed by Forbes in December.) E/acc emerged from these conversations as a philosophy for “builders to express how they feel about their role in society and what they’re contributing to civilization, and how they can feel happy with their day-to-day lives,” Jezos explained in an interview on the Moment of Zen podcast. It was an antidote to the shame and embarrassment that lingered from the tech backlash, a way to take back the spirit of tech.
“E/acc was cooked up when some Big Tech engineers were very talented but crushed by top-down bureaucracy,” Beff reminisced on a Twitter Space last March. On a subsequent Space, he added: “There’s a pervasive mind virus in a lot of Big Tech organizations that causes a lot of engineers working on powerful tech to have self-hatred. It’s not healthy at all…. They kind of get depressed because the whole system tells them that they are bad…. I was thinking, let’s make an ideology where the engineers and builders are heroes.”
E/acc is a litmus test for whether someone is a corporate “wagie,” mindlessly collecting paychecks at a Big Tech company or overfunded startup, or whether the person believes that tech, at its best, is not merely another industry like finance or fashion but a safe haven for people from all backgrounds to build the future they envision, free from cynical naysayers.
Having descended the other side of the backlash, tech is now preoccupied with solving problems in the physical world with a more sophisticated toolkit than before. There is an influx of founders starting “deep tech” companies, which tackle complex technical and scientific challenges in areas such as nuclear energy, aerospace, and biotech — all unthinkable in the previous era, when the mention of anything non-software sent investors running in the other direction. What’s more, the new founders now come from a different talent pool, bringing deep expertise from “hard” scientific and engineering backgrounds.
And this activity isn’t restricted to startups anymore, either. Tech now understands the role that policy can play in helping — or hindering — its efforts to build faster, across an array of issues. The Institute for Progress, a tech-minded D.C. policy think tank, is partnering with the National Science Foundation to design and test new ways to fund research and innovation. Founders working in and around artificial intelligence travel to D.C. to meet with policymakers, from Amjad Masad seeking to demonstrate “the positive and pragmatic side of AI” through his software-creation platform Replit, to Clem Delangue advocating for “transparency, openness, and distribution of value and power” with AI developer-tool company Hugging Face.
We are also seeing tech’s ethos attract like-minded people from other sectors, even if they don’t have backgrounds in startups. Ezra Klein of the New York Times and Derek Thompson of the Atlantic are two of the most prominent voices advocating for “abundance.” Dartmouth economics professor Heidi Williams leads the Institute for Progress’s science policy efforts. Frontier, a $1 billion initiative started by Stripe to kickstart the carbon removal market, relies upon a panel of advisors and technical reviewers from industry, research, and academia.
The newest generation of technologists knows they won’t derive long-term satisfaction from building a trivial product feature and selling it to a Big Tech company for a few hundred million dollars. They want to rekindle the torch of tech from before the glut and frivolity of the startup frenzy, the part that’s still acceptable to memorialize in Hollywood biopics about Nikola Tesla and Alan Turing. And they realize now that accomplishing this will require them to expand their own skills and knowledge, instead of believing that software alone will save (or eat) the world.
Not all of their efforts are associated with effective accelerationism itself, but they grew from the same psychological place that drove Beff Jezos and his peers to ruminate about the purpose of tech during the pandemic. There is a groundswell of excitement in tech, still rising, that feels like a palate cleanser to the doom and gloom of the post-backlash era. E/acc may not offer the roadmap that definite optimists need, but it signals a shift in tech culture that is long overdue.
> [...] but to widespread public concerns about the risks posed by the tech industry at large. Effective accelerationists worry that these concerns have become so entrenched that they threaten to extinguish the light of tech itself.
Those "years of gloom" (which aren't very many years -- has everyone forgotten when the tech industry was widely seen in optimistic terms?) have been brought on by the behavior of the tech industry itself, in large part because of the misapplication of the idea "move fast and break things" (which is, unless I'm misunderstanding, the very essence of e/acc that this article discusses).
Our industry has been breaking a lot of things that people don't want broken, and then shaming people for being upset about it. The problem isn't some inherent fear of tech itself; it's a (supportable) fear of the tech industry and what else it may damage as time goes on.
If the industry wants to assuage these fears, the solution isn't to move even faster and break even more things, it's to start demonstrably acting in a way that doesn't threaten people and the things they hold dear.
I agree mostly, though I think the "break things" bit got twisted and misunderstood.
We were supposed to break limits, barriers, status quos, ossified ideas... Instead we broke treasured social norms, privacy, mutual respect, and dignity. There's a difference between benevolent innovation and reckless iconoclasm. I think it started the day Peter Thiel gave money to Mark Zuckerberg.
Picture of two little identical castles, towns, and armies, caption:
Their barbarous "barriers", "status quo", "ossified ideas"
vs.
Our blessed "privacy", "treasured social norms", "dignity"
The alternative to describing the meme here is to call it by name: a Russell conjugation.
Exactly. Words that seem different but mean whatever you want them to mean, including the exact opposite. tools for peace <--> weapons of mass destruction etc.
Ah the good old "there's literally no difference between good things and bad things" argument. Compelling.
Amorality. The refuge of the bewildered.
I understood it much smaller fwiw. As long as you can add useful features really quickly, it's fine if your website crashes every once in a while.
Yep. It came from Facebook, and it was changed to favor stability while moving fast almost a decade ago.
https://en.wikipedia.org/wiki/Meta_Platforms#History
>> '"On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure".[40][41] The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough."[42]"'
Last night I changed some solid-js UI code to replace mutating the game object held in UI state with updating that state using mutated clones (cloning is efficient and shares most data; those optimizations were made for AI efficiency long ago)
ofc, with these stale game references around, I soon got reports of broken things: targeting was broken, pvp was broken, fade out animations were broken
A few hours later these issues were resolved. The players are used to these things happening sometimes. It's fine since the stakes are low. It's just a game after all. & being free, the active playerbase understands that they're QA
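For anyone curious, a minimal sketch of the clone-based update pattern described above (TypeScript with solid-js; the Game shape and field names here are hypothetical, purely for illustration):

    import { createSignal } from "solid-js";

    // Hypothetical game state; the real shape is whatever the game tracks.
    type Game = { hp: number; targets: string[] };

    const [game, setGame] = createSignal<Game>({ hp: 100, targets: [] });

    // Before: mutating the object held in UI state in place. The signal
    // never sees a new reference, so stale references to old state linger:
    //   game().targets.push("goblin");

    // After: updating with a mutated clone. The spreads copy only the top
    // level, so unchanged fields are shared structurally with the old state.
    setGame(prev => ({ ...prev, targets: [...prev.targets, "goblin"] }));

The catch, as above, is that any code still holding the old object (targeting, pvp, animations) silently goes stale until it re-reads from the signal.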
And, crucially, you'd generally be around to help fix the website.
I always thought “move fast and break things,” as used at FB, was meant to empower the ambitious, talented, fresh crop of Ivy League grads with the confidence to move forward despite the poor decisions that come with lack of experience.
You’re closer to the truth, but with a bit of a harsh bias. It was simply permission to make mistakes. Sometimes you get it wrong, and it’s better to get more done and risk mistakes instead of moving cautiously.
Facebook was famously unit-test sparse, for example.
Or rather fail fast before we blow all our money only to find out our new product doesn't work
Only the best and brightest inexperienced developers :)
No? You’re projecting what you want it to mean. The “break things” part means: don’t be afraid to break functionality/features/infrastructure in the process of improving it (new features, new scaling improvements, etc.). That’s why it was renamed “Move fast with stable infrastructure.”
> The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough."
It’s about growth at all costs and then once Facebook got big enough they had to balance growth against other factors (+ the things people were doing that were causing breakages weren’t actually helping to grow).
Mottos like that live their own life. Take Google’s “don’t be evil”: people remember that, and when they see all the evil shit Google does now, of course they are going to recall the motto and laugh at the irony. Whatever Sergey meant when he coined the phrase is irrelevant imo.
> You’re projecting what you want it to mean
Maybe true. But then if it's just about development it's a rather mundane old chestnut about reckless engineering versus good software engineering etc. Granted, that's a different discussion and we can see the tide turning now in terms of regulation and mandated software quality.
Sure, the Post-Office/Fujitsu scandal, Boeing etc, show how bad software actually ruins lives, but for the most-part the externality imposed by the reckless software engineer is measured in "hours of minor inconvenience".
That said... if you did a ballpark calculation of how much harm lies behind the Google Graveyard [0], I wonder whether the cost of what was broken would outweigh the benefits of it ever having been made.
Engineering was literally taught to me in a well-respected engineering university as making an appropriate cost/reward trade-off and being careful in taking that risk. But the economics of the business were important too, as engineering was part of the competition to drive more efficiency into a system. In classical engineering, the risks are greater because you're dealing with people's lives, so you have to be more careful and add extra margins of error even if it's more expensive.
One person’s recklessness is another person’s calculated risk. The consequences of FB engineering mistakes are minimal in impact both to customers and to FB’s business. As FB scaled, the impact to individual people remained largely minimal (perhaps even beneficial), but the impact to their own business grew larger, and the same goes for their customers if their ads aren’t getting eyeballs. So they shifted, as big companies do. It’s kind of the best case of thoughtful risk taking: we’re rolling out a new system, we don’t know what could go wrong at scale, and we put in the monitoring we think we need. If there are problems, we’ll catch them with our monitoring/alerting and roll back or fix. You see the outages but not the 99% of changes that go in without anything going wrong, which lets the business resolve issues quickly and cheaply.
As for Boeing and Fujitsu, I’d say those are very different situations; they aren’t engineering problems, nor do they indicate a move-fast-and-break-things mentality. As with many things like that, the engineering mistakes are a small detail within the overall larger picture of corruption. Boeing wanted to escape having its plane classified as a new aircraft and met a perfect storm of skimping on hardware and corrupting the FAA through regulatory capture. I don’t fully understand Boeing’s role in the recent failures, as a subcontractor is involved, but my hunch is that they’re nominally responsible for that subcontractor anyway. Same goes for Fujitsu: bad software combined with an overly aggressive prosecution mandate, and then cover-ups of the mistakes, based on the assumption that the software was correct rather than that new software that hadn’t run anywhere before may contain bugs (not really sure whether Fujitsu hid the bugs or politicians did or what happened, but certainly the Post Office officials hid the reports of the auditors who found bugs in the software and continued with prosecutions anyway).
Btw, in engineering classes, all the large-scale failures we were taught about involved some level of corruption or a chain of mistakes: a contractor not conforming to the engineering specs to save on costs (a valid optimization, but one that should be done extra carefully), overlooking some kind of physical modeling that wasn’t considered industry standard yet, kickbacks, etc.
We probably had similar rigorous educations at that level. In SE we studied things like the '87 Wall St. crash versus Therac-25. The questions I remember were always around what "could or should" have been known, and crucially... when. Sometimes there's just no basis for making a "calculated risk" within a window.
The difference then, morally, is whether the harms are sudden and catastrophic or accumulating, ongoing, repairable and so on. And what action is taken.
There's a lot in what you say about FB that I cannot agree with. I think Zuckerberg as a person was and remains naive. To be fair, I don't think he ever could have foreseen/calculated the societal impact of social media. But as a company, I think FB understood exactly what was happening and had hired minds politically and sociologically smart enough to see the unfolding "catastrophe" (Roger McNamee's words), yet chose to cover it up and stay the course anyway.
That's the kind of recklessness I am talking about. It's not like Y2K or Mariner I or any of those cases where a very costly outcome could have been prevented by a more thoughtful single decision early in development.
I’m talking strictly about the day to day engineering of pushing code and accidentally breaking something which is what “move fast and break things” is about and how it was understood by engineers within Facebook.
You now have raised a totally separate issue, about the overall strategy and business development of the company, and there you'd be right: if a PE license were required to run an engineering company, Zuckerberg would have had his revoked, and any PEs complicit in what they did with tuning for addictiveness should similarly be punished. But the lack of regulation of engineering projects that don't deal directly with human safety, and of how businesses are allowed to run, is a political problem.
I see we agree, and that as far as day-to-day engineering goes I'd probably care very little about whether a bug in Facebook stopped someone from seeing a friend's kitten pics.
But on the issue I'm really concerned about, do you think "tuning for addictiveness" on a scale of about 3 billion users goes beyond mere recklessness, and what do we do about this "political problem" that such enormous diffuse harms are somehow not considered matters of "human safety" in engineering circles?
Is it time we formalised some broader harms?
I think there are political movements to try to regulate social media. There’s lots of poorly regulated sub industries within the tech field (advertising is another one).
> Sure, the Post-Office/Fujitsu scandal, Boeing etc, show how bad software actually ruins lives, but for the most-part the externality imposed by the reckless software engineer is measured in "hours of minor inconvenience".
I've been deeply critical of the appalling behaviour of the Post Office and Fujitsu in the Horizon scandal but there's a world of difference between this and the impact of Facebook in 2009. One had a foreseeable and foreseen impact on people's lives. The other was a social network competing with MySpace and looking for a way to monetise its popularity.
> there's a world of difference between this and the impact of Facebook in 2009.
You're absolutely right there,
Frances Haugen's leaked internal communications showed incontrovertibly that internal Facebook research had long established that teen girls had increased suicidal thoughts and developed eating disorders. Facebook and Instagram products exploited teens with manipulative algorithms designed to amplify their insecurities, and that was documented. Yet they consistently chose to maximise growth rather than implement safeguards, and to actively bury the truth that their product caused deaths [0]. Similarly, the Post Office had mountains of evidence that its software was ruining lives yet engaged in a protracted, active cover-up [1].
So, very similar.
But what's the "world of difference"?
> looking for a way to monetise its popularity.
That's a defence? You know what, that makes it worse. The Post Office were acting out of fear, whereas Facebook acted out of vanity and greed. The Post Office wanted to hide what had happened, whereas Facebook wanted to cloak ongoing misdeeds in order to continue. Simply despicable.
Way I see it - Facebook come out looking much, much worse.
[0] https://www.npr.org/2021/10/05/1043377310/facebook-whistlebl...
The NPR link is from 2021, not 2009. It links out to research from 2019, still not 2009. In 2009, Facebook was still branching out among university students.
IDK, I think a big part of the "years of gloom" was an official (but secret) NYTimes policy of only publishing negative stories about tech, as confirmed by Vox journalist Kelsey Piper. [1]
1: https://twitter.com/KelseyTuoc/status/1588231892792328192
I just wonder if that can be extended to every. single. news. organization.
They all do it. Doom gets more clicks than happy talk.
Here’s the real funny bit: the whole “doom gets more clicks” thing is itself a consequence of the ultra-competitive attention marketplace that tech created.
Sure it’s been “if it bleeds it leads” for a long time, but not until digital advertising has it been “if you’re not bleeding profusely 24/7 then you are going bankrupt.”
They wouldn't need a specific policy for tech if that was their general policy
This depends on whether 'tech' said something like "screw the media, we're going to replace it with media2.0". That could take you from the normal doom and gloom (which can be dispelled with the right ad buys) to being a hated enemy that must be destroyed at all costs.
IMO the Verge actually strikes a pretty good balance of writing about tech like the old days of Wired, while still being willing to call out bad behavior when they see it.
I mean we're acting like tech is just a net good with this kind of attitude. Monopolies and surveillance hurt people. Facebook caused a genocide in Myanmar. They wouldn't publish anything if it was all good.
I'm confused by their idea that insider criticism and self-reflection are a sign of collapse rather than maturity in this case. If professional architects find value in working together to develop and adhere to standards before building a city (which I'd consider "technology"), it is not a warning sign that architecture will cease to exist.
Gene drives, nuclear weapons, organisms from space reaching Earth: self-reflection on potential risks is not a sign of failure unless your only goal is to move. We only get to make some of these choices, as an entire global civilization, once.
AI is not exactly in that category but it's sure not a sign of failure that the people making it are actually considering the results before just making decisions with world-scale impacts. It slows things down but that is important when decisions only get to be made once and cannot be reversed. That was less the case with earlier tech, so I don't understand the surprise at the difference in scrutiny over time.
> self-reflection on potential risks is not a sign of failure unless your only goal is to move.
(my emph)
The root of this is a deflated concept of "progress". Progress is a vector. It has a magnitude and a direction. And it has a context that entirely defines its value. You would not be happy if the doctor told you you have a progressive illness. But "progress" gets used carelessly as a bare noun.
Ultimately it's the purse string holders who want to move fast and break things. Investors are the people who would rather try to shove ten figures into undercutting taxi markets everywhere to try to build a monopoly. Imagine if instead they'd put that into cancer treatments and diagnostics or novel forms of energy generation. Move fast and break things is shit compared to building new things at the centre of human need and at the edge of human understanding.
Uber is a particularly silly example due to the sheer volume of money they burned, but it did prove that the taxi medallion monopolies were suppressing market opportunities. I don’t believe that the founders or VCs really understood the available market potential, but it was 100% a farce that “ride sharing” was meant for “sharing”.
Taxis (incl Uber) are bigger now than they have ever been. Add in Uber Eats and co., and there’s so much new demand for similar services. Even Amazon flex has shown that car based services have unmet demand and utility.
The taxi industry should have been broken down. It was good for humans to have better access to the market, even if it wasn’t cancer treatment. There’s tons of money going to research cancer. New energy opportunities are a huge source of investment already, let’s not pretend the existence of Uber foreclosed the opportunity to cure cancer.
Mafia with chips extracting maximum value from every vital industry comes to mind...
Tech has shown its cards and people hate it for a reason.
Every time we try to do something, somebody is in the way. We can have reasonable conversations with these people, but when their arguments devolve to screaming, yelling about Adam and Eve not Adam and Steve, or throwing food at paintings, you have to ask yourself how long before you push the (metaphorical) pedal to the metal and keep going.
We can’t deal on an adult basis with children, no matter their age, how long do we have to let them stop us?
There is no possible path for advancement that doesn't threaten people and the things they hold dear. It never worked that way in the past, and it won't now.
Yes, but there's a critical difference now. Now, the tech industry breaks many things at an unprecedented pace, and largely doesn't offer a reasonable replacement for the things that have been broken.
People can only handle a limited amount of loss within a given period of time before they start pushing back hard against further loss and consider those causing them harm to be forces of evil.
There's also another factor that the tech industry is largely blind to: tech people tend to think that "we know best" and that pushing our ideas on the general public against their will is a Good Thing. But it's not a Good Thing, it's a Bad Thing.
Another thing we need to be doing is allying with the general public rather than dictating to them.
Who is pushing anything on the public? The tech industry wouldn't exist in the form that it does now except that it gives people something they want, not the other way around.
Disruption from tech advancement is caused by tech changes displacing existing industries and it hurts the people currently making money from those industries. But to be against that disruption you would have to believe that those people have some sort of right to make that money and continue doing the things that make them those profits when the public wants the more efficient tech. So really it's the anti tech people who are pushing things on the public.
E.g. people often complain about Amazon displacing small retailers, but really it's just that given the choice, most people choose Amazon.
> except that it gives people something they want, not the other way around.
That used to be true. Now, though, a very common thing I've noticed with people is that they use tech not because they want to or because it solves a problem for them, but because they are disadvantaged if they don't.
It's an important difference. If people willingly choose to use a thing, then they'll be inclined to think about it positively. If they use a thing because they feel they have no choice, then that thing is more likely to be viewed as adversarial, because it is.
I think that's largely where the tech industry has arrived at. Further, the tech industry shows little to no empathy to those whose lives are worse because of what it does.
People may feel that way, and I'm sure in some cases they really mean it. But the reason they always give for why they have to use it is some form of "because every one else does." And it had to get to that point because people wanted it in the first place. Otherwise it just wouldn't have sold in the market when it came out.
The costs of a thing are usually not apparent when it is new. All that's apparent is the benefits. The costs rear their ugly head later.
So yes, often people jump onto a hot new thing because all they can see are the benefits. The "buyer's remorse" doesn't come in until later, when the downsides become apparent. At that point, it's often too late and people are trapped. By design.
The tech industry counts on this effect, and doing that is one of the bad behaviors that encourages people to distrust the industry and become angry at it.
All I'm saying is that people are growing increasingly distrustful of, and angry at, our industry for really solid, rational reasons.
The most charitable interpretation I can think of for why we allow this to be is that the most visible part of our industry has become so insular and divorced from society in general that they can't even understand the anger or why it's rational. The least charitable interpretation I can think of is that they know perfectly well why people are getting mad and just don't care, because not caring increases short to medium term profitability.
You're very close to describing the "enshittification" process.
Things start out good quality, high effort. Useful.
Then once they achieve a certain amount of inertia they start cutting stuff out. Adding new tiers of payment plans, or injecting advertising into existing plans. Lowering caps. Whatever else they can get away with to cut costs but keep your money.
But people have invested in them by that point. Invested enough that switching away is painful and potentially expensive. They want the barrier to leaving to be high, even as they give you less reason to stay using their product.
Not sure you even need this product cycle to explain the parent poster's observation.
What I see here is your basic Tragedy of the Commons at multiple levels. Consumers adopt the new thing to disadvantage their peers, who then have to do the same, to the detriment of all. And the vendors are doing the same thing.
This whole thread reminds me how much "tech" as a meme has come to conflate technology and business. People don't even seem to recognize that "move fast and break things" expressed a business philosophy, not some fundamental truth of technology, R&D, or science.
You're ignoring the fact that "it" (the tech in question) can change, and that there's a large motivation to monetize/cut dev spending/etc. once users are entrapped via network effects.
Edit: Sibling "enshittification" comment conveyed it better.
> Who is pushing anything on the public?
We are. Constantly. We are, intellectually, in a tiny minority who find these things delightful and empowering. We assume that must also be good for everyone else. I was building electronics as a five-year-old when the other kids were playing outside, and it thrilled me so much I assumed everyone else felt the same. They didn't and they don't. Maybe us nerds "took over the world", but as an adult I find almost everybody else (those we call normies) feel that digital technology is:
- something that happens to them
- is foisted upon them and they have no choice
- something they "have to trust"
> it gives people something they want
Have you considered that you really have no idea "what people want"? Neither do I, but I do know that and feel comfortable saying it. And I have done research and literally gone onto the streets interviewing lots of people to ask them. Most want what they think their friends want. Or the thing they already have with some new features. We tell them and they buy.
> to be against that disruption you would have to believe that those people have some sort of right to make that money and continue doing the things...
In a funny way they kind of do have that right. UDHR includes several aspects that can be taken as a "right to stability".
> it's the anti tech people
I don't encounter any "anti-tech people". Ever. I meet plenty folks who are anti-surveillance, or anti-authoritarian, or anti-asshole - against people forcing their technology on them - but I've never met anyone who thinks it's simply the fault of technology itself. You may be living in a bit of a bubble?
you are mistaken if you believe your knowledge of tech gives you power over it.
> Who is pushing anything on the public?
Software pushes updates on us that nobody asked for at the exact moment we need them least; everything from the operating system to websites shoves ads in our faces, plus features and "modern UI" that remove important information and options because designers decided a UI can't be confusing if it barely exists. Privacy violations are pushed via countless license agreements and the daily "We value your privacy" popup that explains to you in exactly what ways this sentence is a lie.
> it gives people something they want, not the other way around
I have never heard someone ask for slow and broken software, ads, tracking and other shady practices. They just have to live with it because what are they gonna do, not communicate with friends or file their taxes?
> "When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it."
Tech is like a fission reactor: powerful, elegant, delivering value through leverage, but requires strong controls and protections (moderators, containment) for humans so it doesn’t ruin us all.
People worry about AI paperclip maximizing, but Tech is already that in some ways (find or build moats, blitz scaling, no concerns for the harm incurred). It’s just fuzzy cohorts of tech workers and management doing the paperclip maximizing, for comp and shareholder value respectively. Not much different than AI reward functions.
"Advancement" implies improvement. Just because things are changing does not mean they are improving.
Yeah, it's a misguided and naive way of thinking. Deciding whether a technological development is good (and for whom, and to what extent, and with what trade-offs, and on what time horizons) is a really difficult task. So some folks will replace it with a much easier question: "Is this new?"
Leaded gasoline is a good example of an advancement where the naysayers were right.
One of the very very very few. Asbestos is another one. Would you be able to provide another example?
How about glyphosate (roundup)?
Also, plastic is looking far worse now than twenty years ago, though I think the net is quite complicated and therefore ambiguous at the moment.
There is an ongoing discussion about "forever chemicals" and, again, it's not unambiguous, but the balance seems to be tilting toward them being a bad idea.
I’m not personally seeing much of a dividend from nuclear weapons given how difficult nuclear power turns out to be under capitalism in practice. But I suppose it gets a pass because otherwise my father might have died in a land war.
Remains to be seen what the net will be on oil, but I'll happily speculate that on a sufficiently long timeline that one turns out bad too.
I'm still pretty mad about the "food pyramid", but I can't offer any particular study that tries to quantify its role in the decline of American health outcomes. Certainly modern food processing techniques look like a slow-moving disaster, but it's really hard to sort out cause and effect.
Social media was neat for a few years but I would consider it a net negative.
I guess you’re right.
CFCs, PFAS, BPA, phthalates, thalidomide, leeches, bloodletting, lobotomies, pretty much the entire history of mental health treatment, hydrogen airships, vermilion pigment, mercury felt stabilizer, radium water... If I were feeling particularly spicy I might even suggest things like weapons research, communism, or suburbia.
How bad actually are hydrogen airships? At this point trying to make tiltrotors any safer is not working out, so airships could be better. Though, if you have a real airport, passenger jets are unbeatably safe.
Fair point! I probably should have left them off. Despite the rather famous failure, they've had a lot of utility, and the technology is still in use today. (I think there's at least one Y Combinator startup using hydrogen airships.)
"Data is the new oil"
Kinda the point, no? If history shows progress is disruptive, then accelerationism seems likely to accelerate disruptions. Many people can connect these dots, and not everyone sees this as positive.
If what people hold dear is controlling the way people live hundreds or thousands of miles away, then you're right.
But that is a deliberately obtuse definition designed to justify any behavior.
If you let people continue their traditions, don't deliberately bankrupt them, and allow them to make their own local laws, that's enough for most.
"Or, perhaps, wanting to be regulated is a subconscious way for tech to reassure itself about its central importance in the world, which distracts from an otherwise uneasy lull in the industry."
There is that. There hasn't been a must-have consumer electronics thing since the smartphone. 2019 was supposed to be the year of VR. Fail. 2023 was supposed to be the year of the metaverse. Fail. Internet of Things turned out to be a dud. Self-driving cars are still struggling. All those things actually work, just not well enough for wide deployment.
LLM-based AI has achieved automated blithering. It may be wrong, but it sounds convincing. We are now forced to realize that much human activity is no more than automated blithering. This is a big shakeup for society, especially the chattering classes.
Holy crap, I'm realizing it's been 4 years since Half Life Alyx. I really wish it had been the first of many.
LLMs make it a little harder for people not to recognize BS jobs for what they are.
LLMs and generative AI are on par with the spreadsheet in their impact on office work. It's just that most people who will be made obsolete are in denial, again.
No, they are not. I was there in the 1980s-90s when spreadsheets hit. After the word processor, the spreadsheet became ubiquitous. Along with email, they were at the center of the personal computing revolution.
These days AIs/LLMs are still rarefied air. People do use AIs built into SaaS products (auto-summaries, etc.), but they're still the minority. Some others are becoming facile prompt jockeys. A rarer few experts run their own models on local laptops and servers.
But it is intrinsically complex technology that most users don't really "grok" in terms of how it actually works. It is fundamentally very, very, very different than the spreadsheet. And its adoption will have natural limits and boundaries.
> But it is intrinsically complex technology that most users don't really "grok" in terms of how it actually works. It is fundamentally very, very, very different than the spreadsheet. And its adoption will have natural limits and boundaries.
I know people who do office jobs unrelated to tech who have slashed their workloads in half using LLMs.
What do you mean complex technology? You just type plain English into a prompt; it can't get less complex. Have you seen how complicated spreadsheets are?
Yes he did mention "Some others are becoming facile prompt jockeys."
Are they? While I love LLMs, I don't find them extremely useful for much more than faster API documentation.
Or wasting time going through the crap that Google throws up in its search results these days. I find it faster to just ask GPT when I forget some command argument. LLMs have basically replaced the web search engine for me in most day-to-day cases now.
How do you handle the lying/hallucination problem? Do you just run the command and hope?
The stuff I use it for is usually recalling or finding simple information rather than creating textual content - stuff like looking up some Linux command that I've used before but can't recall specific parameters/arguments for, or generating code to get me started on something. So I don't see the hallucination issue much. There have been cases where it pulled outdated information tied to a specific library I was using when I asked it to generate example code for that library. But with GPT-4, I don't see that often now.
Now Google's Bard/Gemini, on the other hand, quite frequently makes stuff up. So for now I'm sticking with GPT-4 via a ChatGPT Plus subscription to augment my daily dev work.
> Stuff like looking up some linux command that I've used before, but can't recall specific parameters/arguments
So, to repeat the question:
Do you just run the command and hope? Or do you double-check using the manpage that it isn't going to do something drastic and unexpected?
I see what you mean. Yes, I do verify it, or run it in an environment where a mistake isn't going to cripple an application or cause some crisis. But in most cases, once it points out the argument or the command name, that usually jogs my memory enough to know it's correct. Lately it's been mostly for creating Dockerfiles and for stored procedure syntax. I'm not really good at keeping notes.
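If it helps, here's a rough sketch of the kind of cheap pre-check I mean, in Python. The `tar --strip-components` example is made up, and manpage formatting and hyphenation vary by system, so treat a miss as "go read the manpage yourself", not as proof of a hallucination:

    import subprocess

    def flag_in_manpage(command: str, flag: str) -> bool:
        # When its output is piped, man emits plain text instead of paging.
        result = subprocess.run(["man", command], capture_output=True, text=True)
        return result.returncode == 0 and flag in result.stdout

    # e.g. the model suggested `tar --strip-components`; check before trusting it
    if flag_in_manpage("tar", "--strip-components"):
        print("flag exists locally; still read what it actually does")
    else:
        print("not found; a hallucination, or just hyphenated across lines in the page")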
Anyone who still talks about hallucinations today hasn't used a paid service in the last 6 months.
I just had a paid OpenAI service tell me all about a command-line argument that doesn't exist.
It isn't possible to do what I wanted with the proposed command, but the hallucination helped me to Google a method that worked.
What do you mean? Hallucinations are unavoidable, even humans produce them semi-regularly. Our memories are not nearly reliable enough to prevent it.
In my experience the only more or less reliable way to avoid hallucinations is to provide the right amount of quality information in the prompt and make sure the LLM uses that.
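To make that concrete, here's a minimal sketch using the OpenAI Python client. The file name and question are invented, and the model name depends on what your account can access; the point is that the answer is grounded in text you supply rather than in whatever the model half-remembers from training:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The "quality information": paste the authoritative text into the prompt.
    context = open("release_notes.txt").read()  # hypothetical source document

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "If the answer isn't there, say you don't know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\n"
                        "Question: Which flags were removed in v2.0?"},
        ],
    )
    print(resp.choices[0].message.content)

The system instruction matters as much as the context; without it, the model happily blends in whatever it memorized.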
I've used them to:
- Make edits to a LaTeX file, which would otherwise have taken me at least an hour longer.
- Reverse-compile a PDF into LaTeX from a mere copy-paste.
- Translate a travel diary from French into English.
- Ask conceptual questions in difficult areas of mathematics. It's unreliable, but it often has the "germ" of an idea. This is backed up by the Fields medalist Terence Tao.
- Help me tutor someone by giving model solutions to homework and exam problems when I wasn't sure.
- Write a browser extension to do certain content blocking/censoring that none of the programs on my computer could do. I'd never written a browser extension before, and this would have taken me a day longer.
- Give me feedback on emails I wrote.
- Help me deal with a medical emergency.
Those are good ideas, thanks. I don't like its writing, I find it stilted and awkward, but it's good if you want something that's not going to dissatisfy anyone.
I've also used an LLM for some of the elements on that list, though it seems like I should use them more.
"Years of gloom"?
For my entire career in tech, anything that isn't the most extreme of optimistic viewpoints has been written off as unnecessary and destructive pessimism.
Tech can't stand "bad thoughts". We had a very necessary correction in this industry but people couldn't handle it and just completely lost their minds on AI.
I say this as someone who works in AI very close to the metal (i.e. not just calling an API all day). While these tools are very impressive, when I talk to people outside of my area it's like they've all taken obscene quantities of some new stimulant and can't even connect with reality anymore.
Even the AI "doomers" are, in my view, extreme optimists, because they don't see, or maybe just don't want to see, how much of the public discourse on this subject is smoke and mirrors meant to drive yet another tech bubble.
Personally I think this is just mass hysteria created by the increasing awareness of the fundamentally unsustainable nature of industrial society. Reading this article is like listening to some hallucinating maniac on the streets of SF scream at you about things that really aren't there.
The view from outside of tech is very much "We went from having no AI to having ChatGPT overnight! Imagine how soon the future will be here, we could have a breakthrough for AI at breakfast tomorrow and talking companion robots the day after that!"
And the doomers just add on "and the week after that we'll all be dead!"
I'm personally not very impressed by the AI tools I've used. Sure they're a neat toy. They do seem to keep getting better. Maybe they'll be good enough one day for me to actually want to use them and feel it's a benefit.
> The view from outside of tech is very much "We went from having no AI to having ChatGPT overnight!"
In their defense, for most people that is what happened. Sure, they've been using "AI" tools of varying degree for a long time (spellcheck, language translation), but now they have totally free access to something that behaves sorta like the AIs they've seen on the popular Star Trek shows -- where, incidentally, the AIs were also imperfect.
Journalists love their overnight success stories. And it's most apparent when they are talking about some musician who worked their ass off for decades before being discovered. As if they sprung into existence as 30 year-olds, worked hard for a couple years, then became millionaires.
> I'm personally not very impressed by the AI tools I've used. Sure they're a neat toy. They do seem to keep getting better. Maybe they'll be good enough one day for me to actually want to use them and feel it's a benefit.
Unless you explain this statement, most people here are likely to dismiss everything you have to say on the topic of AI.
We all know exactly the kinds of bad outputs we've seen from AI.
I just asked ChatGPT to recommend databases for a specific use case. Of the seven databases it recommended, only one was actually appropriate; one was marginally acceptable; and three weren't even databases.
I then asked it to provide a list of the most important battles in North Africa prior to the entry of the United States into World War 2.
It gave me five answers. Three of which occurred after the entry of the United States into World War 2.
AI provides extremely plausible answers. Sometimes it will actually generate correct, useful output, but you cannot yet rely on it for correctness.
I'd like to see a side by side comparison with a random human on the street. Maybe with a sample size of 100 or so. How well do you think the humans would do vs whatever outdated model you were playing with here?
There is clearly significant value to this tech and I'm still dumbfounded how strongly some people try to deny it.
Anyone reckon there's a chance that GPT hallucinates because it was trained on online material (e.g. reddit and other forums)? I'd have to say on topics I know GPT is about as hit or miss as a random internet comment, especially in that they'll both give a confidently stated answer whether the answer is factual or not.
Is it possible GPT just thinks[0] that any answer stated confidently is preferable over not giving an answer?
Promise I'm not just being snarky, legitimate wonder!
[0]: I know it doesn't actually think, you know what I mean
You're judging a fish by its ability to climb a tree. Being able to recall facts is a nice side effect for LLMs, not their bread and butter. If you need facts, plug some RAG (retrieval-augmented generation) into it.
Also, what model did you use?
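For anyone who hasn't seen RAG, here's a toy version of the idea, with naive keyword overlap standing in for real embedding search. The documents and query are invented, and a production setup would use an embedding model and a vector index:

    from openai import OpenAI

    # Toy "knowledge base"; real RAG would embed these and use a vector store.
    DOCS = [
        "PostgreSQL is a relational database with strong ACID guarantees.",
        "Redis is an in-memory key-value store, often used as a cache.",
        "Kafka is a distributed event log, not a general-purpose database.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Naive keyword overlap stands in for embedding similarity.
        q = set(query.lower().split())
        return sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))[:k]

    def answer(query: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        context = "\n".join(retrieve(query))
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Answer only from the supplied context; "
                            "otherwise say the answer is not in the context."},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nQuestion: {query}"},
            ],
        )
        return resp.choices[0].message.content

    print(answer("Which of these is an in-memory cache?"))

The model answers from retrieved text rather than from memory, which is what keeps the facts grounded.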
Stop using GPT 3.5 and complaining it's no good. We all know that. Unless you're using GPT4, your anecdotes are out of date and irrelevant.
https://chat.openai.com/share/43ebf64e-34ae-402b-a3ce-0787e2...
This is really why I'm tired of this whole format of talking to people.
I wish someone would build a site where we could share GPT-4 outputs like the one above.
I can't remember the last time I actually learned something from a human on a message board like this, compared to GPT-4.
Talking to people like this is just a 100% waste of time.
Well, learning and conversing are different things done for different reasons.
I'll agree that GPT-4 has completely replaced Google, Stack Overflow, etc. for me.
The only time I use Google now is for janky more human like situations.
For example, today I had to transfer domain controller (DC) roles from a Windows Server 2012 R2 machine to a new 2022 one. They only have one DC, and the old DC has a basically unused certificate authority (CA) service set up.
ChatGPT would have had me "fix" everything first, whereas I found a forum post describing a situation almost identical to mine that helped me cowboy it rather than being overly meticulous.
There is still value to human experience. For now.
> Unless you explain this statement, most people here are likely to dismiss everything you have to say on the topic of AI.
FWIW, I agree with them. It has its use cases, but "hallucinations", or whatever you want to call them, are a huge dealbreaker for everything I'd want to use AI for.
Agreed, but in my opinion the problem is more fundamental than just hallucinations; it involves plain inaccuracy and an inability to reason.
Try asking ChatGPT or Gemini about something complex that you know all about. You'll likely notice some inaccuracies, or it treating one related subject as more important than another. That's not even scratching the surface of the weird things they do in the name of "safety", like refusing to do work, paying lip service to heterodox opinions, or injecting hidden race/gender prompts into submodels.
It's good at generalist information retrieval, to a certain degree. But it's basically an overconfident college sophomore majoring in all subjects at once. Progressing past that point requires a completely different underlying approach to AI, because you can't just model text and expect to reason about new and unknown subjects. It's not something we can tweak and iterate our way into in the near term.
This same story has recurred after every single ML advance, from DL to CNNs and RNN/LSTMs to transformers.
> Agreed, but in my opinion the problem is more fundamental than just hallucinations; it involves plain inaccuracy and an inability to reason.
> Try asking ChatGPT or Gemini about something complex that you know all about. You'll likely notice some inaccuracies, or it treating one related subject as more important than another. That's not even scratching the surface of the weird things they do in the name of "safety", like refusing to do work, paying lip service to heterodox opinions, or injecting hidden race/gender prompts into submodels.
For sure.
On the other hand, I've recently started to treat these AI hallucinations as a feature: it's like asking a person who is somewhat smart, but high on some hallucinogenic drug, for their opinion on a topic of your interest. Depending on the topic and your own intellectual openness, the result can be ... interesting and inspiring.
I’ve asked Bing about sexual health questions and been told it’s not okay to talk about that.
I asked it a question about Christianity and it stated things with the tone and certainty of a preacher.
Just gross. And worse to put something like that in the hands of billionaires.
I am somewhat of a doomer, not because I think it will be the Terminator, but more like: AI theocracy, here we go.
Generally agree with the GP, and am curious what use-cases you've found where AI meaningfully improves your daily work.
I've found two so far: the review summaries on Google Play are generally quite accurate, and much easier than scrolling through dozens of reviews, and the automatic meeting notes from Google Meet are great and mean that I don't have to take notes at a meeting anymore.
It did okay at finding and tabulating a list of local government websites, but had enough of an error rate (~10%) that I would've had to go through the whole list to verify its factualness, which defeats a lot of the time savings of using ChatGPT.
Beyond that: I tried ChatGPT vs. Google Search when I had what turned out to be appendicitis, asking about symptoms, and eventually the fifth or so Google result convinced me to go in. If I had followed ChatGPT's "diagnosis", I would be dead.
I've tried to have ChatGPT write code for me; it works for toy examples, but anything halfway complicated won't compile half the time, and it's very far from having maintainable structure or optimal performance. It basically works well if your idea of coding is copying Stack Overflow posts, but that was never how I coded.
I tried getting ChatGPT to write some newspaper articles for me; it created cogent text that didn't say anything. With better prompting, telling it to incorporate specific factual data, it did well - but looking up the factual data is most of the task in the first place, and its accuracy wasn't high enough to automate that with confidence.
Bard was utter crap at math. ChatGPT is better, but Wolfram Alpha or just a Google Search is better still.
In general, I've found LLMs to be very effective at spewing out crap. To be fair, most of the economy and public discourse involves spewing out crap these days, so to that extent it can automate a lot of people's jobs. But I've already found myself just withdrawing from public discourse as a result - I invest my time in my family and local community, and let the ad bots duke it out (while collecting a fat salary from one of the major beneficiaries of the ad fraud economy).
I recognize your username, so I know you've been around for a while (and are you a xoogler who for a time banged the drum on the benefits of iframes, or am I confusing you with a similar username?), which is why I'm kind of surprised at your lukewarm take on LLMs.
I agree they hallucinate and write bad code and whatever, but the fact that they work at all is just magical to me. GPT-4 is just an incredibly good, infinitely flexible, natural language interface. I feel like it's so good people don't even realize what it's doing. Like, it never makes a grammatical mistake! You can have totally natural conversations with it. It doesn't use hardcoded algorithms or English grammar references, it just speaks at a native level.
I don't think it needs to be concretely useful yet to be incredible. For anyone who's used Eliza, or talked to NPCs, or programmed a spellchecker or grammar checker, I think it should be obviously incredible already.
I'm not sold on it being a queryable knowledge store of all human information yet, but it's certainly laying the groundwork for the inevitable future of interacting with technology through natural language, as a translation layer.
> GPT-4 is just an incredibly good, infinitely flexible, natural language interface.
An interface from which it's incredibly difficult to get consistent output. As far as I know, we have not found a way to make it do even basic tasks parsed from natural language without an error rate that's prohibitive for most use cases. It's amazing that it can produce pretty believable-looking text, but it's abundantly clear that there's no reasoning behind that text at all.
The other day I planned out a cloud-to-on-prem migration of an entire environment, from cost analysis to step-by-step checklists. In about 2 hours I had a ~50-page runbook that would have taken me at least a week coming from my own brain and fingertips.
Here is my initial draft chat session. From there I fed it parts of that initial output; it gets something down on the page immediately, and I revise it myself and by feeding portions into new chat sessions, etc.
https://chat.openai.com/share/15b30c88-d21f-4ffe-8c15-5b444d...
Good reminder that no social media platform is a monolith. Trying to speak as the voice of a platform typically gets you egg on your face, especially when you're being dismissive towards someone else.
You’ll find people who claim to have doubled their productivity from ChatGPT and people who think it’s useless here.
That’s their loss isn’t it?
Count me unimpressed too
Bing Copilot can't even answer simple questions about businesses in a particular city without lying. When confronted, it apologizes and then repeats the same text verbatim.
Zero reasoning happening.
> the fundamentally unsustainable nature of industrial society
Nothing that goes against entropy is fundamentally sustainable. That doesn’t mean we can’t keep it going for time that is essentially infinite on human scales (unsustainable life has been kicking around on Earth for 3.7 billion years now). Defeatism is even more dangerous than hallucinating optimism.
Every process of life - breathing, moving, learning - takes energy. This is as true for the ants building their colonies as it is for us building our cities.
Due to thermodynamics, this is a one-way process. In our case, it’s all fueled by the sun, which will keep burning for another few billion years.
But even that counts as not truly sustainable, since one day it must end. The whole universe is destined to end in a heat death where nothing at all happens anymore.
But that is a long time away and until then, we can build a beautiful civilization; we can learn and grow; and we can do so with nuclear & solar power.
People saying that we’re all as good as dead are technically correct, but in a very unhelpful way.
> Tech can't stand "bad thoughts".
> it's like they've all taken obscene quantities of some new stimulant and can't even connect with reality anymore
It's hallucinogens. That's the drug of choice in tech. What that leads to, other than the obvious hallucinations, is an obsession with set and setting. Bad vibes will literally give you a bad trip and ruin your high. I've seen this with wealthy shroom heads over and over again. Their entire life becomes centered around making sure the set and setting are always perfect, which means any little bit of someone trying to talk some sense in to them, they get ignored.
Once you start to think through the kinds of behaviors that shroom addicts would start to engage in, especially if they had the wealth and resources to facilitate their addiction, you'll see it everywhere. It's not the typical "ruin your life" kind of addiction, but it's having an impact on what kind of ideas are allowed among the SV venture types.
I can't speak for Silicon Valley, but the personality changes you're describing as being associated with shrooms... those are not typical.
I have a few friends in psychedelic-assisted therapy, and the effects I've noticed in them are the same effects I've noticed with regular psychedelic use outside of therapy: you don't just identify the bad vibes and shy away from them; you end up making decisions that are uncomfortable in the short term to improve things in the long term. Myself, I started going to college.
Maybe it works differently among wealthy people.
If you're in a cult of positivity, adding psychedelics to the mix is more likely to make you acutely aware of the inauthenticity of the situation.
Psychedelic-assisted therapy is so good and powerful precisely because it exploits the best-case scenario of set and setting, given the way the drug affects your mind. I'm far from against psychedelics and have done the therapy myself.
However, if you're in a positivity cult, and don't realize you're in a cult, and don't have someone guiding you to consider you might be in a cult, the shrooms are just as likely to make the cult seem like the most profound and important experience in your entire life.
If you are a lead in a company, you suddenly have a profound spiritual experience based around your ability to hire and fire people and tell them what to do and can use the drug to convince yourself that the ideas you are coming up with are the most profound thoughts a person has ever had. You won't even realize you're shutting out good ideas, because you have a messianic belief in AI or crypto or whatever the thing is, and you take the shrooms to reinforce that belief and you create an environment around yourself and put people around you that reinforce that belief.
It's a very different experience than going to therapy to work through your fear or depression.
> It's hallucinogens. That's the drug of choice in tech
To the degree Silicon Valley has a drug right now, it’s ketamine. (Before that, it was weed.)
HGH and low dosage testosterone regimes seem popular as well.
Seriously? That just seems ridiculous. Is this an aging thing, where folks are worried about low testosterone or something?
A decent amount of the AI doomerism I've seen is classic criti-hype (as coined in [1], afaik): it's promotion of the technology thinly disguised as critique. Fear-mongering about AIs taking artists' or writers' jobs is yet another way of boasting about their capabilities.
[1] https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...
"My product is so good the world should be afraid of it" often comes from the mouths of the AI CEOs.
Same with CEOs who say "Our tech is too powerful, we want to be regulated" while knowing that Congress can barely function to pass a budget.
I was really starting to feel like Abe Simpson at one point, and then I read Good to Great. The central thesis of that book is that you might work at a good company that avoids confronting 'brutal facts', but you will never find a great company that does.
You have to slay bad things to make a great company, and you have to identify them before they're chewing on something vital - which means going out of your way to ask whether things are actually bad or just annoying.
> when I talk to people outside of my area it's like they've all taken obscene quantities of some new stimulant and can't even connect with reality anymore
Can you share some examples of what you mean by this? I've encountered people who are excited about "AI", for sure, but who are excited because a problem that has plagued them for years suddenly became solvable. Excited because the way they used to learn about things just suddenly changed. Excited because there's a better way to do research than to scour the crappy results provided by the big search engines.
There is absolutely an element of hype that far outstrips the reality, and after the single-function apps like "Summarize this paper for me" are monetized to death, we'll enter the trough of disillusionment, and some reality will set in. But there are absolutely transformative use cases unlocked by the latest generation of tools that are very real, exciting, and that enable a new generation of users to interact with computers in a way that was previously the stuff of science fiction.
I've spent much of my career working on boring enterprise tech, building software that's not flashy, but gets work done. The problem space is extremely large, and the solution space is extremely inadequate. It's this vast space of unsolved business problems that make some of this hype more real, IMO. LLMs in particular will be transformative for many of the problems the big enterprise platforms solve, and have the potential to solve some of the messiest parts of operating in that space.
I don't know that I agree with the "Years of gloom" characterization, and I agree that there are other underlying currents re: sustainability, but I can't help but feel this comment endorses a different problematic extreme. In the middle of the hysteria are real uses cases that will change how we interface with computers, and industries will be transformed/new categories created.
Anecdotally, GPT4 has completely changed how I approach research and troubleshooting and has been saving me many hours on a regular basis. Weird error message in a linux log file? I'll provide some context and paste the error, and a few minutes later the issue is solved based on a decently good answer.
Picking up a new language? I'll ask for example code to solve a very specific use case, and then keep returning when I run into errors. The rate of learning enabled this way is pretty remarkable.
If the only thing that we get out of this craze are some extremely high quality LLMs with an accurate understanding of most human knowledge, that alone is an incredible jump forward.
Exactly. The AI car-ethics news was always marketing to distract from the fact that it doesn't work.
As someone who works "close to the metal" in AI, do you really believe AI is just another "bubble" the way crypto was? I'm having a tough time finding the two even similar. It's clear how much potential it has to change things far more than even the most optimistic views of what crypto bros were touting as the future a few years ago.