Elon Musk pushes out more xAI founders as AI coding effort falters

2026-03-13 16:40 · www.ft.com


Comments

  • By dang 2026-03-13 20:01

    All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.

    If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.

    https://news.ycombinator.com/newsguidelines.html

  • By Imnimo 2026-03-13 20:56 (6 replies)

    I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.

    • By jazzpush2 2026-03-13 22:23 (3 replies)

      In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!

      • By jarrettcoggin 2026-03-13 22:33 (2 replies)

        From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

        • By exe34 2026-03-13 22:55

          yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?

      • By bdangubic 2026-03-13 22:34

        wild, but not surprising! anything else interesting you can share from that interview?

      • By kvetching 2026-03-13 22:34 (2 replies)

        I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it and then providing suggestions... He wants it to be truthful... It was shown on benchmarks recently that it hallucinates the least...

        • By Braxton1980 2026-03-13 22:44

          >He wants it to be truthful

          How do you know this? Why would you believe him, considering the massive lies he's told, for example about widespread fraud in the 2020 election?

        • By watwut 2026-03-13 22:48

          No, he does not want it to be truthful. Elon loves lying; lying is the thing he does the most.

          He wants it to promote nazism. And he wants it to lie in the process.

    • By oceanplexian 2026-03-13 22:52 (1 reply)

      > But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

      The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.

      • By squidbeak 2026-03-13 22:56

        > The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.

        What an ugly trope. Idealism motivates Chinese workers just as often as workers of any other nationality.

    • By lich_king 2026-03-13 22:34 (1 reply)

      Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".

      • By tibbar 2026-03-13 22:41

        I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

        This is less noble than how Anthropic presents itself, but still much more attractive to many than xAI.

    • By dan-robertson 2026-03-13 22:28 (6 replies)

      Why does being a top AI researcher so often come with this philosophical bent you describe?

      • By ladberg 2026-03-13 22:30 (3 replies)

        You are paying the smartest people in the world to think really, really hard, and it turns out they might also think really, really hard about not making the world a worse place.

        • By asddubs 2026-03-13 22:43

          it's not working

        • By watwut 2026-03-13 22:52

          Except they do? They are certainly not making it a better place. Like, OK, it is money for a few companies and a salary; it is business and probably fun work.

          But it is absurd to claim it is "making the world a better place".

        • By bdangubic 2026-03-13 22:37 (1 reply)

          Is this really the case though? How many of the smartest people do you really think fit this narrative? I want to believe there are at least some, but I think they are a minority in this group… otherwise all these pretty much evil corporations would have an awfully difficult time attracting talent? Maybe some do, but…

          • By saagarjha 2026-03-13 22:53

            Most evil corporations have fairly normal jobs available.

      • By mynameisash 2026-03-13 22:50

        I would think it's because of the staggering money they're making. According to Fortune[0]:

        > Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

        > Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

        If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

        [0] https://archive.ph/lBIyY

      • By wombatpm 2026-03-13 22:36

        Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.

      • By derektank 2026-03-13 22:37

        Because a lot of them are academics that are doctors of philosophy

      • By refulgentis 2026-03-13 22:33

        Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?

      • By hermanzegerman 2026-03-13 22:29 (2 replies)

        Because they can afford it, they are very sought after.

        And smart people usually have moral convictions.

        I know for some people on this website it's hard to understand, but not everything in life is about $$$

        • By 0x3f 2026-03-13 22:45

          > And smart people usually have moral convictions.

          Are you sure you don't just like the moral convictions and so engage in trait bundling?

          Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.

          Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.

        • By siva7 2026-03-13 22:31 (2 replies)

          I'm smart and you can buy my morals. So what?

          • By refulgentis 2026-03-13 22:33

            So what, indeed (not sure what you mean)

          • By hermanzegerman 2026-03-13 22:43

            Those people get paid so much anyway that they don't have to compromise their morals.

            I guess that's not the case for you and me

    • By mattbillenstein 2026-03-13 22:40

      This is becoming the problem with all of his businesses. Tesla has a crazy valuation, and it really seems like they're having huge trouble getting Robotaxi going in Austin, given the very slow progress there.

    • By zeroCalories 2026-03-13 21:06 (4 replies)

      It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.

      • By vessenes 2026-03-13 21:34 (8 replies)

        There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".

        • By NeutralCrane 2026-03-13 21:46

          My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.

        • By KaiserPro 2026-03-13 22:20

          I don't really think that's true.

          The deal with Tesla is that there is a relatively small employer pool, so you can be a fairly bad employer but still get good outcomes. The same goes for SpaceX. Sure, early Tesla had some stories about it being fun, but there was/is a dark side.

          The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at Meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months led to bad outcomes. Having someone single you out because the boss had a bad day is not how good research gets done.

          We have seen (a few of my friends were at Twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (i.e. camping at work). Some researchers love that, assuming they have peace to research and are listened to. But a lot don't.

        • By rconti 2026-03-13 22:26

          What about all the ones who are suing him for shortchanging them?

        • By raw_anon_1111 2026-03-13 21:39 (2 replies)

          Ask the people at Twitter..

          • By cladopa 2026-03-13 22:38 (2 replies)

            You mean the 80% of the workforce that was fired and the company continued running just fine?

            Usually, firing just 3 to 5% of any company's workers has terrible consequences for the company that does it.

            It does not speak so well of the workers.

            • By mattbillenstein 2026-03-13 22:43

              He also cut 80% of the traffic... And the fact that it kept running with him willy-nilly pulling network cables is a credit to the work they did to make it resilient to failure.

            • By watwut 2026-03-13 22:55

              It was significantly worse: it could not keep advertisers, and it became overrun by bots. The quality went down significantly. And earnings too.

          • By JumpCrisscross 2026-03-13 21:44 (3 replies)

            > Ask the people at Twitter

            The ones with stock options in, now, SpaceX?

            • By sroussey 2026-03-13 21:58

              Poor SpaceX employees whose options got diluted by Twitter. :/

            • By raw_anon_1111 2026-03-13 22:04

              Stock options aren’t magic. I bet you that the remaining Twitter employees won’t see a higher comp than equivalent employees at BigTech companies between their cash + RSUs when SpaceX IPOs.

              Aren’t employees also subject to a lock-up period where they still can’t sell their stock until $x number of months after an IPO, unlike employees of public companies who can sell as soon as they vest?

              Honest question, I’ve worked for public $BigTech but haven’t been at a company pre IPO

            • By rconti 2026-03-13 22:27

              No, the ones suing his ass.

        • By hermanzegerman 2026-03-13 22:31

          He's a notorious cheapskate and Tesla is known for firing people shortly before their stock options vest

        • By Zigurd 2026-03-13 22:00 (1 reply)

          > Elon's enriched pretty much everybody who's ever worked for and invested with him.

          I'd wager you were saying the same thing about bitcoin until last year.

          • By mediaman 2026-03-13 22:25

            I'm unclear what statement this is trying to make.

            Is it meant to draw equivalence between crypto and Tesla/SpaceX? That each has roughly similar (i.e., low) value to humanity, or value as businesses?

            Is it that the metric of whether a person makes others money is invalid?

            The comment seems coy, possibly to avoid making any claim at all, but it must not be that because that wouldn't be very sporting.

        • By Freedom2 2026-03-13 21:47

          Many ex-employees have said to me that working for Elon did not enrich them at all, either financially or professionally.

        • By jamespo 2026-03-13 21:38 (1 reply)

          There's probably a lot of survivor bias going on there

          • By vessenes 2026-03-13 22:49

            Undoubtedly. With $2.5T in value between TSLA and SpaceX, that’s a lot of value for survivors.

      • By ai_critic 2026-03-13 22:32 (1 reply)

        Gooning and racism have been a cornerstone of humanity since we descended from the trees, for better or worse.

      • By LZ_Khan 2026-03-13 21:33 (2 replies)

        After seeing the type of people he hired for doge.. yikes.

        • By hooch 2026-03-13 22:18 (1 reply)

          Was doge ever anything more than a "get root, grab the data, and run" operation?

        • By GeorgeTirebiter 2026-03-13 22:08 (1 reply)

          Karpathy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?

          • By jazzpush2 2026-03-13 22:25

            Karpathy makes great educational content. It's not clear what industry (or academic) research he did even now, five years later.

      • By vibeprofessor 2026-03-13 22:07

        [dead]

  • By bearjaws 2026-03-13 19:40 (12 replies)

    Feel like the canary was when Grokpedia became a project.

    Giant waste of time while Anthropic/OAI keep surging forward.

    I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure, keeping up with real-time topics can be useful, but I am not sure how much of a product that is.

    • By paulbjensen 2026-03-13 20:59 (7 replies)

      The Twitter social graph was an amazing data asset. I worked at a consumer insights firm and the data on followers/followings was quite powerful.

      Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.

      With that data, you could work out:

      - What celebrities/influencers to use in marketing campaigns
      - Where to advertise, and on which TV/radio channels
      - What potential brands to collaborate with to expand your customer base
      - What tone of voice to use in your advertising
      - In some cases, we educated clients about who their actual customers were, better than they understood themselves.
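The approach described above can be sketched in a few lines: tag the accounts a brand's followers also follow with a taxonomy, then count which categories dominate. This is a toy illustration under stated assumptions, not the firm's actual system; every handle and category name here is invented.

```python
from collections import Counter

# Hypothetical taxonomy mapping followed accounts to interest categories.
TAXONOMY = {
    "@a_footballer": "sports/celebrity",
    "@a_fashion_mag": "magazines/fashion",
    "@a_gaming_channel": "games",
    "@a_breakfast_show": "tv/morning",
}

def profile_audience(followers_followings):
    """For each follower of a brand, look at who *they* follow and
    count which taxonomy categories dominate the audience."""
    counts = Counter()
    for followings in followers_followings:
        categories = {TAXONOMY[h] for h in followings if h in TAXONOMY}
        counts.update(categories)  # each category counted once per person
    return counts.most_common()

# Three hypothetical followers of the brand, and who each of them follows.
audience = [
    {"@a_footballer", "@a_gaming_channel"},
    {"@a_footballer", "@a_breakfast_show"},
    {"@a_fashion_mag"},
]
print(profile_audience(audience))  # "sports/celebrity" tops the list with 2
```

A real deployment would run this over millions of follow edges and feed the resulting category profile into media planning, as in the deodorant anecdote that follows.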

      In one scenario, we built a social media feed based on the things that a group of customers following a well-known deodorant brand in the UK would see.

      When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”

      The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones that had been looking at their TV adverts of women on beaches chasing a man who happened to spray their Deodorant on them. Their advertising from the past had been very effective.

      That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.

      I’m pretty sure he must be delighted with how things have panned out since.

      • By johnisgood 2026-03-13 22:55

        This reads very dystopian. You are not optimizing to understand people, you are optimizing to weaponize that understanding against them.

        When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.

        And this happens at scale, invisibly. People never see the manipulation.

        In any case, it is not useful for most people. It is useful for the people doing the deceiving.

      • By BLKNSLVR 2026-03-13 22:21

        That entire description sounds worthless to any positive direction of humanity. Therefore, probably rapaciously profitable.

        Very sad face.

      • By smcin 2026-03-13 21:05

        That Zuckerberg quote was published in 2013 and supposedly was made a year or more before. Was it about when Dick Costolo was CEO (2010-2012)?

      • By gwern 2026-03-13 21:59 (2 replies)

        It's definitely very valuable, but for what AI model? How does any of that lead to AGI, or even just a good coding agent?

        • By applfanboysbgon 2026-03-13 22:10

          It doesn't need to lead to AGI or a good coding agent. Some of the only people who are actually profitable in the LLM industry are the people making actual chatbots. There are several bootstrapped startups that run open-weight models with a $10 or $20 monthly sub and make millions in profit off of inference from people just talking to the things, usually for character roleplay / "AI boyfriend/girlfriend" stuff etc. Some of them even took those profits and invested it into training their own bespoke models from scratch, usually on the smaller side although finetunes/retrains of Llama 70b, GLM, and Deepseek 670b have also been done. Grok could probably be profitable if it targeted this space, as the most "intelligent" conversational/uncensored model.

          This is already presupposing that profit even matters, though. Musk already burned some $50 billion to control messaging on political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved through influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply-personalised narratives.

        • By KaiserPro 2026-03-13 22:23

          > but for what AI model?

          Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data is super useful.

          For coding or "AGI", no, it's not useful. For building a text-based (possibly image-based) categorisation system, top class.
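As a hedged sketch of the sentiment-analysis use described above, learning which words lead to which outcomes and then predicting on new data, here is a deliberately tiny word-counting scorer. The training lines are made up; a real system would use a proper classifier over far more data.

```python
from collections import Counter

# Made-up labelled examples: 1 = positive outcome, 0 = negative.
train = [
    ("love this launch great work", 1),
    ("amazing update really helpful", 1),
    ("terrible outage again awful", 0),
    ("broken and slow hate it", 0),
]

# Count how often each word appears under each label.
pos, neg = Counter(), Counter()
for text, label in train:
    (pos if label else neg).update(text.split())

def score(text):
    # Positive-word evidence minus negative-word evidence:
    # > 0 leans positive, < 0 leans negative.
    return sum(pos[w] - neg[w] for w in text.split())

print(score("great launch"))  # 2, leans positive
print(score("awful outage"))  # -2, leans negative
```

The same shape, with word counts replaced by learned weights, underlies classic bag-of-words classifiers.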

      • By alex1138 2026-03-13 22:27

        As an aside, that quote from MZ does bother me. There's more to making a web-scale, human-rights-respecting platform (because it has to be; it's the internet, and social media needs guidelines) than just making money (which Zuck doesn't seem to care much about anyway if he's sinking apparently billions into the metaverse while having no account support).

        Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey, but it felt like the spirit of the company at first was relatively neutral; it was a tool, it was what Jack came up with.

        Zuck replaced people's email addresses[1], and the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car gold mine". Or was it "dumb fucks"[2]?

        [1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122

      • By cyanydeez 2026-03-13 21:07

        It _was_ a great asset. However, just as models need proper data, as soon as Musk removed the clamps on valuable social signals, he basically took a dump where he intended to eat.

    • By notahacker 2026-03-13 19:49 (3 replies)

      Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs

      • By tclancy 2026-03-13 20:10 (1 reply)

        >Twitter's communication style being based around brevity

        Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlamydia.

        • By 3rodents 2026-03-13 20:28

          Elon was running some sort of $1m competition for the “best” Twitter post for a few months. I think those types of dissertations about phrenology and the like have fallen off a cliff since the competition ended.

      • By aleph_minus_one 2026-03-13 19:52

        > Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs

        This depends on what one wants to optimize the AI for. ;-)

      • By libertine 2026-03-13 20:18 (1 reply)

        And the amount of bots there isn't helpful either.

        • By facemelt2 2026-03-13 20:29 (3 replies)

          recent changes in their comment system have reduced my exposure to bots to a level I much prefer over every other platform I use

          • By tanjtanjtanj 2026-03-13 20:41

            How recent? As recently as last weekend I was seeing blue check marks replying with AI generated only-technically-related replies on top of the majority of the posts I looked at.

          • By rvnx 2026-03-13 21:12

            There are bots here too, a lot of them, to the point that the rules were amended; this is because it's very valuable to give points to new publications.

          • By libertine 2026-03-13 20:42

            If that's actually true, good for them, but after what I've witnessed there not that long ago, I doubt I'll try it ever again.

    • By brokencode 2026-03-13 19:58 (5 replies)

      It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.

      • By freehorse 2026-03-13 20:25 (2 replies)

        Many projects at his companies seem more and more to be Musk's vanity projects rather than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck stock that nobody wants to buy and that thus had to be bought by his other companies. And it is getting worse and worse, especially since he bought Twitter and sped up his tweeting rate.

        • By dmarcos 2026-03-13 20:59 (3 replies)

          FWIW it looks like there’s now a demand surge with the introduction of the new cheap Cybertruck variant. Delivery dates are pushed out to the fall of 2026.

          • By robrain 2026-03-13 21:16 (1 reply)

            That was an artificial boost created by setting a time-limit for a low price. There were ten days to buy at the price, then they put it back up. [1]

            [1] https://electrek.co/2026/03/01/tesla-cybertruck-awd-price-in...

            EDIT: grammar

            • By parineum 2026-03-13 22:32

              What's an artificial boost? Sounds like you're describing a sale.

          • By NewJazz 2026-03-13 21:31

            Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.

          • By MPSimmons 2026-03-13 21:18

            A push on delivery dates is as likely to mean production issues as it is an influx of interest.

        • By scottyah 2026-03-13 21:24

          [flagged]

      • By squarefoot 2026-03-13 20:06

        Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.

      • By Timon3 2026-03-13 20:29 (1 reply)

        I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it, but the primary goal is not to convince humans; it is to influence search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture-war ideas without ever being any the wiser.

        It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.

        • By danabramov 2026-03-13 20:44

          I've seen Claude pick it up too. It's disconcerting.

      • By alex1138 2026-03-13 20:19 (2 replies)

        I can both dislike Elon and also think Wikipedia is very captured on some things.

        • By ryandrake 2026-03-13 20:36 (3 replies)

          Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?

          • By servo_sausage 2026-03-13 22:23

            I find it more surprising that the common understanding has shifted away from "wikis are crap for anything new or political".

            As soon as there is a plausible agenda for selecting a narrative, the way Wikipedia works should make us sceptical.

            For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion; and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.

            And these topics are not nearly as controversial as race, feminism, or transgender topics.

          • By gowld 2026-03-13 20:50 (2 replies)

            It's not errors of fact, it's errors of omitted facts.

            • By ibero 2026-03-13 20:57

              Are there actual good examples showing errors of omitted facts on Wikipedia that are verifiably correct, that demonstrate how it is "captured"?

            • By decimalenough 2026-03-13 21:09

              [flagged]

          • By AuryGlenz 2026-03-13 20:50 (5 replies)

            [flagged]

            • By JumpCrisscross 2026-03-13 20:58

              The Minnesota Transracial Adoption Study was methodologically flawed. “Children with two black parents were significantly older at adoption, had been in the adoptive home a shorter time, and had experienced a greater number of preadoption placements.”

              Reframed, the study seemed to find (a) black kids are adopted less readily and (b) the longer a kid spends in the foster system, the lower their IQ at 17. (There is also limited controlling for epigenetic factors because we didn’t understand those well in the 1970s and 80s.)

              Based on how new human cognition is, and genetically similar human races are, it would be somewhat groundbreaking to find an emergent complex trait like IQ to map to social constructs like race, particularly ones as broad as American white and black. (There is more genetic diversity in single African tribes than in some small European countries. And American whites and blacks are all complex hybridized social categories.)

              [1] https://en.wikipedia.org/wiki/Minnesota_Transracial_Adoption...

            • By AlotOfReading 2026-03-13 21:00 (1 reply)

              It seems like the root of your statement is the treatment of "race" as a purely biological classification. Wikipedia correctly notes the consensus position that race is a social construct [0] that's difficult to use accurately when discussing IQ. Grok makes the implicit and incorrect assumption that genetic factors = race, among other issues.

              [0] https://www.genome.gov/genetics-glossary/Race

              • By darkwater 2026-03-13 22:21

                I wonder how much longer that link will stay up with the current administration...

            • By epgui 2026-03-13 20:52 (1 reply)

              Have you considered the possibility that your opinion is just not representative of the scientific consensus?

              • By charcircuit 2026-03-13 21:16

                Wikipedia does not care about scientific consensus. It just summarizes "reliable" secondary sources.

            • By lobf 2026-03-13 20:55

              >As you can see, Wikipedia is very dismissive to the point of effectively lying.

              Did I miss where you presented evidence that wikipedia is wrong? You seem to be taking an assumption you carry (race is related to IQ) and assuming everyone believes it's true as well, thus wikipedia is lying.

            • By erxam 2026-03-13 21:02

              [flagged]

        • By freehorse 2026-03-13 20:32 (2 replies)

          I can understand somebody not liking Wikipedia; I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as an idea or implementation.

          • By atonse 2026-03-13 21:55 (3 replies)

            > I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.

            Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?

            Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).

            Grokipedia is roughly the same concept as "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it; preserve citations".

            So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.

            We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports with AI), whether or not it needs tweaks to remove political bias.
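The batch "deep research" concept described above can be sketched as a three-stage pipeline: draft, verify, minimally merge into the existing article. This is a minimal sketch under stated assumptions; `research`, `verify`, and `minimal_update` are placeholders standing in for LLM calls, not any real API.

```python
def research(topic):
    # Placeholder: an LLM would gather sources and draft an article here.
    return {"topic": topic, "body": f"draft on {topic}", "citations": ["source-1"]}

def verify(report):
    # Placeholder: cross-check the draft's claims against its citations.
    return report

def minimal_update(old, new):
    # Keep the existing article where possible; preserve citations.
    if old is None:
        return new
    return {**old, "body": new["body"],
            "citations": sorted(set(old["citations"]) | set(new["citations"]))}

def build(topics, existing=None):
    # Run the draft -> verify -> minimal-merge pipeline over every topic.
    existing = existing or {}
    return {t: minimal_update(existing.get(t), verify(research(t))) for t in topics}

pages = build(["topic A", "topic B"])
print(sorted(pages))  # ['topic A', 'topic B']
```

Whether the verify stage actually catches hallucinations, rather than rubber-stamping them, is exactly the point of contention in the replies below.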

            • By chipotle_coyote 2026-03-13 22:26

              > Have you used AI to write documentation for software?

              Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:

              - AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)

              - AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)

              - The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.

              - Internal links to other pages will often be incorrect.

              - Summaries will often be superfluous.

              - It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)

              - The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.

              - It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.

              As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.

            • By freehorse 2026-03-1322:12

              No, I don't trust an encyclopedia generated by AI. Projects with much narrower scopes are not comparable.

              edit: I am not very excited by AI-generated documentation either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness largely rests on are themselves LLM-generated. I am afraid this will inevitably result in a drop in quality that will also affect the LLMs themselves downstream. I think we underestimate how much the intentionality of human-written text, in LLM training sets and context windows, contributes to their giving relevant/useful output.

          • By scottyah 2026-03-1321:284 reply

            > "grokipedia" as idea

            So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?

            • By freehorse 2026-03-1321:59

              Because not liking something does not imply liking any possible alternative.

              Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make profit out of it, or use it for their ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered "ground truth" of information seems a bad idea, and having AI generate that an even worse one.

              I can understand somebody not liking something about how wikipedia is governed or operated; after all, anything that involves getting humans to work together at such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances such a project eventually has to take (even if one tries to be as neutral as possible, it is impossible to avoid some clash somewhere about exactly where that neutrality lies). But grokipedia is much more than "wikipedia but different ideologically".

              edit: just to be clear, I see a critique of the "idea of grokipedia" as, e.g., the critique that it is a billionaire-controlled, AI-generated project meant to substitute for wikipedia; a critique of the implementation would be finding flaws in actual grokipedia articles (overall). I think the idea itself is already flawed enough.

            • By debugnik 2026-03-1321:44

              They meant the idea of Wikipedia rewritten by Grok (or another controversial LLM) specifically, not just any alternative.

            • By wat10000 2026-03-1321:55

              Not all alternatives are necessarily worthy. I can understand someone not liking tomatoes. I can't understand someone liking depleted uranium.

      • By tclancy 2026-03-1320:11

        [flagged]

    • By UncleOxidant 2026-03-1319:581 reply

      > Giant waste of time while Anthropic/OAI keep surging forward.

      And Google. They're quietly making a lot of progress in the coding space with Antigravity and Gemini 3.1.

      • By koakuma-chan 2026-03-1320:013 reply

        Has Antigravity gotten any better?

        • By UncleOxidant 2026-03-1322:49

          I find it pretty good. And Gemini 3.1 Pro seems quite capable. Not as good at some things as Claude, but better at others. I was trying to target a verilog design to an uncommon FPGA and board, and Gemini went out and searched for the FPGA docs and examined the schematics for the board in order to do the pin assignments (it generated a .ccf file). Not sure if Claude could've done that.

        • By sunaookami 2026-03-1321:191 reply

          It has gotten worse and they tightened the limits for paying customers recently: https://x.com/antigravity/status/2031835833716625883 (only announcement on Twitter, not in the app nor via email)

          • By kivle 2026-03-1321:51

            Limits are so low that I cancelled after about two weeks on my initial $0 trial. I tried making a change to a tiny code base with Claude Sonnet (which they offer in Antigravity). It couldn't even finish the change before my weekly limit was used up, reset in 7 days.

        • By BoredPositron 2026-03-1320:15

          Probably the best value for a good amount of anthropic credits. You can also share your Google ai subscription with up to four family members and they all get the same amount of credits...

    • By jmspring 2026-03-1319:43

      Twitter has the mass adoption, and it takes effort to avoid bot and viewpoint bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.

    • By sheepscreek 2026-03-1321:201 reply

      AFAIK Grok still doesn’t have a CLI coding agent that works with a subscription. That’s a shame. Grok Code Fast 1 was pretty impressive when it came out - for what it did, and they never followed it up with a new version.

      • By sroussey 2026-03-1322:49

        You can use cursor with grok, though my experience is that grok is the worst of the API providers cursor supports.

    • By ben_w 2026-03-1320:231 reply

      > Feel like the canary was when Grokpedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.

      Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`

      Agree re Twitter "good" != valuable.
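      Spelled out in Python, that hypothesized one-pass rewrite is just a loop over articles; llm() here is a stand-in stub, not a real model call:

```python
# Sketch of the per-article rewrite hypothesized above.
# llm() is a placeholder stub, not an actual model API.

def llm(system_prompt, article):
    # Placeholder: a real implementation would call a model here.
    return f"[rewritten per: {system_prompt}] {article}"

def rewrite_all(articles, system_prompt):
    # The entire "pipeline": one LLM pass per existing article.
    return {title: llm(system_prompt, body) for title, body in articles.items()}

wiki = {"Moon": "The Moon is Earth's only natural satellite."}
grokipedia = rewrite_all(wiki, "Rewrite this article.")
```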

      • By sroussey 2026-03-1322:50

        Where system prompt lists a certain someone’s latest tweets.

    • By samrus 2026-03-1321:53

      Twitter as a data source is interesting. I think it gets overhyped because that's Elon's grift. But I can't deny that the real-time info aspect of it is pretty valuable. Still, I don't think it's much more valuable than the open internet as a context source: everything worthwhile on Twitter ends up elsewhere with a bit of lag, and the stuff that doesn't is noise anyway.

    • By giancarlostoro 2026-03-1320:07

      > but I cannot imagine it's a valuable dataset.

      It's going to be a mixed batch, but any time there are world events - as far back as I can remember - Twitter (now X) has always been first with breaking news. Plenty of people and news orgs are still on X because they need to be there for the audience.

    • By BurningFrog 2026-03-1320:44

      Grok is trained on pretty much the same giant web crawl/text corpus as the other AIs.

    • By EGreg 2026-03-1321:001 reply

      I'm not a fan of Elon's software endeavors, ever since he bought Twitter and turned it into an even worse cesspool of angry political nonsense than it used to be. I don't like how he's been biasing Grok, etc.

      But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.

      • By kennywinker 2026-03-1321:441 reply

        I think the issue is simply this: wikipedia trends towards unbiased info through use of the crowd. Grok, with a single owner who has an axe to grind, trends towards whatever Elon wants. It’s poisoned information under the control of one man - cyberpunk novels have been written about less.

        • By wat10000 2026-03-1322:01

          A concrete example: a few weeks ago, Musk was making a big deal about how most of his massive net worth was not held in cash, and by a total coincidence the phrase "primarily derived from equity stakes rather than cash" showed up on his Grokipedia page in the section about net worth. I checked the pages of several other extremely wealthy people and none of them had such a comment.

    • By vibeprofessor 2026-03-1322:09

      [dead]

HackerNews