Google Titans architecture, helping AI have long-term memory

2025-12-07 12:23 · research.google

We introduce the Titans architecture and the MIRAS framework, which allow AI models to work much faster and handle massive contexts by updating their core memory while it's actively running.

The Transformer architecture revolutionized sequence modeling with its introduction of attention, a mechanism by which models look back at earlier inputs to prioritize relevant input data. However, computational cost increases drastically with sequence length, which limits the ability to scale Transformer-based models to extremely long contexts, such as those required for full-document understanding or genomic analysis.

The research community has explored various solutions, such as efficient linear recurrent neural networks (RNNs) and state space models (SSMs) like Mamba-2. These models offer fast, linear scaling by compressing context into a fixed-size state. However, this fixed-size compression cannot adequately capture the rich information in very long sequences.

In two new papers, Titans and MIRAS, we introduce an architecture and theoretical blueprint that combine the speed of RNNs with the accuracy of transformers. Titans is the specific architecture (the tool), and MIRAS is the theoretical framework (the blueprint) for generalizing these approaches. Together, they advance the concept of test-time memorization, the ability of an AI model to maintain long-term memory by incorporating more powerful “surprise” metrics (i.e., unexpected pieces of information) while the model is running and without dedicated offline retraining.

The MIRAS framework, as demonstrated by Titans, introduces a meaningful shift toward real-time adaptation. Instead of compressing information into a static state, this architecture actively learns and updates its own parameters as data streams in. This crucial mechanism enables the model to incorporate new, specific details into its core knowledge instantly.

An effective learning system requires distinct yet interconnected memory modules, mirroring the human brain's separation of short-term and long-term memory.

While attention mechanisms excel at precise, short-term memory, Titans introduces a novel neural long-term memory module that, unlike the fixed-size vector or matrix memory in traditional RNNs, acts as a deep neural network (specifically, a multi-layer perceptron). This memory module provides significantly higher expressive power, allowing the model to summarize large volumes of information without losing important context. The model isn't simply taking notes; it's understanding and synthesizing the entire story.

Crucially, Titans doesn’t just passively store data. It actively learns how to recognize and retain important relationships and conceptual themes that connect tokens across the entire input. A key aspect of this ability is what we call the “surprise metric”. In human psychology, we know we quickly and easily forget routine, expected events but remember things that break the pattern — unexpected, surprising, or highly emotional events.

In the context of Titans, the "surprise metric" is the model detecting a large difference between what it currently remembers and what the new input is telling it.

  • Low surprise: If the new word is "cat" and the model's memory state already expects an animal word, the gradient (surprise) is low. It can safely skip memorizing the word "cat" in its permanent long-term state.
  • High surprise: If the model's memory state is summarizing a serious financial report, and the new input is a picture of a banana peel (the unexpected event), the gradient (surprise) will be very high. This signals that the new input is important or anomalous, and it must be prioritized for permanent storage in the long-term memory module.

The model uses this internal error signal (the gradient) as a mathematical equivalent of saying, "This is unexpected and important!" This allows the Titans architecture to selectively update its long-term memory only with the most novel and context-breaking information, keeping the overall process fast and efficient.
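As a rough illustration, the surprise signal can be read as the gradient of the memory's own prediction error. The sketch below is a minimal NumPy toy: a plain linear map stands in for Titans' deep MLP memory, and all names, shapes, and thresholds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def surprise(memory_W, key, value):
    """Surprise = magnitude of the gradient of the memory's prediction
    error on the new (key, value) pair. memory_W is the current
    associative memory (a simple matrix in this toy)."""
    pred = memory_W @ key          # what the memory currently expects
    err = pred - value             # prediction error on the new input
    grad = np.outer(err, key)      # d/dW of 0.5 * ||W k - v||^2
    return np.linalg.norm(grad), grad

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
k = rng.normal(size=4)

# An input the memory already predicts well -> low surprise
v_expected = W @ k + 0.01 * rng.normal(size=4)
low, _ = surprise(W, k, v_expected)

# An input that contradicts the memory -> high surprise
v_unexpected = -W @ k
high, _ = surprise(W, k, v_unexpected)
assert high > low
```

A large gradient norm is the "this is unexpected and important!" signal described above; a small one means the input can safely be skipped.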

Titans refines this mechanism by incorporating two critical elements:

  1. Momentum: The model considers both "momentary surprise" (the current input) and "past surprise" (the recent context flow). This ensures relevant subsequent information is also captured, even if those tokens are not individually surprising.
  2. Forgetting (weight decay): To manage the finite capacity of the memory when dealing with extremely long sequences, Titans employs an adaptive weight decay mechanism. This acts as a forgetting gate, allowing the model to discard information that is no longer needed.
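The two refinements above can be sketched as a single toy update rule. This is a hedged illustration under the same linear-memory simplification: `titans_style_update`, its coefficients, and the training loop are invented for exposition, not the paper's actual parameterization.

```python
import numpy as np

def titans_style_update(W, S, key, value, lr=0.1, momentum=0.9, decay=0.05):
    """One illustrative memory update combining the two refinements:
    momentum carries 'past surprise' forward, and weight decay acts as
    a forgetting gate on stale memory. Coefficients are arbitrary."""
    grad = np.outer(W @ key - value, key)   # momentary surprise (gradient)
    S = momentum * S - lr * grad            # blend in past surprise
    W = (1.0 - decay) * W + S               # forget a little, then write
    return W, S

rng = np.random.default_rng(1)
k = rng.normal(size=4)
k /= np.linalg.norm(k)                      # unit key keeps the toy stable
v = rng.normal(size=4)
W, S = np.zeros((4, 4)), np.zeros((4, 4))
for _ in range(200):                        # repeatedly observe (k, v)
    W, S = titans_style_update(W, S, k, v)
err = np.linalg.norm(W @ k - v) / np.linalg.norm(v)  # memory recalls v from k
```

Because of momentum, tokens arriving just after a surprising one still leave a trace; because of decay, a pair the model stops seeing gradually fades.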

Every major breakthrough in sequence modeling — from modern transformers to the new, lightning-fast linear RNNs — is essentially the same thing under the hood: a highly complex associative memory module.

Accordingly, what makes MIRAS both unique and practical is the way it views AI modeling. Instead of seeing diverse architectures, it sees different methods of solving the same problem: efficiently combining new information with old memories without letting the essential concepts be forgotten.

MIRAS defines a sequence model through four key design choices:

  • Memory architecture: The structure that stores information (e.g., a vector, matrix, or a deep multi-layer perceptron, like in Titans).
  • Attentional bias: The internal learning objective the model optimizes that determines what it prioritizes.
  • Retention gate: The memory regularizer. MIRAS reinterprets "forgetting mechanisms" as specific forms of regularization that balance new learning against retaining past knowledge.
  • Memory algorithm: The optimization algorithm used to update the memory.
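One way to read these four choices is as pluggable slots in a single online-update loop. The sketch below is purely illustrative: `miras_step` and its argument names are assumptions, and the MSE bias plus weight-decay retention shown here are just the common defaults discussed in the text.

```python
import numpy as np

def miras_step(W, key, value, bias_grad, retention, lr=0.1):
    """One generic online-memory update. The four MIRAS design choices:
       memory architecture -> W (a matrix here; Titans uses a deep MLP)
       attentional bias    -> bias_grad, gradient of the internal objective
       retention gate      -> retention, regularizes the old memory
       memory algorithm    -> plain gradient descent (could add momentum)."""
    return retention(W) - lr * bias_grad(W, key, value)

# The common defaults: MSE attentional bias + weight-decay retention.
mse_bias_grad = lambda W, k, v: np.outer(W @ k - v, k)
decay_retention = lambda W: 0.995 * W

rng = np.random.default_rng(2)
k = rng.normal(size=3)
k /= np.linalg.norm(k)              # unit key for a stable toy example
v = rng.normal(size=3)
W = np.zeros((3, 3))
for _ in range(200):
    W = miras_step(W, k, v, mse_bias_grad, decay_retention)
err = np.linalg.norm(W @ k - v) / np.linalg.norm(v)  # small residual
```

Swapping in a different `bias_grad` or `retention` yields a different architecture in the same framework, which is exactly the design space the variants below explore.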

Virtually all successful existing sequence models rely on mean squared error (MSE) or dot-product similarity for both their bias and retention. This reliance can make models sensitive to outliers and limit their expressive power.

MIRAS transcends this limitation by providing a generative framework to explore a richer design space informed by the optimization and statistics literature. This allows for the creation of novel architectures with non-Euclidean objectives and regularization.

Using MIRAS, we created three specific attention-free models:

  • YAAD: We designed this MIRAS variant to be less sensitive to major errors or "outliers" (like a single typo in a large document). It uses a gentler math penalty (Huber loss) for mistakes, so it doesn't overreact to one-off issues. This makes the model more robust when the input data is messy or inconsistent.
  • MONETA: This model explores the use of more complex and strict mathematical penalties (called generalized norms). It investigates whether using these more disciplined rules for both what the model attends to and what it forgets can lead to a more powerful and stable long-term memory system overall.
  • MEMORA: This model focuses on achieving the best possible memory stability by forcing its memory to act like a strict probability map. By using this constraint, it ensures that every time the memory state is updated, the changes are controlled and balanced. This guarantees a clean, stable process for integrating new information.
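The robustness idea behind YAAD can be seen by comparing loss gradients directly: the Huber gradient is capped, so a single outlier cannot dominate an update the way it does under MSE. A minimal sketch (the `delta` threshold is an illustrative choice, not YAAD's actual setting):

```python
import numpy as np

def mse_grad(err):
    # Gradient of 0.5 * err^2 grows without bound with the error.
    return err

def huber_grad(err, delta=1.0):
    # Huber loss is quadratic near zero and linear beyond delta, so its
    # gradient is clipped at +/- delta: one outlier cannot dominate.
    return np.clip(err, -delta, delta)

small, outlier = 0.3, 50.0          # a routine error vs one huge outlier
assert mse_grad(small) == huber_grad(small)   # identical on normal data
assert mse_grad(outlier) == 50.0              # MSE overreacts...
assert huber_grad(outlier) == 1.0             # ...Huber stays bounded
```

Under MSE, the single outlier would drive an update roughly 50x larger than a routine one; under Huber it is capped, which is the "gentler math penalty" described above.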

We rigorously compared Titans along with MIRAS variants (YAAD, MONETA, MEMORA) against leading architectures, including Transformer++, Mamba-2, and Gated DeltaNet. We further validated versatility by testing Titans on genomic modeling (DNA) and time-series forecasting, proving the architecture generalizes effectively beyond text.

Across both standard language modeling datasets (C4, WikiText) and zero-shot reasoning tasks (HellaSwag, PIQA), our models consistently demonstrated higher accuracy and lower perplexity (a measure of how surprised an LLM is when looking at a piece of text; lower is better).
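For readers unfamiliar with the metric: perplexity is the exponential of the average negative log-likelihood a model assigns to the tokens it actually observed, so a less "surprised" model scores lower. A small self-contained sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) of the
    probabilities the model assigned to the observed tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

confident = [0.9, 0.8, 0.95, 0.85]   # model expected these tokens
surprised = [0.1, 0.2, 0.05, 0.15]   # model found these tokens unlikely
assert perplexity(confident) < perplexity(surprised)
```

A model that assigns probability 1.0 to every observed token would score a perplexity of exactly 1, the theoretical floor.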

Ablation studies clearly show that the depth of the memory architecture is crucial. When comparing long-term memory modules of the same size but different depths, modules with deeper memories consistently achieve lower perplexity in language modeling. Furthermore, they exhibit better scaling properties, maintaining performance as the sequence length increases significantly.

In language modeling and commonsense reasoning tasks, Titans architectures outperform state-of-the-art linear recurrent models (such as Mamba-2 and Gated DeltaNet) and Transformer++ baselines of comparable sizes. The novel MIRAS variants (MONETA, YAAD, MEMORA) also achieve improved performance compared to these baselines, validating the benefit of exploring robust, non-MSE optimization mechanisms. Importantly, these models maintain efficient, parallelizable training and fast linear inference speeds.

The most significant advantage of these new architectures is their ability to handle extremely long contexts. This is highlighted in the BABILong benchmark, a task requiring reasoning across facts distributed in extremely long documents. In this challenging setting, Titans outperforms all baselines, including extremely large models like GPT-4, despite having many fewer parameters. Titans further demonstrates the capability to scale effectively to context window sizes larger than 2 million tokens.

The introduction of Titans and the MIRAS framework marks a significant advancement in sequence modeling. By employing deep neural networks as memory modules that learn to memorize as data is coming in, these approaches overcome the limitations of fixed-size recurrent states. Furthermore, MIRAS provides a powerful theoretical unification, revealing the connection between online optimization, associative memory, and architectural design. By moving beyond the standard Euclidean paradigm, this research opens the door to a new generation of sequence models that combine the efficiency of RNNs with the expressive power needed for the era of long-context AI.


Read the original article

Comments

  • By okdood64 2025-12-07 14:05 (12 replies)

    From the blog:

    https://arxiv.org/abs/2501.00663

    https://arxiv.org/pdf/2504.13173

    Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.

    • By Palmik 2025-12-07 20:33 (1 reply)

      DeepSeek and other Chinese companies. Not only do they publish research, they also put their resources where their mouth (research) is. They actually use it and prove it through their open models.

      Most research coming out of big US labs is counter indicative of practical performance. If it worked (too) well in practice, it wouldn't have been published.

      Some examples from DeepSeek:

      https://arxiv.org/abs/2405.04434

      https://arxiv.org/abs/2502.11089

      • By abbycurtis33 2025-12-07 23:51 (2 replies)

        [flagged]

        • By pylotlight 2025-12-08 0:44 (1 reply)

          which of the ~5-10 papers DS published were stolen exactly?

        • By CGMthrowaway 2025-12-07 23:58 (3 replies)

          [flagged]

          • By grosswait 2025-12-08 12:17

            Could have picked a much stronger example of a false talking point.

          • By elmomle 2025-12-08 0:41 (4 replies)

            Your comment seems to imply "these views aren't valid" without any evidence for that claim. Of course the theft claim was a strong one to make without evidence too. So, to that point--it's pretty widely accepted as fact that DeepSeek was at its core a distillation of ChatGPT. The question is whether that counts as theft. As to evidence, to my knowledge it's a combination of circumstantial factors which add up to paint a pretty damning picture:

            (1) Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek

            (2) DeepSeek's claim of training a cutting-edge LLM using a fraction of the compute that is typically needed, without providing a plausible, reproducible method

            (3) Early DeepSeek coming up with near-identical answers to ChatGPT--e.g. https://www.reddit.com/r/ChatGPT/comments/1idqi7p/deepseek_a...

            • By nl 2025-12-08 7:06

              > Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek

              This is not the same thing at all. Current legal doctrine is that ChatGPT output is not copyrightable, so at most Deepseek violated the terms of use of ChatGPT.

              That isn't IP theft.

              To add to that example, there are numerous open-source datasets that are derived from ChatGPT data. Famously, the Alpaca dataset kick-started the open source LLM movement by fine tuning Llama on a GPT-derived dataset: https://huggingface.co/datasets/tatsu-lab/alpaca

            • By tim333 2025-12-08 0:57

              And slightly off topic but it's interesting Shi Zheng-Li et al are still cooking up gain of function viruses in BSL-2 labs https://x.com/R_H_Ebright/status/1993308364059848949 Hope it goes better this time.

            • By grafmax 2025-12-08 0:52 (3 replies)

              That’s an argument made about training the initial model. But the comment stated that DeepSeek stole its research from the US which is a much stronger allegation without any evidence to it.

              • By FpUser 2025-12-08 1:47

                For starters ChatGPT was pretty much trained on "stolen" data. However I actually do support it. I think both cases - ChatGPT preying on world wide data and Deepseek using such data by partially "borrowing" it from ChatGPT are fair game.

              • By elmomle 2025-12-08 0:59

                That's a fair point. I suspect that to one outside the field, their touting major breakthroughs while trying to conceal that their first model was a distillation may cause a sense of skepticism as to the quality of their research. From what I've gathered, their research actually has added meaningfully to understandings of optimal model scaling and faster training.

              • By epsteingpt 2025-12-08 1:42 (2 replies)

                [flagged]

            • By orbital-decay 2025-12-08 4:58

              >Your comment seems to imply "these views aren't valid" without any evidence for that claim.

              No, your comment seems to be a deflection. You made an outstanding claim, that DS stole some IP, and have been asked for outstanding evidence, or at least some evidence. You need to provide it if you want to be taken seriously.

              >Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek

              Where's the evidence for that? I also have a claim that I can't back up with anything more than XLab's report: before the release of R1, there were multiple attempts to hack DS's systems, which nobody noticed. [1]

              You really seem to have no idea what you're talking about. R1 was an experiment on teaching the model to reason on its own, exactly to avoid large amounts of data in post-training. It also partially failed, they called the failed snapshot R1-Zero. And it's pretty different from any OpenAI or Anthropic model.

              >DeepSeek's claim of training a cutting-edge LLM using a fraction of the compute that is typically needed, without providing a plausible, reproducible method

              DeepSeek published a lot more about their models than any top tier US lab before them, including their production code. And they're continuing doing so. All their findings in R1 are highly plausible and most are replicated to some degree and adopted in the research and industry. Moonshot AI trained their K2 on DeepSeek's architecture with minor tweaks (not to diminish their novel findings). That's a really solid model.

              Moreover, they released their DeepSeek-Math-7B-RL back in April 2024. [2] It was a tiny model that outperformed huge then-SOTA LLMs like Claude 3 Opus in math, and validated their training technique (GRPO). Basically, they made the first reasoning model worth talking about. Their other optimizations (MLA) can be traced back to DeepSeek v2.

              >Early DeepSeek coming up with near-identical answers to ChatGPT--e.g. https://www.reddit.com/r/ChatGPT/comments/1idqi7p/deepseek_a...

              That's n=1 nonsense, not evidence. GPT contamination was everywhere, even Claude used to claim to be GPT-3 occasionally, or Reddit Anti-Evil Team. (yes, really) All models have overlapping datasets that are also contaminated with previous models outputs, and mode collapse makes them converge on similar patterns which seem to come and go with each generation.

              [1] https://www.globaltimes.cn/page/202501/1327676.shtml

              [2] https://huggingface.co/deepseek-ai/deepseek-math-7b-rl

          • By moralIsYouLie 2025-12-08 10:38

            corporate espionage was my first thought back then. unfolding events since indicate that it wasn't theft but part of a deal. the magic math seems to check out, too

    • By mapmeld 2025-12-07 15:02 (7 replies)

      Well it's cool that they released a paper, but at this point it's been 11 months and you can't download Titans-architecture model code or weights anywhere. That would put a lot of companies ahead of them (Meta's Llama, Qwen, DeepSeek). Closest you can get is an unofficial implementation of the paper https://github.com/lucidrains/titans-pytorch

      • By alyxya 2025-12-07 16:34 (5 replies)

        The hardest part about making a new architecture is that even if it is just better than transformers in every way, it’s very difficult to both prove a significant improvement at scale and gain traction. Until google puts in a lot of resources into training a scaled up version of this architecture, I believe there’s plenty of low hanging fruit with improving existing architectures such that it’ll always take the back seat.

        • By tyre 2025-12-07 20:11 (1 reply)

          Google is large enough, well-funded enough, and the opportunity is great enough to run experiments.

          You don't necessarily have to prove it out on large foundation models first. Can it beat out a 32b parameter model, for example?

          • By swatcoder 2025-12-07 20:48 (2 replies)

            Do you think there might be an approval process to navigate when experiments costs might run seven or eight digits and months of reserved resources?

            While they do have lots of money and many people, they don't have infinite money and specifically only have so much hot infrastructure to spread around. You'd expect they have to gradually build up the case that a large scale experiment is likely enough to yield a big enough advantage over what's already claiming those resources.

            • By dpe82 2025-12-08 8:01 (1 reply)

              I would imagine they do not want their researchers unnecessarily wasting time fighting for resources - within reason. And at Google, "within reason" can be pretty big.

              • By howdareme 2025-12-08 10:34

                I mean, looking at antigravity, jules & gemini cli, they have no problem with their developers fighting for resources

            • By nl 2025-12-08 11:45

              I mean you'd think so, but...

              > In fact, the UL2 20B model (at Google) was trained by leaving the job running accidentally for a month.

              https://www.yitay.net/blog/training-great-llms-entirely-from...

        • By p1esk 2025-12-07 19:36 (1 reply)

          Until google puts in a lot of resources into training a scaled up version of this architecture

          If Google is not willing to scale it up, then why would anyone else?

          • By 8note 2025-12-08 1:44 (1 reply)

            chatgpt is an example on why.

            • By falcor84 2025-12-08 15:23

              You think that this might be another ChatGPT/Docker/Hadoop case, where Google comes up with the technology but doesn't care to productize it?

        • By nickpsecurity 2025-12-07 22:52

          But it's companies like Google that made tools like JAX and TPUs, saying we can throw together models with cheap, easy scaling. Their paper's math is probably harder to put together than an alpha-level prototype, which they need anyway.

          So, I think they could default on doing it for small demonstrators.

        • By m101 2025-12-08 0:15

          Prove it beats models of different architectures trained under identical limited resources?

        • By UltraSane 2025-12-07 17:11 (1 reply)

          Yes. The path dependence for current attention based LLMs is enormous.

          • By patapong 2025-12-07 19:10

            At the same time, there is now a ton of data for training models to act as useful assistants, and benchmarks to compare different assistant models. The wide availability and ease of obtaining new RLHF training data will make it more feasible to build models on new architectures I think.

      • By root_axis 2025-12-07 19:07

        I don't think the comparison is valid. Releasing code and weights for an architecture that is widely known is a lot different than releasing research about an architecture that could mitigate fundamental problems that are common to all LLM products.

      • By innagadadavida 2025-12-07 20:49

        Just keep in mind it is performance review time for all the tech companies. Their promotion of these seems to be directly correlated with that event.

      • By SilverSlash 2025-12-08 0:29

        The newer one is from late May: https://arxiv.org/abs/2505.23735

      • By mupuff1234 2025-12-08 4:39

        > it's been 11 months

        Is that supposed to be a long time? Seems fair that companies don't rush to open up their models.

      • By informal007 2025-12-07 15:25 (2 replies)

        I don't think model code is a big deal compared to the idea. If the public had recognized the value of the idea 11 months ago, they could have implemented the code quickly, because there are so many smart engineers in the AI field.

        • By jstummbillig 2025-12-07 15:37 (2 replies)

          If that is true, does it follow this idea does not actually have a lot of value?

          • By fancy_pantser 2025-12-07 17:09 (3 replies)

            Student: Look, there’s a hundred dollar bill on the ground! Economist: No there isn’t. If there were, someone would have picked it up already.

            To wit, it's dangerous to assume the value of this idea based on the lack of public implementations.

            • By lukas099 2025-12-07 18:15

              If the hundred dollar bill was in an accessible place and the fact of its existence had been transmitted to interested parties worldwide, then yeah, the economist would probably be right.

            • By NavinF 2025-12-07 22:02

              That day the student was the 100th person to pick it up, realize it's fake, and drop it

            • By dotancohen 2025-12-08 9:58

              In my opinion, a refined analogy would be:

              Student: Look, a well known financial expert placed what could potentially be a hundred dollar bill on the ground, other well-known financial experts just leave it there!

        • By mapmeld 2025-12-07 17:06

          Well we have the idea and the next best thing to official code, but if this was a big revelation where are all of the Titan models? If this were public, I think we'd have a few attempts at variants (all of the Mamba SSMs, etc.) and get a better sense if this is valuable or not.

      • By AugSun 2025-12-08 1:27 (1 reply)

        Gemini 3 _is_ that architecture.

        • By FpUser 2025-12-08 2:03

          I've read many very positive reviews about Gemini 3. I tried using it including Pro and to me it looks very inferior to ChatGPT. What was very interesting though was when I caught it bullshitting me I called its BS and Gemini expressed very human like behavior. It did try to weasel its way out, degenerated down to "true Scotsman" level but finally admitted that it was full of it. this is kind of impressive / scary.

    • By hiddencost 2025-12-07 17:39 (3 replies)

      Every Google publication goes through multiple review. If anyone thinks the publication is a competitor risk it gets squashed.

      It's very likely no one is using this architecture at Google for any production work loads. There are a lot of student researchers doing fun proof of concept papers, they're allowed to publish because it's good PR and it's good for their careers.

      • By jeffbee 2025-12-07 17:58

        Underrated comment, IMHO. There is such a gulf between what Google does on its own part, and the papers and source code they publish, that I always think about their motivations before I read or adopt it. Think Borg vs. Kubernetes, Stubby vs. gRPC.

      • By hustwindmaple 2025-12-08 2:52

        The amazing thing about this is the first author has published multiple high-impact papers with Google Research VPs! And he is just a 2nd-year PhD student. Very few L7/L8 RS/SWEs can even do this.

      • By Balinares 2025-12-08 7:53 (1 reply)

        I mean, they did publish the word2vec and transformers papers, which are both of major significance to the development of LLMs.

        • By DirkH 2025-12-08 17:04 (1 reply)

          Something that Google, in hindsight, regrets.

    • By bluecoconut 2025-12-07 16:53

      Bytedance is publishing pretty aggressively.

      Recently, my favorite from them was lumine: https://arxiv.org/abs/2511.08892

      Here's their official page: https://seed.bytedance.com/en/research

    • By Hendrikto 2025-12-07 14:17 (1 reply)

      Meta is also being pretty open with their stuff. And recently most of the Chinese competition.

      • By okdood64 2025-12-07 14:28 (5 replies)

        Oh yes, I believe that's right. What's some frontier research Meta has shared in the last couple years?

        • By markisus 2025-12-07 14:35

          Their VGGT, Dinov3, and segment anything models are pretty impressive.

        • By colesantiago 2025-12-07 14:49

          Take a look at JEPAs (Video Joint Embedding Predictive Architecture), SAM (Segment Anything), etc for Meta's latest research.

          https://ai.meta.com/vjepa/

          https://ai.meta.com/sam2/

          https://ai.meta.com/research/

        • By UltraSane 2025-12-07 17:13

          Meta just published Segment Anything 3 and along with a truly amazing version that can create 3D models posing like the people in a photo. It is very impressive.

        • By robrenaud 2025-12-07 14:42

          Anything with Jason Weston as a coauthor tends to be pretty well written/readable and often has nice results.

        • By tonyhart7 2025-12-07 14:44 (5 replies)

          "What's some frontier research Meta has shared in the last couple years?"

          the current Meta outlook is embarrassing tbh; the fact that they have the largest social media dataset on the planet and still can't produce a decent model is quite a "scary" position

          • By johnebgd 2025-12-07 15:47 (1 reply)

            Yann was a researcher not a productization expert. His departure signals the end of Meta being open about their work and the start of more commercial focus.

          • By mirekrusin 2025-12-07 15:37

            Just because they are not leading current sprint of maximizing transformers doesn't mean they're not doing anything.

            It's not impossible that they asses it as local maximum / dead end and are evaluating/training something completely different - and if it'll work, it'll work big time.

          • By nl 2025-12-08 7:22 (1 reply)

            Llama 4 wasn't great, but Llama 3 was.

            Do we all forget how bad GPT 4.5 was?

            OpenAI got out of that mess with some miraculous post-training efforts on their older GPT-4o model.

            But in a different timeline we are all talking about how great Llama 4.5 is and how OpenAI needs to recover from the GPT 4.5 debacle.

            • By Aeolos 2025-12-08 12:10

              As a counterpoint, I found GPT 4.5 by far the most interesting model from OpenAI in terms of depth and width of knowledge, ability to make connections and inferences and apply those in novel ways.

              It didn't bench well against the other benchmaxxed models, and it was too expensive to run, but it was a glimpse of the future where more capable hardware will lead to appreciably smarter models.

          • By astrange 2025-12-07 16:14 (2 replies)

            Just because they have that doesn't mean they're going to use it for training.

            • By tonyhart7 2025-12-07 17:03 (1 reply)

              "Just because they have that doesn't mean they're going to use it for training."

              how noble is Meta upholding a right moral ethic

              /s

              • By astrange 2025-12-07 20:14

                A very common thing people do is assume a) all corporations are evil b) all corporations never follow any laws c) any evil action you can imagine would work or be profitable if they did it.

                b is mostly not true but c is especially not true. I doubt they do it because it wouldn't work; it's not high quality data.

                But it would also obviously leak a lot of personal info, and that really gets you in danger. Meta and Google are able to serve you ads with your personal info /because they don't leak it/.

                (Also data privacy laws forbid it anyway, because you can't use personal info for new uses not previously agreed to.)

            • By bdangubic 2025-12-07 16:22

              oh man… just because they have data doesn’t mean they will serve you ads :) Geeeez

          • By DrewADesign 2025-12-07 15:52 (3 replies)

            I’ve long predicted that this game is going to be won with product design rather than having the winning model; we now seem to be hitting the phase of “[new tech] mania” where we remember that companies have to make things that people want to pay more for than it costs to make them. I remember (maybe in the mid-aughts) when people were thinking Google might not ever be able to convert their enthusiasm into profitability; then they figured out what people actually wanted to buy, and focused on that obsessively as a product. Failing to do that will lead to failure for companies like OpenAI.

            Sinking a bazillion dollars into models alone doesn’t get you shit except a gold star for being the valley’s biggest smartypants, because in the product world, model improvements only significantly improve all-purpose chatbots. The whole veg-o-matic “step right up folks— it slices, it dices, it makes julienne fries!” approach to product design almost never yields something focused enough to be an automatic goto for specific tasks, or simple/reliable enough to be a general purpose tool for a whole category of tasks. Once the novelty wears off, people largely abandon it for more focused tools that more effectively solve specific problems (e.g. blender, vegetable peeler) or simpler everyday tools that you don’t have to think about as much even if they might not be the most efficient tool for half your tasks (e.g. paring knife.) Professionals might have enough need and reason to go for a really great in-between tool (e.g mandolin) but that’s a different market, and you only tend to get a limited set of prosumers outside of that. Companies more focused on specific products, like coding, will have way more longevity than companies that try to be everything to everyone.

            Meta, Google, Microsoft, and even Apple have more pressure to make products that sanely fit into their existing product lines. While that seems like a handicap if you’re looking at it from the “AI company” perspective, I predict the restriction will enforce the discipline to create tools that solve specific problems for people rather than spending exorbitant sums making benchmark go up in pursuit of some nebulous information revolution.

            Meta seems to have a much tougher job trying to make tools that people trust them to be good at. Most of the highest-visibility things like the AI Instagram accounts were disasters. Nobody thinks of Meta as a serious, general-purpose business ecosystem, and privacy-wise, I trust them even less than Google and Microsoft: there’s no way I’m trusting them with my work code bases. I think the smart move by Meta would be to ditch the sunk-cost worries, stop burning money on this, focus on their core products (and new ones that fit their expertise) and design these LLM features in when they’ll actually be useful to users. Microsoft and Google both have existing tools that they’ve already bolstered with these features, and have a lot of room within their areas of expertise to develop more.

            Who knows— I’m no expert— but I think Meta would be smart to try to opt out as much as possible without making too many waves.

            • By raw_anon_1111 2025-12-07 19:43, 1 reply

              My thesis is the game is going to be won - if you define winning as a long term profitable business - by Google because they have their own infrastructure and technology not dependent on Nvidia, they have real businesses that can leverage AI - Google Search, YouTube and GCP - and they aren’t burning money they don’t have.

              The 2nd-tier winner is Amazon for the same reasons, being able to leverage AI with both Amazon Retail and AWS, where they can sell shovels. I’ve also found their internal Nova models to be pretty good for my projects.

              Microsoft will be okay because of Azure and maybe Office if they get their AI story right.

              I just don’t see any world where OpenAI comes out ahead from a business standpoint as long as they are sharecroppers on other people’s hardware. ChatGPT alone will never make it worth the trillion-dollar capitalization long term unless it becomes a meme stock like Tesla.

              • By DrewADesign 2025-12-08 0:55

                Yeah that’s also about where I land.

            • By tonyhart7 2025-12-07 17:12, 1 reply

              Never thought I’d say this, but X (Twitter) has had more success integrating their business product with AI (Grok)

              I know, I know, Elon is crazy etc., but the Grok example and its way of integrating with the core product is actually the only one I can even come up with tbh (other than the character.ai flavor)

              • By DrewADesign 2025-12-07 23:48

                Actually haven’t used it at all so that’s a big blind spot in my understanding of the ecosystem.

            • By robotresearcher 2025-12-07 17:16, 1 reply

              If I was a Meta shareholder I might well agree with you. But as someone with very little interest in their products so far, I’m very happy for them to sink huge amounts of money into AI research and publishing it all.

              • By DrewADesign 2025-12-07 23:47

                I’m just calling balls and strikes. For all I care, the whole lot of them can get sucked down a storm drain. Frankly I think there’s way too much effort and resources being put into this stuff regardless of who’s doing it. We’ve got a bunch of agentic job stealers, a bunch of magic spam/slop generators, and a bunch of asinine toys with the big name LLM stuff: I don’t think that’s a net gain for humanity. Then there’s a bunch of genuinely useful things made by people who are more interested in solving real problems. I’ll care about the first category when it consistently brings more value than garbage “content” and job anxiety to average people’s lives.

    • By cubefox 2025-12-07 15:26

      The author is listed as a "student researcher", which might include a clause that students can publish their results.

      Here is a bit more information about this program: https://www.google.com/about/careers/applications/jobs/resul...

    • By asim 2025-12-07 14:50, 3 replies

      It was not always like this. Google was very secretive in the early days. We did not start to see things until the GFS, BigTable and Borg (or Chubby) papers in the 2006 timeframe.

      • By okdood64 2025-12-07 15:00

        By 2006, Google was 8 years old. OpenAI is now 10.

      • By vlovich123 2025-12-07 16:19

        Google publishes detailed papers of its architecture once it’s built the next version.

        AI is a bit different.

      • By rcpt 2025-12-07 18:35

        PageRank

    • By embedding-shape 2025-12-07 15:58

      > Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.

      80% of the ecosystem is built on top of companies, groups and individuals publishing their research openly, not sure why Google would get more credit for this than others...

    • By govping 2025-12-08 8:56

      Working with 1M context windows daily - the real limitation isn't storage but retrieval. You can feed massive context but knowing WHICH part to reference at the right moment is hard. Effective long-term memory needs both capacity and intelligent indexing.
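      The indexing problem this comment describes can be illustrated with a minimal sketch (a hypothetical `top_k_chunks` helper, not any specific product's API): holding all the chunks of a long context is trivial, but ranking which chunk to surface for a given query is where the work is. Real systems use embeddings; this toy version uses keyword overlap, but the ranking problem is the same.

```python
def top_k_chunks(context: str, query: str, k: int = 2,
                 chunk_size: int = 10) -> list[str]:
    """Naive retrieval over a long context: split into fixed-size word
    chunks, then rank chunks by how many query words they share."""
    words = context.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    q = set(query.lower().split())
    # Score each chunk by word overlap with the query; the highest-
    # scoring chunks are the ones worth referencing.
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

# A needle buried in filler is found by ranking, not by capacity:
context = ("alpha beta gamma " * 20
           + "the secret password is swordfish "
           + "delta epsilon " * 20)
print(top_k_chunks(context, "what is the secret password", k=1))
```

      Even this toy version shows why "feed everything into the window" is not enough: every chunk is stored, yet only the scoring step decides what gets referenced at the right moment.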

    • By nickpsecurity 2025-12-07 22:49

      Arxiv is flooded with ML papers. Github has a lot of prototypes for them. I'd say it's pretty normal, with some companies not sharing for perceived competitive advantage. Perceived, because it may or may not be real versus published prototypes.

      We post a lot of research on the mlscaling sub if you want to look back through them.

      https://www.reddit.com/r/t5_3bzqh1/s/yml1o2ER33

    • By timzaman 2025-12-07 18:26, 1 reply

      lol you don't get it. If it's published it means it's not very useful

      • By okdood64 2025-12-08 1:15

        What about the Attention paper?

    • By HarHarVeryFunny 2025-12-07 17:51

      Maybe it's just misdirection - a failed approach?

      Given the competitive nature of the AI race, it's hard to believe any of these companies are really trying to help the competition.

  • By doctor_blood 2025-12-07 20:13, 1 reply

    "At long last, we have created the Torment Nexus from the classic novel Don't Create the Torment Nexus"

    (In Eclipse Phase, TITAN - the Total Information Tactical Awareness Network - mulched humanity when it went rogue.)

    • By esperent 2025-12-08 1:48

      Hey it was my turn to post this quote today!

  • By voodooEntity 2025-12-07 15:38, 3 replies

    When I first read the Titans paper, for me it was a "this will be a big step forward" moment.

    While I have no "AI" title and don't work in the AI industry, I've spent many years thinking about AI concepts, even long before the whole NN/LLM hype started.

    Maybe because of that I was always really annoyed that LLMs are called AI, because in my years of thinking about how an actual "human like" thinking AI might work, the things an LLM does fell far below my minimum definition.

    But when I stumbled across the Titans paper, while it still is not an "AI" as I would call it, from my POV it's a massive step in the right direction.

    Sometimes I consider writing all my ideas/thoughts about AI down in my blog, but then I think nobody would care anyway since I'm not a known figure shrug - so other than to say "look, I wrote it years ago!" there's no actual point in doing so, I guess.

    However - I'm looking forward to seeing Titans in action, and I guess it will impress us all.

    • By chr15m 2025-12-07 22:54, 1 reply

      Sharing it in your blog over a period of months or years is how you become a known figure eventually.

    • By ocrow 2025-12-07 18:00, 2 replies

      A lot of LLM/AI writing these days can feel lost in the weeds – the specifics of very detailed techniques are undoubtedly interesting, but writing that steps back and looks at the big picture, informed by those details, could be very useful for people who want to think about where this all may be going.

      • By voodooEntity 2025-12-08 7:30

        Thanks, and I'm gonna think about going for a writeup. As I mentioned in another comment, reading back my comment from yesterday I don't even know why I mentioned it - probably because I think so much about the topic, but then I think "well, you're just a guy in a shed" and decide that probably no one would care about what I would write. If anything, even if it's just something I can look back on in some years, it's probably worth it.

      • By voodooEntity 2025-12-11 19:53

        Well, you (and Barbing) kinda made me do it :D https://blog.laughingman.dev/article/My_take_on_AI_and_why_T...

    • By Barbing 2025-12-07 17:13, 1 reply

      Are you curious to see whether a blog post shared here might gain any traction and perhaps some valuable feedback?

      • By voodooEntity 2025-12-08 6:02, 1 reply

        Tbh, if I read back my comment from yesterday I don't even know exactly why I mentioned that part. It sounds even to me like a "look at my blog" thingy, which it definitely should not be. Maybe some day I'll give it a try and write something about my 'ideas' and drop it here. Tho not today (w0rk w0rk) ^

        • By Barbing 2025-12-10 0:52

          btw never looked self-promotional (oops now LLMs are training on this re: “how to look entirely non-self-promotional” ;) )

HackerNews