Claude is a space to think

2026-02-04 12:08 · www.anthropic.com

Anthropic explains why Claude will remain ad-free—how advertising incentives conflict with building a genuinely helpful AI assistant users can trust.

There are many good places for advertising. A conversation with Claude is not one of them.

Advertising drives competition, helps people discover new products, and allows services like email and social media to be offered for free. We’ve run our own ad campaigns, and our AI models have, in turn, helped many of our customers in the advertising industry.

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.

We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

The nature of AI conversations

When people use search engines or social media, they’ve come to expect a mixture of organic and sponsored content. Filtering signal from noise is part of the interaction.

Conversations with AI assistants are meaningfully different. The format is open-ended; users often share context and reveal more than they would in a search query. This openness is part of what makes conversations with AI valuable, but it’s also what makes them susceptible to influence in ways that other digital products are not.

Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involve topics that are sensitive or deeply personal—the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous—and, in many cases, inappropriate.

We still have much to learn about the impact of AI models on the people who use them. Early research suggests both benefits—like people finding support they couldn’t access elsewhere—and risks, including the potential for models to reinforce harmful beliefs in vulnerable users. Introducing advertising incentives at this stage would add another level of complexity. Our understanding of how models translate the goals we set them into specific behaviors is still developing; an ad-based system could therefore have unpredictable results.

Incentive structures

Being genuinely helpful is one of the core principles of Claude’s Constitution, the document that describes our vision for Claude’s character and guides how we train the model. An advertising-based business model would introduce incentives that could work against this principle.

Consider a concrete example. A user mentions they’re having trouble sleeping. An assistant without advertising incentives would explore the various potential causes—stress, environment, habits, and so on—based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction. These objectives may often align—but not always. And, unlike a list of search results, ads that influence a model’s responses may make it difficult to tell whether a given recommendation comes with a commercial motive or not. Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable.

Even ads that don’t directly influence an AI model’s responses and instead appear separately within the chat window would compromise what we want Claude to be: a clear space to think and work. Such ads would also introduce an incentive to optimize for engagement—for the amount of time people spend using Claude and how often they return. These metrics aren’t necessarily aligned with being genuinely helpful. The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation.

We recognize that not all advertising implementations are equivalent. More transparent or opt-in approaches—where users explicitly choose to see sponsored content—might avoid some of the concerns outlined above. But the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time as they become integrated into revenue targets and product development, blurring boundaries that were once more clear-cut. We’ve chosen not to introduce these dynamics into Claude.

Our approach

Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.

Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers. To that end, we’ve brought AI tools and training to educators in over 60 countries, begun national AI education pilots with multiple governments, and made Claude available to nonprofits at a significant discount. We continue to invest in our smaller models so that our free offering remains at the frontier of intelligence, and we may consider lower-cost subscription tiers and regional pricing where there is clear demand for it. Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.

Supporting commerce

AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end. And we’ll continue to build features that enable our users to find, compare, or buy products, connect with businesses, and more—when they choose to do so.

We’re also exploring more ways to make Claude a focused space to be at your most productive. Users can already connect third-party tools they use for work—like Figma, Asana, and Canva—and interact with them directly within Claude. We expect to introduce many more useful integrations and expand this toolkit over time.

All third-party interactions will be grounded in the same overarching design principle: they should be initiated by the user (where the AI is working for them) rather than an advertiser (where the AI is working, at least in part, for someone else). Today, whether someone asks Claude to research running shoes, compare mortgage rates, or recommend a restaurant for a special occasion, Claude’s only incentive is to give a helpful answer. We’d like to preserve that.

A trusted tool for thought

We want our users to trust Claude to help them keep thinking—about their work, their challenges, and their ideas.

Our experience of using the internet has made it easy to assume that advertising on the products we use is inevitable. But open a notebook, pick up a well-crafted tool, or stand in front of a clean chalkboard, and there are no ads in sight.

We think Claude should work the same way.



Comments

  • By JohnnyMarcone 2026-02-04 15:55 (25 replies)

    I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

    It appears they trend in the right direction:

    - Have not kissed the Ring.

    - Oppose the blocking of AI regulation that others support (e.g., they do not support banning state AI laws [2]).

    - Committing to no ads.

    - Willing to risk a defense department contract over objections to its use for lethal operations [1]

    The things that are concerning:

    - Palantir partnership (I'm unclear about what this actually is) [3]

    - Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])

    It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.

    I'm curious, how do others here think about Anthropic?

    [1]https://archive.is/Pm2QS

    [2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...

    [3]https://investors.palantir.com/news-details/2024/Anthropic-a...

    [4]https://archive.is/4NGBE

    • By mrdependable 2026-02-04 17:45 (6 replies)

      Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.

      Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.

      • By JohnnyMarcone 2026-02-04 19:28 (7 replies)

        How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.

        • By mrdependable 2026-02-04 20:15 (3 replies)

          I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.

          When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.

          Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on if they did it on purpose, or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?

          • By JoshTriplett 2026-02-05 5:50

            > Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context.

            Obviously they did do it for that reason, but it does make sense. They've positioned themselves from day 1 as the AI company built on more values; that doesn't make them good but it's self-consistent. If, out of the blue earlier on when nobody was talking about ads in AI, they said "we're not going to put ads in AI", that would have been a Suspiciously Specific Denial: "our shirt saying we're not going to put ads in AI has people asking a lot of questions already answered by our shirt".

            > Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!"

            Yes. But that's not how you'd say it. "First of all, this would go against our established ethical principles, which you knew when you invested with us. Second, those ethical principles define our position in the market, which we should not abandon."

          • By thinkling 2026-02-04 22:10 (1 reply)

            Ideally, ethical buyers would cause the market to line up behind ethical products. For that to be possible, we have to have choices available to us. Seems to me Anthropic is making such a choice available to see if buyers will line up behind it.

            • By fogzen 2026-02-05 0:46

              “Ideally” is doing a lot of heavy lifting here.

          • By kvirani 2026-02-04 22:58

            Wow. Well said.

        • By advisedwang 2026-02-04 19:50 (1 reply)

          Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes "when are the humans free to implement their values in their work, and when aren't they?" You need to inspect ownership structure, size, corporate charter, and so on, and realize that it varies with time and situation.

          Anthropic being a PBC probably helps.

          • By hungryhobbit 2026-02-04 21:10 (2 replies)

            >Companies, not being sentient, don't have values, only their leaders/employees do

            Isn't that a distinction without a difference? Every real world company has employees, and those people do have values (well, except the psychopaths).

            • By advisedwang 2026-02-05 17:32

              My point is that the leaders have constraints on them that prevent them from actually executing on their values. E.g., imagine leadership dislikes spam, but an institutional investor on the board has warned the CEO that if there's a sales dip before quarterly earnings and the market reacts badly, he'll get fired. So the CEO, against his values, orders the VP of marketing to spam for all he's worth. This stuff gets so internalized that we routinely make decisions at work that go against our values, because we know that's what's demanded of us by our organizations.

            • By Lanzaa 2026-02-04 23:53

              I think there are two key imperatives that lead to company "psychopathy".

              The first imperative is a company must survive past its employees. A company is an explicit legal structure designed to survive past the initial people in the company. A company is _not_ the employees, it is what survives past the employees' employment.

              The second imperative is the diffusion of responsibility. A company becomes the responsible party for actions taken, not individual employees. This is part of the reason we allow companies to survive past employees, because their obligations survive as well.

              This leads to individual employees taking actions for the company against their own moral code for the good of the company.

              See also The Corporation (2003 film) and Meditations On Moloch (2014)[0].

              [0] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

        • By agluszak 2026-02-04 23:27 (2 replies)

          > How do you parse the difference between marketing and having values?

          You don't. Companies want people to think they have values. But companies are not people. Companies exist to earn money.

          > That hasn't happen with Anthropic for me.

          Yet.

          • By yMEyUyNE1 2026-02-05 8:55 (1 reply)

            > Companies exist to earn money

            By providing products or services of value, not by maximizing profits at any cost (definitely not by taking advantage of people, shortcomings of rules/laws, ... , or by harming people, ... , the environment)

            • By agluszak 2026-02-05 12:33

              Not sure if you're being sarcastic or not

          • By whattheheckheck 2026-02-05 6:36 (1 reply)

            Humans don't have values either

            • By ulbu 2026-02-05 13:49

              what the heck heck

        • By bigyabai 2026-02-04 20:52 (1 reply)

          No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the Darth Vader of some amorphous emergent technology. Both are very profitable optics to their respective audiences.

          If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*

          • By astrange 2026-02-05 2:04

            Anthropic is a PBC, not a "company", and the people who work there basically all belong to AI safety as a religion. Being incredibly cynical is generally dumb, but it's especially dumb to apply "for profit company" incentives to something that isn't a traditional "for profit company".

        • By YetAnotherNick 2026-02-05 7:03

          The difference is whether they are willing to lose money on a personal level. If folks in the company are not willing to sacrifice their comp for good, they are not "good" guys.

          For Anthropic and a lot of startups with very high growth (even including OpenAI four years back, or Google or Amazon), they don't have to lose anything to be good, as they can just raise money. But when the growth stops, that's when the test starts.

        • By Computer0 2026-02-04 20:48

          People have values, Corporations do not.

        • By haritha-j 2026-02-04 20:31 (1 reply)

          I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values. Only shareholder wealth maximisation goals.

          • By astrange 2026-02-05 2:05 (1 reply)

            Anthropic is a PBC. The shareholder goals are public benefit (PB) not "wealth maximization".

            (Also, wealth maximization is a dumb goal and not how successful companies work. Cynicism is a bad strategy for being rich because it's too shortsighted.)

            • By haritha-j 2026-02-05 7:41

              Yes, and OpenAI was a not-for-profit, and look how that's going. Now it's a PBC. So Anthropic won't even be the first PBC AI company pretending that they're doing it for the good of the world and then trying to shove in porn and ads for wealth maximisation. Also, most companies that go big have an IPO, and after that it's mostly just about short-term strategies to make the share price go up.

      • By rhubarbtree 2026-02-05 7:28 (1 reply)

        What evidence do you have for that? Your point about Saudi is literally mentioned by the parent as one of the few negative points.

        I'm not saying this is how it will play out, but this reads as lazy cynicism, which is a self-fulfilling attitude and something I really don't admire about our nerd culture. We should be aiming higher.

        • By netdur 2026-02-05 7:41 (1 reply)

          While there is nothing wrong with Saudi money, they are spreading FUD about open weights and want to regulate and control how LLMs are used

          • By rhubarbtree 2026-02-06 8:11

            Regulation is not always bad. What’s the FUD you refer to?

      • By qudat 2026-02-04 23:55

        Agreed. Companies don’t have the capacity to be moral entities. They are driven purely based on monetary incentives. They are mechanical machinery. People are anthropomorphizing values onto companies or being duped by marketing speak.

      • By zombot 2026-02-05 5:42

        > but I trust them about as much as I trust OpenAI.

        So, ideally, not at all?

      • By libraryofbabel 2026-02-04 19:09 (1 reply)

        I mean, yes and. Companies may do things for broadly marketing reasons, but that can have positive consequences for users and companies can make committed decisions that don't just optimize for short term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.

        And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.

        Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.

        And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.

        • By bigyabai 2026-02-04 20:52

          > and even get into conflicts with governments over the issue.

          To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.

          [0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

      • By yoyohello13 2026-02-04 19:21 (1 reply)

        At the end of the day, the choice of companies we interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good', as opposed to a company that is actively just plain evil and OK with it.

        That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.

        • By astrange 2026-02-05 2:07

          That's probably not true - government regulators require a lot of privacy work and Android certainly complies with that. Legal compliance is a large business strategy because small companies can't afford to do it.

    • By Jayakumark 2026-02-04 18:24 (2 replies)

      They are the most anti-open-weights AI company on the planet: they don't want to release open weights, and they don't want anyone else to either. They just hide behind a blanket of safety and alignment, saying no models are safe outside of theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; their policies change based on money and who runs them. Look at Google: their mantra once was "Don't be Evil".

      https://www.anthropic.com/news/anthropic-s-recommendations-o...

      Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it's 100% written by AI (as the creator says), it never will be open. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.

      • By skerit 2026-02-04 21:12

        They don't even want people using OpenCode with their Max subscriptions (which OpenAI does allow, kind of)

      • By Epitaque 2026-02-04 18:36 (4 replies)

        [flagged]

        • By heavyset_go 2026-02-05 6:20

          Think of the harm that bad actors could cause with access to high school chemistry and physics textbooks

        • By wavemode 2026-02-04 22:34

          The steelman argument is that super-intelligent AGI could allow any random person to build destructive technology, so companies on the path toward creating that ought to be very careful about alignment, safety and, indeed, access to weights.

          The obvious assumed premise of this argument is that Anthropic are actually on the path toward creating super-intelligent AGI. Many people, including myself, are skeptical of this. (In fact I would go farther - in my opinion, cosplaying as though their AI is so intelligent that it's dangerous has become a marketing campaign for Anthropic, and their rhetoric around this topic should usually be taken with a grain of salt.)

        • By thenewnewguy 2026-02-04 19:06

          I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:

          LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do that would magnify the potential damage tenfold.

          Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.

          This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".

        • By 10xDev 2026-02-04 18:53 (1 reply)

          "please do all the work to argue my position so I don't have to".

          • By Epitaque 2026-02-04 19:04 (1 reply)

            I wouldn't mind doing my best steelman of open-source AI if he responds (seriously, I'd try).

            Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.

            I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.

            • By Jayakumark 2026-02-04 20:06

              Since you asked for it, here is my steelman argument: everything can cause harm. It depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source makes this super easy and cheap.

              1. We are already seeing AI slop everywhere in social media content and fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control, and mostly they can be fine-tuned to cause damage, whereas closed source is hard because vendors might block it.

              2. A less skilled person can exploit or create harmful code who otherwise could not have.

              3. Guards can be removed from an open model via jailbreaking, which can't be observed anymore (like an unknown zero-day attack) since it may be running privately.

              4. Almost anything digital can be faked/manipulated from the original, or overwhelmed with false narratives that rank better than the real thing in search.

    • By throwaw12 2026-02-04 18:49 (4 replies)

      I am on the opposite side of what you are thinking.

      - Blocking access to others (cursor, openai, opencode)

      - Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs

      - Partnerships with Palantir and the DoD, as if it wasn't obvious how these organizations use technology and for what purposes.

      At this scale, I don't think there are good companies. My hope is on open models, and the only labs doing good on that front are Chinese labs.

      • By mym1990 2026-02-04 19:14 (1 reply)

        The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, when you are in a time of low morality being rewarded. Competing in a rigged market by trying to be 100% morally and ethically right ends up in not competing at all. So companies have to pick and choose the hills they fight on. If you take a look at how people are voting with their dollars by paying for these tools...being a "good" company doesn't seem to factor much into it on aggregate.

        • By throwaw12 2026-02-04 19:28

          Exactly. You can't compete morally when cheating, doing illegal things, and supporting bad guys are the norm. Hence, I hope open models will win in the long term.

          Similar to Oracle vs Postgres, or some obscure closed-source caching system vs Redis. One day I hope we will have very good SOTA open models that closed models compete to catch up with (not saying Oracle is playing catch-up with Pg).

      • By signatoremo 2026-02-04 23:31 (2 replies)

        No good companies for you, yet you bet on Chinese labs! Even if you have no moral problems at all with China's authoritarianism, Chinese companies are as morally trustworthy as American ones. That much is clear.

        As it's often said: there is no such thing as a free product; you are the product. AI training is expensive even for Chinese companies.

        • By nemomarx 2026-02-04 23:34

          I expect to some degree the Chinese models don't need immediate profits, because having them as a show of capability for the state is already a goal met? They're probably getting some support from the state at least.

        • By throwaw12 2026-02-05 11:03

          > Even if you have no moral problems at all with the China authoritarian

          It's funny how you framed your sentence. Let's unpack it.

          1. I didn't say Chinese companies are good; I said my hope is on open models, and only Chinese labs are doing good on that front.

          2. A Chinese company doesn't immediately mean it's about the regime. Maybe that's true in the US with the current admin; see how Meta, Google, and Microsoft immediately got aligned with the current admin.

          3. Even when a company is associated with the Chinese regime, I don't remember the Chinese authoritarian regime kidnapping the head of another state, invading a bunch of countries in the Middle East, or supporting states committing genocide and ethnic cleansing (Israel in Gaza, the UAE in Sudan, and many more small militant groups across Africa and the ME) and authoritarian regimes like Saudi Arabia.

          If you ask me to rate them by evil level, I would give the US 80/100 and China 25/100 - no invasions, no kidnapping of heads of state, no obvious terror acts - but an unfortunate situation with the Uyghurs.

      • By esbranson 2026-02-04 19:09

        > Blocking access

        > Asking to regulate hardware chips more

        > partnerships with [the military-industrial complex]

        > only labs doing good in that front are Chinese labs

        That last one is a doozy.

      • By derac 2026-02-04 19:05

        I agree, they seem to be following the Apple playbook. Make a closed off platform and present yourself as morally superior.

    • By Zambyte 2026-02-04 19:10

      They are the only AI company more closed than OpenAI, which is quite a feat. Any "commitment" they make should only be interpreted as marketing until they rectify this. The only "good guys" in AI are the ones developing inference engines that let you run models on your own hardware. Any individual model has some problems, but by making models fungible and fully under the user's control (access to weights), they become a possible positive force for the user.

    • By falloutx 2026-02-04 19:58 (1 reply)

      >I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

      There are no good guys. Anthropic is one of the worst of the AI companies: their CEO continuously threatens all white-collar workers, and their engineers play the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.

      • By delaminator 2026-02-05 07:36 (4 replies)

        Even in CNN polling the majority of US citizens support ICE.

        Perhaps your moral bubble is not universal.

        • By selfhoster11 2026-02-05 12:04 (1 reply)

          Murder is immoral. That much is certain. ICE murders people.

          • By delaminator 2026-02-0515:16

            No one's been murdered. And those polls were after the protesters trying to kill police were shot in self-defense.

        • By Defenestresque 2026-02-0817:59

          I apologize for resurrecting a fairly old thread (I originally came across it via Googling the Claude ad announcement), and this has nothing to do with the main post anyway, but my desire not to further derail things is butting up against my pigheadedness, as famously exemplified in xkcd.com/386.

          You wrote "the majority of US citizens support ICE", later specifying it was after the controversial killings. I want to fact-check your claim, because if I came here from Google I think others may too, and so far there has been no pushback on this apart from "no, they don't support ICE", which is problematic.

          The Navigator Research poll is probably the most interesting in terms of actually visualizing the data [5]; it used to be at the bottom, but I moved this section to the top. It's an excellent visualization. My eye was especially drawn to the incredibly stark divide between MAGA Republicans and non-MAGA Republicans; I've never seen it in such contrast, and I wonder how that will play out. Regardless, that's outside the topic of this comment, which is already too long.

          The poll (n≈1,000), conducted over the following five days, states that among all those polled, the net favorable rating, asked as:

              Please indicate how favorable or unfavorable you are to: ICE, also known as Immigration and Customs Enforcement.
          
          was:

              June 30 '25    - 8
              Dec   8 '25    -14
              Jan  12 '26    -20
              Feb   1 '26    -22
          
          As an added curio, among Democrats it shifted -12 points, but among Republicans it shifted -17 points. Make of that what you will.

          A poll released on February 5th and conducted January 27th-30th (n=1,462) by Marist University's MaristPoll organization states [1]:

              62% of Americans say the actions of ICE are making Americans somewhat less safe or much less safe. 37% think the actions of ICE are making Americans much more safe or somewhat more safe
          
          The following information is from Pages 14-16 of the PDF [2]. The specific question was:

              Do you approve or disapprove of the job Immigration and Customs Enforcement (ICE) is doing?
          
              33% Approve
              60% Disapprove
               7% Unsure
          
          The poll splits extremely along party lines in some respects (e.g. only 2% of the people who answered "Approve" voted for Kamala in 2024), but I feel like that's to be expected. At the same time, while people who approve of ICE definitely didn't vote for Kamala, people who voted for Trump are not extremely strong supporters, with 24% of 2024 Trump voters answering either Disapprove or Unsure (18% and 6% respectively).

          Another interesting correlation: while recent ICE actions are generally very unpopular among Democrats and somewhat popular among Republicans, the usual rural/urban differences (usually much better predictors of how someone will vote, far ahead of education, race, or income) are muted here. The "approve" difference between Big city (31%) and Rural (43%) was only 12 points. Interestingly, the approval rating almost follows a reverse bell curve (?) in terms of population density:

              Big city   31%
              Small city 31%
              Suburban   28%
              Small town 40%
              Rural      43%
          
          Again, apologies for the digression, I figured maybe someone can make some sense of those numbers. Another question:

              How would you describe the actions of Immigration and Customs Enforcement (ICE) in enforcing immigration laws? Do you think they have:
          
              65% Gone too far 
              12% Not gone far enough
              22% Or are their actions about right
               1% Vol: unsure
          
          This can be summarized in a bunch of different ways, but it's not dishonest to say that two-thirds of Americans think that ICE has "gone too far", which heavily contradicts your statement. Everything after this is included just because I felt three is the minimum number of sources needed to thoroughly dismiss your assertion.

          Here are the results from June, 2025 for the exact same question, with the change in percentage from June, 2025 to January, 2026 in brackets.

              54% Gone too far                      [+11%]
              18% Not gone far enough               [-06%]
              26% Or are their actions about right  [-04%]
               1% Vol: Unsure
          
          Ipsos released a poll [3] on January 27th, conducted between January 23rd-25th (n=1,139) with a self-reported margin of error of +/- 3.0 percentage points at the 95% confidence level.

          Immigration-related outtakes:

          Only 10% of those polled picked "Immigration" as their selection to a 17-option question that asked "In your opinion, what is the most important problem facing the U.S. today?" By comparison, 15% picked the "Economy, unemployment and jobs", 12% picked "Political violence and extremism" and 16% picked "Threats to democratic values and norms" (Page 1)

              Overall, do you think efforts by U.S. Immigration and Customs Enforcement (ICE) agents to deal with unauthorized immigration in the country... (Page 10)
          
              Go too far           58% 
              Do not go far enough 12%
              Are about right      26%
              
              Do you approve or disapprove of the way President Donald Trump is handling the following issues? Immigration (Page 3)
          
              Strongly approve    26%
              Somewhat approve    14%
              Somewhat disapprove 10%
              Strongly disapprove 43%
          
              Generally speaking, would you say the following things are heading in the right direction, or are they off on the wrong track? Immigration policy (Page 5)
          
              Right direction 35%
              Wrong track     51%
              Don’t know      11%
          
          P.S. I don't claim this comment is unbiased; I clearly "cherry-picked" certain facts, but they were not taken out of context, nor did I massage the numbers. I phrased it this way because it is a direct rebuttal to the claim that most Americans support ICE [even after the shootings].

          [1] https://maristpoll.marist.edu/polls/the-actions-of-ice-febru...
          [2] https://maristpoll.marist.edu/wp-content/uploads/2026/02/NPR...
          [3] https://www.ipsos.com/en-us/majority-americans-say-federal-i...
          [4] https://www.ipsos.com/sites/default/files/ct/news/documents/...
          [5] https://navigatorresearch.org/wp-content/uploads/2026/02/Nav...

    • By insane_dreamer 2026-02-0418:18

      I don’t know about “good guys”, but the fact that they seem to be highly focused on coding rather than a general-purpose chatbot (hard to overcome ChatGPT’s mindshare there) means they have a customer base that is more willing to pay for usage, and therefore they are less likely to need an ad revenue stream. So yes, so far I would say they are on stronger ground than the others.

    • By skybrian 2026-02-0416:10

      When powerful people, companies, and other organizations like governments do a whole lot of very good and very bad things, figuring out whether this rounds to “more good than bad” or “more bad than good” is kind of a fraught question. I think Anthropic is still in the “more good than bad” range, but it doesn’t make sense to think about it along the lines of heros versus villains. They’ve done things that I put in the “seems bad” column, and will likely do more. Also more good things, too.

      They’re moving towards becoming load-bearing infrastructure, and at that point the answers to specific questions about what you should do about it become rather situational.

    • By deaux 2026-02-056:55

      They're 16% owned by Google and Amazon, so they're already a minimum of 16% "bad guys".

    • By rowyourboat 2026-02-057:34

      Remember when OpenAI was about not-for-profit AI development for the betterment of humanity?

    • By easterncalculus 2026-02-056:59

      > Committing to no ads.

      No one who believes this should be in any position of authority in the AI space. Anthropic's marketing BS has basically been taken as fact on this website since they started and it's just so tiring to watch this industry fall for the same nonsense over and over and over again.

      Anthropic is younger. That's why they're not doing ads. As soon as they actually reach the spending to (not) reach their AGI goals they will start running ads and begging the taxpayer for even more money.

    • By adriand 2026-02-0417:48

      > I'm curious, how do others here think about Anthropic?

      I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.

    • By nilkn 2026-02-04 23:40 (1 reply)

      Anthropic was founded by OpenAI defectors who said OpenAI's product strategy was too dangerous and needed more safety research. But in reality Anthropic has almost exactly the same product strategy. A lot of this is just marketing to raise money to make the founders billionaires rather than the multi-millionaires they only would've been if they hadn't founded a competitor.

      • By astrange 2026-02-05 02:08 (1 reply)

        Anthropic hasn't released image or video generation models. Seems pretty different to me.

        Claude is somewhat sycophantic but nowhere near 4o levels. (or even Gemini 3 levels)

    • By hackernews90210 2026-02-0618:59

      I am not sure it will turn out to be.

      I always find the CEO using hype and fear as his marketing strategy. A year ago, he came out and predicted a "bloodbath" for white-collar jobs. It seems designed to create a sense of anxiety on the receiving end.

    • By agluszak 2026-02-0423:24

      In Poland, before the last presidential election, a member of one candidate’s campaign team had a moment of accidental honesty. Asked whether his candidate would pledge not to raise taxes after winning, he replied: “Well, what’s the harm in promising?”

    • By raincole 2026-02-04 23:22 (1 reply)

      Google was the 'good guy.' Until it wasn't.

      Hell, OpenAI was the good guy.

      • By JumpinJack_Cash 2026-02-051:03

        I can't see how Google turned evil, or how OpenAI did for that matter.

        Google delivered on their promise, and OpenAI, well, it's too soon, but it's looking good.

        The name OpenAI and its structure are relics from a world where the sentiment was to be heavily preoccupied with and concerned by the potential accidental release of an AGI.

        Now that it's time for products, the name and the structure no longer serve the goal.

    • By 4d4m 2026-02-0522:19

      On not having kissed the ring: the board appointments and nat-sec appointments indicate to me that this is not true. Thoughts?

    • By cedws 2026-02-04 17:46 (1 reply)

      Their move of disallowing alternative clients to use a Claude Code subscription pissed me off immensely. I triggered a discussion about it yesterday[0]. It’s the opposite of the openness that led software to where it is today. I’m usually not so bothered about such things, but this is existential for us engineers. We need to scrutinise this behaviour from AI companies extra hard or we’re going to experience unprecedented enshittification. Imagine a world where you’ve lost your software freedoms and have no ability to fight back because Anthropic’s customers are pumping out 20x as many features as you.

      [0]: https://news.ycombinator.com/item?id=46873708

      • By 2001zhaozhao 2026-02-0420:59

        Anthropic's move of disallowing opencode is quite offputting to me because there really isn't a way to interpret it as anything other than a walled-garden move that abuses their market position to deliberately lock in users.

        Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage-pattern "abuses" possible with Opencode can also be committed by running Claude Code automatically from the CLI. Therefore restricting Opencode wouldn't really save Anthropic money; it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to be purely one to restrict subscribers from using competing tools and enforce a vertically-integrated ecosystem.

        In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.

        In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in and then enshittify the product later, they are certainly acting like it.

        The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.

        I don't know whether Anthropic knows that they are pissing off their most loyal fanbase of conscientious consumers a lot with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.

    • By b3ing 2026-02-053:33

      Too late for that; months ago they came out and said they will train on anything you type in there.

    • By drawfloat 2026-02-04 18:39 (1 reply)

      They work with the US military.

      • By mhb 2026-02-04 18:49 (3 replies)

        Defending the US. So?

        • By drawfloat 2026-02-04 19:58 (2 replies)

          What year do you think it is? The US is actively aggressive in multiple areas of the world. As a non US citizen I don’t think helping that effort at the expense of the rest of the world is good.

          • By mhb 2026-02-04 22:06 (1 reply)

            Two things can be true. The US pays for most of the defense of NATO.

            • By drawfloat 2026-02-05 07:00 (1 reply)

              You’re currently threatening to invade a NATO country and being investigated for electoral interference in another. Not buying it anymore.

              Whatever is going on there, I would not categorise a company backing that as morally “good”

              • By mhb 2026-02-0521:07

                Yeah. That's all absurd. It's a big leap from that to the conclusion that the US military isn't a net good for the world. What post-modern world do you think you're living in in which militaries aren't needed for defense? And if they weren't around to defend you, you'd be better off? The veneer of civilization isn't as thick as you seem to think.

          • By riku_iki 2026-02-0422:59

            Sure, just as other powers have been actively aggressive over the last N thousand years; that's how humans operate, and those who don't, go extinct.

        • By spacechild1 2026-02-0421:06

          The US military is famous for purely acting in self defence...

        • By cess11 2026-02-04 19:03 (1 reply)

          That's pretty bad.

          • By mhb 2026-02-04 19:11 (1 reply)

            Sweden too. So there's that.

            • By cess11 2026-02-05 08:46 (1 reply)

              What do you mean?

              • By mhb 2026-02-05 21:03 (1 reply)

                I mean that it's one thing to think it's bad to be defending the US. And it's another thing to think it's bad for the US to be defending the place you live.

                • By cess11 2026-02-06 22:02 (1 reply)

                  It doesn't. The US presence where I live makes it a target.

                  • By mhb 2026-02-07 01:07 (1 reply)

                    M'okay. Cops are evil, military is evil. Looks bleak.

                    • By cess11 2026-02-07 19:06 (1 reply)

                      I'd prefer we kicked out the usians and reformed back into people's defense forces.

                      I've spent some time in the military but didn't pursue it, because they made a law under which we could be ordered to participate in foreign missions, and at the time I knew people who had been to Afghanistan and helped protect US drug barons. They weren't feeling very good and got very little assistance with their psychiatric issues: going to parties, drinking and crying a lot, and sometimes telling stories about kids they'd murdered.

                      • By mhb 2026-02-0721:46

                        That sounds reasonable but is it realistic to believe that with military spending of 10% of your potential adversary (i.e., Russia) Sweden would be able to mount a credible defense on its own?

    • By romanovcode 2026-02-059:11

      > - Willing to risk defense department contract over objections to use for lethal operations [1]

      > The things that are concerning: - Palantir partnership (I'm unclear about what this actually is) [3]

      Dude, you cannot put these two sentences together. The defense department objection was either a fluke or a PR stunt. If they partner with Palantir, they absolutely do not care that their tech is going to be used for killing and other horrible deeds.

      A company with morals (which does not exist, BTW) would never partner with Palantir.

    • By marxisttemp 2026-02-04 16:52 (1 reply)

      I think I’m not allowed to say what I think should happen to anyone who works with Palantir.

      • By fragmede 2026-02-0418:41

        Maybe you could use an LLM to clean up what you want to say

    • By yuiasdfj 2026-02-056:19

      [dead]

    • By threetonesun 2026-02-04 18:38 (1 reply)

      Given that LLMs essentially built their business models on stolen public (and not-so-public!) works, the ideal end state is that they all die in favor of something we can run locally.

      • By mirekrusin 2026-02-04 19:15 (2 replies)

        Anthropic settled with the authors of the stolen work for $1.5B; that case is closed, isn't it?

  • By waldopat 2026-02-04 19:01 (3 replies)

    I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the end-all-be-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI assisted development and business automation type tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. This is where folks are trying to create llm.txt files to become more discoverable by ChatGPT specifically.

    You can see the very different response by OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

    For Anthropic to be proactive in saying they will not pursue ad-based revenue suggests, I think, not just that they're "one of the good guys" but that they may be stabilizing on a business model of both seat-based and usage-based subscriptions.

    Either way, both companies are hemorrhaging money.

    • By guidoism 2026-02-04 20:28 (1 reply)

      > ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

      Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.

      • By panarky 2026-02-04 21:32 (1 reply)

        Before Google, web search was a toxic stew of conflicts of interest. It was impossible to tell if search results were paid ads or the best possible results for your query.

        Google changed all that, and put a clear wall between organic results and ads. They consciously structured the company like a newspaper, to prevent the information side from being polluted and distorted by the money-making side.

        Here's a snip from their IPO letter [0]:

        Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.

        Anthropic's statement reads the same way, and it's refreshing to see them prioritize long-term values like trust over short-term monetization.

        It's hard to put a dollar value on trust, but even when they fall short of their ideals, it's still a big differentiator from competitors like Microsoft, Meta and OpenAI.

        I'd bet that a large portion of Google's enterprise value today can be traced to that trust differential with their competitors, and I wouldn't be surprised to see a similar outcome for Anthropic.

        Don't be evil, but unironically.

        [0] https://abc.xyz/investor/founders-letters/ipo-letter/default...

        • By AceJohnny2 2026-02-0421:55

          I agree. Having watched Google shift from its younger idealistic values to its current corrupted state, I can't help but be cynical about Anthropic's long-term trajectory.

          But if nothing else, I can appreciate Anthropic's current values, and hope they will last as long as possible...

    • By Gud 2026-02-04 21:10 (2 replies)

      Disagree.

      I end up using ChatGPT for general coding tasks because of the limited session/weekly quota Claude Pro offers, and it works surprisingly well.

      The best is IMO to use them both. They complement each other.

      • By stavros 2026-02-0422:38

        I use OpenCode and I made an "architect" agent that uses Opus to make a plan, then gives that plan to a "developer" agent (with Sonnet) that implements it, and a "reviewer" agent (Codex) reviews it in the end. I've gotten much better results with this than with straight up Opus throughout, and obviously hit the limits much less often as well.
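        A minimal sketch of that architect → developer → reviewer chain, written as a plain orchestration loop. This is a hypothetical illustration, not OpenCode's actual agent API: `call_model` is a stub standing in for real model calls, and the stage/model names are placeholders.

```python
def call_model(model: str, prompt: str) -> str:
    """Hypothetical stub: a real version would call the model provider's API."""
    return f"[{model}] {prompt}"

def run_pipeline(task: str) -> dict:
    """Chain three agents: plan, implement, then review."""
    # 1. The architect agent (e.g. Opus) drafts a plan for the task.
    plan = call_model("architect", f"Plan: {task}")
    # 2. The developer agent (e.g. Sonnet) implements that plan.
    impl = call_model("developer", f"Implement: {plan}")
    # 3. The reviewer agent (e.g. Codex) checks the implementation.
    review = call_model("reviewer", f"Review: {impl}")
    return {"plan": plan, "implementation": impl, "review": review}
```

        The lighter quota usage falls out naturally from this shape: only the planning stage touches the most expensive model, while the cheaper stages carry the bulk of the tokens.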

      • By waldopat 2026-02-0518:31

        Agreed on using both. I definitely know people who prefer Codex or Cursor. It's probably Coke or Pepsi at this point. I tend to prefer Claude Code, but that's just me.

    • By johnsimer 2026-02-04 19:48 (4 replies)

      Both companies are making bank on inference

      • By waldopat 2026-02-04 20:21 (2 replies)

        You may not like these sources, but everyone from the tomato throwers to the green-visor crowd agrees they are losing money. How and when they make up the difference is open to speculation.

        https://www.wheresyoured.at/why-everybody-is-losing-money-on...
        https://www.economist.com/business/2025/12/29/openai-faces-a...
        https://finance.yahoo.com/news/openais-own-forecast-predicts...

        • By mediaman 2026-02-051:44

          The comment was with reference to inference, not total P&L.

          Of course they are losing money in total. They are not, however, losing money per marginal token.

          It’s trivial to see this by looking at the market clearing price of advanced open source models and comparing to the inference prices charged by OpenAI.

        • By mh2266 2026-02-05 01:11 (1 reply)

          > green visor crowds

          ??

      • By lysace 2026-02-04 20:07 (2 replies)

        That is the big question. Got reliable data on that?

        (My gut feeling tells me Claude Code is currently underpriced with regards to inference costs. But that's just a gut feeling...)

        • By tvink 2026-02-04 20:42 (1 reply)

          https://www.wheresyoured.at/costs/

          Their AWS spend being higher than their revenue might hint at the same.

          Nobody has reliable data; I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.

          • By rcxdude 2026-02-059:55

            The closed frontier models seem to sell at a substantial premium over inference on open-source models, which suggests there is a decent margin on inference. Training is where they lose money. The bull case is that every model eventually makes money, but the models keep getting bigger, or at least more expensive to train, so they're borrowing money to make even more money later (which does need to converge somehow; they can't just keep scaling up until the market can't actually afford to pay for the training). The bear case is that this is basically a treadmill to stay on the frontier where they can earn that premium (if the big labs ever stop, they'll quickly be caught up by cheaper or even open-source models and lose their edge), in which case it's probably never going to become sustainable.

        • By simianwords 2026-02-04 20:32 (1 reply)

          > If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis),2 it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.

          https://epoch.ai/gradient-updates/can-ai-companies-become-pr...

          • By lysace 2026-02-0420:36

            The context of that quote is OpenAI as a whole.

      • By exitb 2026-02-04 20:21 (2 replies)

        Maybe on the API, but I highly doubt that the coding agent subscription plans are profitable at the moment.

        • By lawrenceyan 2026-02-050:00

          Build out distribution first and generate network effects.

        • By tvink 2026-02-0420:39

          For sure not

      • By ehsanu1 2026-02-04 19:56 (1 reply)

        Could you substantiate that? Does that take into account training and staffing costs?

        • By ihsw 2026-02-04 20:07 (1 reply)

          The parent specifically said inference, which does not include training and staffing costs.

          • By ehsanu1 2026-02-051:36

            But those aren't things you can really separate for proprietary models. Keeping inference running also requires staff, not just for the R&D.

  • By 4corners4sides 2026-02-04 21:24 (8 replies)

    This is one of those “don’t be evil”-style articles that companies remove when the going gets tough, but I guess we should be thankful that things are looking rosy enough for Anthropic at the moment that they would release a blog post like this.

    The point about filtering signal vs. noise in search engines can’t really be stated enough. At this point using a search engine and the conventional internet in general is an exercise in frustration. It’s simply a user hostile place – infinite cookie banners for sites that shouldn’t collect data at all, auto play advertisements, engagement farming, sites generated by AI to shill and produce a word count. You could argue that AI exacerbates this situation but you also have to agree that it is much more pleasant to ask perplexity, ChatGPT or Claude a question than to put yourself through the torture of conventional search. Introducing ads into this would completely deprive the user of a way of navigating the web in a way that actually respects their dignity.

    I also agree in the sense that the current crop of AIs do feel like a space to think as opposed to a place where I am being manipulated, controlled or treated like some sheep in flock to be sheared for cash.

    • By pixelready 2026-02-04 21:59 (3 replies)

      The current crop of LLM-backed chatbots do have a bit of that “old, good internet” flavor: a mostly unspoiled frontier where things are changing rapidly, potential seems unbounded, and the people molding the actual tech and discussing it are enthusiasts with a sort of sorcerer’s-apprentice vibe. I’m not sure how long it can persist, since I’ve seen this story before and we all understand the incentive structures at play. Does anyone know if there are precedents for PBCs or B-Corp-type businesses being held accountable for betraying their stated values? Or is it just window dressing with no legal clout? Can they change to a standard corporation on a whim and ditch the non-shareholder-maximization goals?

      • By petetnt 2026-02-05 00:43 (2 replies)

        There’s nothing old internet about these AI companies. Old internet was about giving out and asking for nothing in return. These companies take everything and give back nothing, unless you are willing to pay that is.

        • By dasil003 2026-02-05 00:50 (2 replies)

          I get the sentiment, but if you can't acknowledge that AI is useful and currently a lot better than search for a great many things, then it's hard to have a rational conversation.

          • By y0eswddl 2026-02-05 01:22 (2 replies)

            why do they need to acknowledge something outside of the point they're trying to make?

            • By dasil003 2026-02-055:11

              Because it was a middlebrow dismissal of the GP

            • By j-bos 2026-02-053:41

              because that's how conversations work. anything less is sparkling debate.

          • By nickphx 2026-02-052:41

            how is it useful to be fed misleading nonsense?

      • By einpoklum 2026-02-0423:40

        No, they don't. They soak up tons of your most personal and sensitive information like a sponge, and you don't know what's done with it. In the "good old Internet", that did not happen. Also in the good old Internet, it wasn't the masses all dependent on a few central mega-corporations shaping the interaction, but a many-to-many affair, with people and organizations of different sizes running the sites where interaction took place.

        Ok, I know I'm describing the past with rosy glasses. After all, the Internet started as a DARPA project. But still, current reality is itself rather dystopic in many ways.

      • By chanux 2026-02-053:08

        Just enjoy the "good times" powered by other people's money.

    • By stavros 2026-02-04 22:36

      > This is one of those “don’t be evil” like articles that companies remove when the going gets tough but I guess we should be thankful that things are looking rosy enough for Anthropic at the moment that they would release a blog like this.

      Exactly this. Show me the incentive, and I'll show you the outcome, but at least I'm glad we're getting a bit more time ad-free.

    • By namelosw 2026-02-05 1:49

      And it's very timely and intentional, as Gemini is already shoving product links in my face repeatedly, while OpenAI has been testing ads recently. [0]

      [0] https://openai.com/index/our-approach-to-advertising-and-exp...

    • By jhhh 2026-02-05 5:16

      Right, if there's no legal weight to any of their statements then they mean almost nothing. It's a very weak signal and just feels like marketing. All digital goods can and will be made worse over time if it benefits the company.

    • By zombot 2026-02-05 5:41

      > Introducing ads into this would completely deprive the user of a way of navigating the web in a way that actually respects their dignity.

      Say what you will, there are at least ad blockers for ads on the internet. There are _no_ ad blockers for ads in chatbots.

    • By terminalbraid 2026-02-04 22:02

      > I guess we should be thankful that things are looking rosy enough for Anthropic

      Forgive me if I am not.

    • By mrcwinn 2026-02-05 3:55

      I agree, but at least this is a policy. "Don't be evil" was vague bullshit.

    • By jorvi 2026-02-04 22:36 · 4 replies

      Current LLMs often produce much, much worse results than manually searching.

      If you need to search the internet on a topic that is full of unknown unknowns for you, they're a pretty decent way to get a lay of the land, but beyond that, off to Kagi (or Google) you go.

      Even worse is that the results are inconsistent. I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

      You cannot trust answers from an LLM.

      • By signatoremo 2026-02-04 23:16 · 1 reply

        > I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

        Are you sure? Both Gemini and ChatGPT gave me consistent answers 3 times in a row, even if the two versions are slightly different.

        Their answers are in line with this version:

        https://blog.thermoworks.com/duck_roast/

        • By jorvi 2026-02-05 3:48 · 1 reply

          What do you mean, "are you sure"? I literally saw and see it happen in front of my eyes. Just now tested it with slight variations of "ideal temperature waterfowl cooking", "best temperature waterfowl roasting", etc. and all these questions yield different answers, with temperatures ranging from 47°C–57°C (ignoring the 74°C food safety ones).

          That's my entire point. Even adding an "is" or "the" can get you way different advice. No human would give you different info when you ask "what's the waterfowl's best cooking temperature" vs "what is waterfowl's best roasting temperature".

          • By cruffle_duffle 2026-02-05 4:31 · 2 replies

            Did you point that out to one of them… like “hey bro, I’ve asked y’all this question in multiple threads and get wildly different answers. Why?”

            And the answer is probably because there is no such thing as an ideal temperature for waterfowl because the answer is “it depends” and you didn’t give it enough context to better answer your question.

            Context is everything. Give it poor prompts, you’ll get poor answers. LLMs are no different than programming a computer or anything else in this domain.

            And learning how to give good context is a skill. One we all need to learn.

            • By jorvi 2026-02-05 17:21

              But that isn't how normal people interact with search engines. Which is the whole argument everyone is making here: that LLMs are now better 'correct answer generators' than search engines. They're not. My mother directly experienced that. Her food would have come out much better if she had completely ignored Gemini and checked a site.

              One of the best things LLMs could do (and that no one seems to be doing) is allow it to admit uncertainty. If the average weight of all tokens in a response drops below X, it should just say "I don't know, you should check a different source."
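
              The gating idea above could be sketched roughly as follows. This is a hypothetical illustration only: the geometric mean over per-token probabilities and the 0.5 threshold are my own assumptions, not how any shipping chatbot actually works.

```python
import math

def gate_response(text: str, token_probs: list[float], threshold: float = 0.5) -> str:
    """Return the model's answer only if its average token confidence is high enough.

    token_probs: the probability the model assigned to each generated token.
    A geometric mean (assumption) penalizes a few very-low-confidence tokens
    more than an arithmetic mean would.
    """
    geo_mean = math.exp(sum(math.log(p) for p in token_probs) / len(token_probs))
    if geo_mean < threshold:
        return "I don't know; you should check a different source."
    return text

# High-confidence tokens pass through; low-confidence ones trigger a refusal.
print(gate_response("Roast at 57C", [0.9, 0.8, 0.95]))
print(gate_response("Roast at 57C", [0.2, 0.1, 0.15]))
```

              Real systems would need calibrated probabilities for this to be meaningful; raw token probabilities are a crude proxy for factual confidence.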

              At any rate, if my mother has to figure out some stunted, 10-sentence, multi-part question for the LLM to finally give a good, consistent answer, or can just type "best Indian restaurant in Brooklyn" (maybe even with "site:restaurantreviews.com"), which experience is superior?

              > LLMs are no different than programming a computer or anything else in this domain.

              Just feel like reiterating against this: virtually no one programs their search queries or query engineers a 10 sentence search query.

            • By jhhh 2026-02-05 5:14

              If I made a new, non-AI tool called 'correct answer provider' which provided definitive, incorrect answers to things, you'd call it bad software. But because it is AI, we're going to blame the user for not second-guessing the answers or for holding it wrong, i.e. bad prompting.

      • By r0x1n1t3 2026-02-04 23:21 · 1 reply

        I created an account just to point out that this is simply not true. I just tried it! The answers were consistent across all 5 samples with both "Fast" mode and Pro (which I think is really important to mention if you're going to post comments like this - I was thinking maybe it would be inconsistent with the Flash model).

        • By jorvi 2026-02-05 3:49

          Unfortunately, despite your account creation it remains true that this happened. Just tested it again and got different answers.

      • By chrysoprace 2026-02-04 22:54

        It obviously takes discipline, but using something like Perplexity as an aggregator typically gets me better results, because I can click through to the sources.

        It's not a perfect solution because you need the discipline/intuition to do that, and not blindly trust the summary.

      • By 12345hn6789 2026-02-05 0:09 · 1 reply

        Did you actually ask the model this question or are you fully strawmanning?

        • By jorvi 2026-02-05 3:43 · 3 replies

          My mother did, for Christmas. It was a goose that ended up being raw in a lot of places.

          I then pointed out this same inconsistency to her, and that she shouldn't put stock in what Gemini says. Testing it myself, it would give results between 47°C–57°C. And sometimes it would just trip out and give the health-approved temperature, which is 74°C (!).

          Edit: just tested it again and it still happens. But inconsistency isn't a surprise for anyone who actually knows how LLMs work.

          • By 12345hn6789 2026-02-16 4:02

            https://imgur.com/a/qYmznHa

            I just asked Gemini 3 the same question 5 times: `what temperature I should take a waterfowl out of the oven`

            and received generic advice every single time; it gave nearly identical charts, and 165°F was in every response. LLMs are unpredictable, yes. But I am more skeptical that it gave incorrect answers (raw goose) than that your mother prepared the fowl wrong.

            Cooking correctly is a skill, just as prompting is. Ask 10 people how to cook fowl and their answers will mimic the LLM.

          • By ziml77 2026-02-05 4:24

            > But inconsistency isn't a surprise for anyone who actually knows how LLMs work

            Exactly. These people saying they've gotten good results for the same question aren't countering your argument. All they're doing is proving that sometimes it can output good results. But a tool that's randomly right or wrong is not a very useful one. You can't trust any of its output unless you can validate it. And for a lot of the questions people ask of it, if you have to validate it, there was no reason to use the LLM in the first place.

HackerNews