Outsourcing thinking

30 Jan 2026 · erikjohannes.no

First, a note to the reader: This blog post is longer than usual, as I decided to address multiple connected issues in the same post, without being too restrictive on length. With modern browsing habits and the amount of available online media, I suspect this post will be quickly passed over in favor of more interesting reading material. Before you immediately close this tab, I invite you to scroll down and read the conclusion, which hopefully can give you some food for thought along the way. If, however, you manage to read the whole thing, I applaud your impressive attention span.

A common criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills. The typical argument is that outsourcing certain tasks can easily cause some kind of mental atrophy. To what extent this is true is an ongoing discussion among neuroscientists, psychologists and others, but to me, the understanding that with certain skills you have to "use it or lose it" seems intuitively and empirically sound.

The more relevant question is whether certain kinds of use are better or worse than others, and if so, which? In the blog post The lump of cognition fallacy, Andy Masley discusses this in detail. His entry point to the problem is to challenge the idea that "there is a fixed amount of thinking to do", and how it leads people to the conclusion that "outsourcing thinking" to chatbots will make us lazy, less intelligent, or in other ways be negative for our cognitive abilities. He compares this to the misconception that there is only a finite amount of work that needs to be done in an economy, which often is referred to as "the lump of labour fallacy". His viewpoint is that "thinking often leads to more things to think about", and therefore we shouldn't worry about letting machines do the thinking for us — we will simply be able to think about other things instead.

Reading Masley's blog post prompted me to write down my own thoughts on the matter, as it has been churning in my mind for a long time. I realized that it could be constructive to use his blog post as a reference and starting point, because it contains arguments that are often brought up in this discussion. I will use some examples from Masley's post to show how I think differently about this, but I'll extend the scope beyond the claimed fallacy that there is a limited amount of thinking to be done. I have done my best to write this text in a way that does not require reading Masley's post first. My aim is not to refute all of his arguments, but to explain why the issue is much more complicated than "thinking often leads to more things to think about". Overall, the point of this post is to highlight some critical issues with "outsourcing thinking".

When should we avoid using generative language models?

Is it possible to define categories of activities where the use of LLMs (typically in the form of chatbots) is more harmful than helpful? Masley lists certain cases where, in his view, it is obviously detrimental to outsource thinking. To fully describe my own perspective, I'll take the liberty to quote the items on his list. He writes it's "bad to outsource your cognition when it:"

  • Builds complex tacit knowledge you'll need for navigating the world in the future.
  • Is an expression of care and presence for someone else.
  • Is a valuable experience on its own.
  • Is deceptive to fake.
  • Is focused in a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.

I was surprised to discover that we are to a large extent in agreement on this list, despite having fundamentally different views otherwise. The disagreement lies, I believe, in the number of activities that fall within the categories outlined above, particularly three of them.

Personal communication and writing

Let's start with the point "Is deceptive to fake". Masley uses the example of:

If someone’s messaging you on a dating app, they want to know what you’re actually like.

Very true, but in my view, it's not only in such intimate or private situations where it is deceptive to fake what you are like. Personal communication in general is an area where it matters how we express ourselves, both for ourselves and those we talk or write to. When we communicate with each other, there are certain expectations framing the whole exchange. Letting our words and phrases be transformed by a machine is a breach of those expectations. The words we choose and how we formulate our sentences carry a lot of meaning, and direct communication will suffer if we let language models pollute this type of interaction. Direct communication is not only about the information being exchanged, it's also about the relationship between the communicators, formed by who we are and how we express ourselves.

I think this is not only relevant for communication between two humans, but also for text with a personal sender conveyed to a human audience in general. To a certain extent, the same principles apply. There has been a debate in the Norwegian media lately regarding the undisclosed use of LLMs in public writing, with allegations and opinions flying around. I'm very happy to see this discussion reaching broad daylight, because we need to clarify our expectations of communication now that chatbots are so widely used. While I clearly think it is beneficial to keep human-to-human communication free from an intermediate step of machine transformation, not everyone shares that view. If, going forward, our written communication will for the most part be co-authored with AI models, we need to be aware of it and shift our expectations accordingly. Some have started disclosing when they have used AI in their writing, which I think is a good step towards a better understanding of our use of LLMs. Knowing whether a text is written or "co-authored" by an LLM has an important effect on how a receiver views it; pretending otherwise is simply dishonest.

Many see LLMs as a great boon for helping people express their opinions more clearly, particularly people not writing in their native language or those with learning disabilities. As long as the meaning originates from a person, LLMs can help express that meaning in correct and effective language. I have two main objections to this. The first is about what happens to the text: in most cases it's impossible to separate the meaning from the expression of it. That is in essence what language is — the words are the meaning. Changing the phrasing changes the message. The second is about what happens to us: we rob ourselves of the opportunity to grow and learn without training wheels. LLMs can certainly help people improve a text, but the thinking process — developing the ideas — will be severely amputated when the phrasing is left up to an AI model. They quickly become a replacement instead of a help, depriving us of the opportunity to discover our own voice and who we can be and become when we stand on our own two feet.

With great care, one may be able to use a chatbot without being affected by these two drawbacks, but the problem is that with LLMs, there is an exceptionally thin line between getting help with spelling or grammar, and having the model essentially write for you, thereby glossing over your own voice. This is unavoidable with the current design of chatbots and LLM-powered tools; the step from old-school autocorrect to a generative language model is far too big. If we really envision LLMs as a tool for helping people become better at writing, we need to have a much more carefully considered interface than the chatbots we have today.

At the same time, I realize many are far more utilitarian. They just want to get the job done, finish their work, file that report, get that complaint through, answer that email, in the most efficient way possible, and then get on with their day. Getting help from an LLM to express oneself in a second language also seems useful, regardless of how much or little one learns from it (I would be more positive toward LLMs for translation were it not for the fact that current state-of-the-art LLMs are simply very bad at producing Norwegian text. I can only hope the situation is better for other non-English languages, or that it will improve over time). Additionally, LLMs seem to be efficient for people who are fighting bureaucracy, such as filing complaints and dealing with insurance companies. In this case the advantage seems greater. We must, however, remember that the "weapon" exists on both sides of the table. What will happen to bureaucratic processes when all parties involved are armed with word generators?

It is not without reservation that I express these opinions, because it may come across as if I want to deny people something that looks like a powerful tool. The point is that I think this tool will make you weaker, not stronger. LLMs don't really seem to empower people. What I currently see is the number of applications to various calls (internships, research proposals, job openings) multiplying while the quality drops. Students are asking chatbots for help with solving collaborative tasks, not realizing that everyone is asking the same chatbot, robbing us of the diversity of ideas that could have formed if they took a minute to think for themselves.

The chatbots may have lowered the threshold for participation, but the competition's ground rules haven't changed. To get better at writing, you need to write. The same goes for thinking. Applying for a job means showing who you are, not who the LLM thinks you are, or should be. Participating in the public debate means working out how to express opinions in clear language. Am I really participating if I'm not finding my own words?

It is important to note that not all text is affected in the same way. The category of writing that I like to call "functional text", covering things like computer code and pure conveyance of information (e.g., recipes, information signs, documentation), is not exposed to the same issues. But text with a personal author addressing a human audience carries particular role expectations and rests on a particular trust. An erosion of that trust will be a loss for humanity.

A pragmatic attitude would be to just let the inflation of text ensue, and take stock after the dust has settled. What will be left of language afterwards? My conservative viewpoint stems from believing that what we will lose is of greater worth than what we gain. While LLMs can prove useful in the short term, using them is treating a symptom instead of the problem. It is a crutch, although some may truly be in need of that crutch. My only advice would be to make sure you actually need it before you lean on it.

Valuable experiences

Using LLMs is not only about writing. Masley mentions that it's bad to outsource activities that are "a valuable experience on its own". I couldn't agree more, but I suspect that he will disagree when I say that I think this category encompasses a lot of what we already do in life. Major LLM providers love to show how their chatbots can be used to plan vacations, organize parties, and create personal messages to friends and family. I seldom feel more disconnected from the technological society than when I watch these advertisements.

To me, this highlights a problem that goes to the core of what it means to be human. Modern life brings with it a great many activities that can feel like chores, but at the same time it seems like we are hell-bent on treating everything as a chore as well. Humans are surprisingly good at finding discontentment in nearly anything, maybe because of an expectation in modern society that we should be able to do anything we want, anytime we want it — or perhaps more importantly, that we should be able to avoid doing things we don't feel like doing. Our inability to see opportunities and fulfillment in life as it is leads to the inevitable conclusion that life is never enough, and that we would always rather be doing something else.

In theory, I agree that automating some things can free up time for other things that are potentially more meaningful and rewarding, but we have already reached a stage where even planning our vacation is a chore that apparently a lot of people would like to avoid doing. I hope that AI's alleged ability to automate "nearly anything" helps us realize what is worth spending time and effort on, and rediscover the value of intentional living.

Building knowledge

The third point I would like to address is that we shouldn't use chatbots when it "builds complex tacit knowledge you'll need for navigating the world in the future", according to Masley. Again, I agree completely, and again, I think that this point encompasses a great deal of daily life. Building knowledge happens not only when you sit down to learn something new, but also when you do repetitive work.

This misconception is not new with chatbots; it has been with us since we started carrying smartphones in our pockets. With the internet at hand at all times, there's apparently no need to remember information anymore. Instead of using our brains for storing knowledge, we can access information online when we need it, and spend more time learning how to actually use the information and think critically. The point we are missing here is that acquiring and memorizing knowledge is a huge part of learning to use that knowledge. It is naive to think that we can simply separate the storage unit from the processing unit, as if we were a computer.

I learned this lesson as a piano student. I was trying to understand jazz, and to figure out how good improvisers could come up with new phrases so easily on the spot. How does one practice improvisation? Is it possible to exercise the ability to come up with something new that immediately sounds good? I ended up playing similar riffs almost every time I tried. After a while I became convinced that good jazz players must be born with some inherent creativity, some inner musical inspiration that hummed melodies inside their heads for them to play.

One of my tutors taught me the real trick: good improvisation does not come from just practicing improvisation. You need to play existing songs and tunes, many of them, over and over, learn them by heart, get the chord progressions and motifs under your skin. This practice builds your intuition for what sounds good, and your improvisation can spring from that. Bits and pieces of old melodies are combined into new music. In that sense, we are more like a machine learning model than a computer, but do not make the mistake of thinking that is actually what we are.

There is a need for clarification here: I'm not saying that nothing should be automated by LLMs. But I think many are severely underestimating the knowledge we are building from boring tasks, and we are in danger of losing that knowledge when the pressure for increased efficiency makes us turn to the chatbots.

The extended mind

As a sidenote, I would like to contest the idea of the extended mind, as explained by Masley:

[M]uch of our cognition isn’t limited to our skull and brain, it also happens in our physical environment, so a lot of what we define as our minds could also be said to exist in the physical objects around us.

It seems kind of arbitrary whether it’s happening in the neurons in your brain or in the circuits in your phone.

This statement is simply absurd, even when read in context. The fact that something happens in your brain rather than on a computer makes all the difference in the world. Humans are something more than information processors. Yes, we process information, but it is extremely reductionist to treat ourselves as objects where certain processes can be outsourced to external devices without consequences. Does it really matter if I remember my friend's birthday, when I can have a chatbot send them an automated congratulation? Yes, it matters because in the first case you are consciously remembering and thinking about your friend, consolidating your side of the relationship.

The quoted statement above is followed up with:

It’s true that you could lose your phone and therefore lose the stored knowledge, but you could also have a part of your brain cut out.

Losing your phone and losing a part of your brain are two tremendously different things, both in terms of likelihood and consequences. Not only does the statement above significantly underestimate the processes that happen in our brain, but to even liken having a part of your brain cut out to losing your phone reveals that the premise of the argument is severely detached from reality.

The design of our built environments is also brought up to show how it's beneficial to minimize the amount of thinking we do:

[M]ost of our physical environments have been designed specifically to minimize the amount of thinking we have to do to achieve our daily goals.

Try to imagine how much additional thinking you would need to do if things were designed differently.

This doesn't hold up to scrutiny. Yes, if our environment suddenly changed, it would require extra mental effort from us to navigate. For a time. But then we would become familiar with the alternative design and adapt ourselves to it. The only case where we would have to do additional thinking is if the design of our physical environments changed all the time.

What we think about does matter

Regarding the "lump of cognition fallacy", I fully agree that we need not worry about "draining a finite pool" of thinking, leaving "less thinking" — whatever that means — for humans. There is, however, another fallacy at play here, which is that "it does not matter what we think about, as long as we think about something". It is easy to be convinced that if a computer can do the simple, boring tasks for me, I can deal with more complex, exciting stuff myself. But we must be aware that certain mental tasks are important for us to do, even though a machine technically could do them for us.

To illustrate: If I outsource all my boring project administration tasks to a chatbot, it leaves more time for my main task: research. But it also robs me of the opportunity to feel ownership of the project and to build a basis for making high-level decisions in it. In a hypothetical situation where a chatbot performs all administrative tasks perfectly on my behalf, I will still have lost something, which may in turn have an impact on the project. I'm not saying that no tasks should be automated at all, but we must be aware that we always lose something when automating a process.

Comparing with the "lump of labour" fallacy again: While it may be true that outsourcing physical work to machines will simply create new types of work to do, it doesn't mean that the new work is useful, fulfilling, or beneficial for individuals and society. The same goes for thinking. We must acknowledge that all kinds of thinking have an effect on us, even the boring and tedious kinds. Removing the need for some cognitive tasks can have just as much influence, positive or negative, as taking up new types of cognitive tasks.

Conclusion

We have a major challenge ahead of us in figuring out what chatbots are suitable for in the long term. Personal communication may change forever (that is to say, maybe it won't stay personal anymore), education systems will require radical adaptations, and we need to reflect more carefully on which experiences in life actually matter. What is truly exciting about this new type of technology is that it forces us to face questions about our humanity and values. Many formerly theoretical questions of philosophy are becoming relevant to our daily lives.

A fundamental point I'm trying to bring forth is that how we choose to use chatbots is not only about efficiency and cognitive consequences; it's about how we want our lives and society to be. I have tried to argue that there are good reasons for protecting certain human activities against the automation of machines. This is in part based on my values, and does not rely on research into whether or not our efficiency at work or cognitive abilities are affected by it. I cannot tell other people what they should do, but I challenge everyone to consider what values they want to build our communities on, and let that weigh in alongside what the research studies tell us.

Reply via email. Subscribe with RSS. No tracking. No cookies. No visit logs. No generative AI was involved in any writing on this site.

©2018-2026 Erik Johannes Husom.



Comments

  • By 3371, 2026-02-01 5:54

    Ever since Google started experimenting with LLMs in Gmail it has bothered me a lot. I firmly believe every word and the way you put them together portrays who you are. Using LLMs for direct communication is harmful to human connections.

    • By mettamage, 2026-02-01 11:13

      It can be. It can also not be. A friend of mine had a PITA boss. Thanks to ChatGPT he salvaged his relationship with him even though he hated working with him.

      He went on to something else but his stress levels went way down.

      All this is to say: I agree with you if the human connection is in good faith. If it isn’t then LLMs are helpful sometimes.

      • By lofties, 2026-02-01 11:52

        It sounds like that relationship was not supposed to be salvaged to begin with. ChatGPT perhaps only prolonged your friend's suffering, since he ended up moving on in the end anyway. Perhaps unnecessarily delayed.

        • By borroka, 2026-02-01 16:38

          My knee-jerk reaction is that outsourcing thinking and writing to an LLM is a defeat of massive proportions, a loss of authenticity in an increasingly less authentic world.

          On the other hand, before LLMs came along, didn't we ask a friend or colleague for their opinion on an email we were about to write to our boss about an important professional or personal matter? I have been asked several times to give advice on the content and tone of emails or messages that some of my friends were about to send. On some occasions, I have written emails on their behalf.

          Is it really any different to ask an LLM instead of me? Do I have a better understanding of the situation, the tone, the words, or the content to use?

          • By pm215, 2026-02-01 17:23

            I think there are a couple of differences here:

            Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently.

            Secondly, I think when a friend is giving advice the responses are more likely to be advice, i.e. more often generalities like "you should emphasize this bit of your resume more strongly" or point fixes to grammar errors, partly because that's less effort and partly because "let me just rewrite this whole thing the way I would have written it" can come across as a bit rude if it wasn't explicitly asked for. Obviously you can prompt the LLM to only provide critique at that level, but it's also really easy to just let it do a lot more of the work.

            But if you know you're prone to getting into conflicts in email, an LLM powered filter on outgoing email that flagged up "hey, you're probably going to regret sending that" mails before they went out the door seems like it might be a helpful tool.

            • By borroka, 2026-02-01 18:20

              "Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently."

              - I find this a point in favor of LLM and not a flaw. It is a philosophical stance, one for which what does not require effort or time is intrinsically not valuable (see using GLP peptides vs sucking it up for losing weight). Sure, it requires effort and dedication to clean your house, but given the means (money), wouldn't you prefer to have someone else clean your place?

              "Secondly, I think when a friend is giving advice the responses are more likely to be advice"

              - You can ask an LLM for advice instead of writing directly and without further reflection on the writing provided by the model. Here I find parallels with therapy, which in its modern version, does not provide answers, but questions, means of investigation, and tools to better deal with the problems of our lives.

              But if you ask people who go to therapy, the vast majority of them would much prefer to receive direct guidance (“Do this/don't do that”).

              In the cases in which I wrote a message or email on behalf of someone else, I was asked to do it: can you write it for me, please? I even had to write recommendation letters for myself--I was asked to do that by my PhD supervisor.

              • By pm215, 2026-02-01 18:31

                I wasn't arguing that getting LLMs to do this is necessarily bad -- I just think it really is different from having in the past been able to ask other humans for help, and so that past experience isn't a reliable guide to whether we might find we have problems with unexpected effects of this new technology.

                If you are concerned about possible harms in "outsourcing thinking and writing" (whether to an LLM or another human) then I think that the frequency and completeness with which you do that outsourcing matters a lot.

                • By borroka, 2026-02-01 18:56

                  It all depends on the use one makes of it.

                  It can become an indispensable asset over time, or a tool that can be used at certain times to solve, for example, mundane problems that we have always found annoying and that we can now outsource, or a coaching companion that can help us understand something we did not understand before. Since humans are naturally lazy, most will default to the first option.

                  It's a bit like the evolution of driving. Today, only a small percentage of people are able to describe how an internal combustion engine works (<1%?), something that was essential in the early decades after the invention of the car. But I don't think that those who don't understand how an engine works feel that their driving experience is limited in any way.

                  Certainly, thinking and reasoning are universal tools, and it could be that in the near future we will find ourselves dumber than we were before, unable to do things that were once natural and intuitive.

                  But LLMs are here to stay, they will improve over time, and it may well be that in a few decades, the human experience will undergo a downgrade (or an upgrade?) and consist mainly of watching short videos, eating foods that are engineered to stimulate our dopamine receptors, and living a predominantly hedonistic life, devoid of meaning and responsibility. Or perhaps I am describing the average human experience of today.

        • By mettamage, 2026-02-01 19:04

          Not really, he was looking for other jobs. One can't just be without a job unless they have enough savings, which he didn't have.

    • By gjadi, 2026-02-02 0:15

      IMHO, the real problem is that they create an even greater dissonance between online life and IRL.

      Think about dating apps, pictures could be fake, and now words exchanged can be fake too.

      You thought you were arguing with a gentle and smart colleague over chat and email; too bad, when you meet them at a conference or at a restaurant you find them very unpleasant.

    • By nullsanity, 2026-02-01 7:20

      This comment has made me glad for LLMs in Gmail. If someone is going to over-analyze my every word because he firmly believes it portrays who I am, I'd appreciate the layer of obfuscation between me and this creepazoid.

      • By flyinglizard, 2026-02-01 7:23

        Assuming you did not use an LLM to craft your comment, I’d say “case in point”.

      • By johnfn, 2026-02-01 7:36

        If your words don’t portray who you are, what does?

        • By ako, 2026-02-01 8:18

          People make mistakes in the words they use, I often think “oops, I shouldn’t have said it like that”.

        • By jaggederest, 2026-02-01 8:37

          Actions? I generally judge people by what they do, not what they say - though of course I have to admit that saying things does fall under "doing something", if it's impactful.

          • By atoav, 2026-02-01 12:14

            The truth is that both words and actions communicate something, especially in combination. And sometimes words are the action.

      • By 3371, 2026-02-05 20:30

        That's exactly the point. In your case, you don't want to show who you are; whether or not there's a connection doesn't matter.

  • By sumul, 2026-02-01 19:39

    This part really caught my attention (along with the rest of the preceding paragraph):

    > Our inability to see opportunities and fulfillment in life as it is, leads to the inevitable conclusion that life is never enough, and we would always rather be doing something else.

    I agree with the article completely, as it effectively names an uneasy feeling of hesitation I’ve had all along with how I use LLMs. I have found them tremendously valuable as sounding boards when I’m going in circles in my own well-worn cognitive (and sometimes even emotional) ruts. I have also found them valuable as research assistants, and I feel grateful that they arrived right around the time that search engines began to feel all but useless. I haven’t yet found them valuable in writing on my behalf, whether it’s prose or code.

    During my formal education, I was very much a math and science person. I enjoyed those subjects. They came easily to me, which I also enjoyed. I did two years of liberal arts in undergrad, and they kicked my butt academically in a way that I didn’t realize was possible. I did not enjoy having to learn how to think and articulate those thoughts in seminars and essays. I did not enjoy the vulnerability of sharing myself that way, or of receiving feedback. If LLMs had existed, I’m certain I would have leaned hard on them to get some relief from the constant feeling of struggle and inadequacy. But then I wouldn’t have learned how to think or how to articulate myself, and my life and career would have been significantly less meaningful, interesting, and satisfying.

    • By port11, 2026-02-03 8:19

      As the quotes go, before you judge others make sure your affairs are in order. I’m not judging the young that are now trying to make sense of this hectic and overwhelming world.

      But… I do agree with you, that had these things been there, I/we’d all be leaning on them. It’s the manageable hardship of life that makes it worth it, we better ourselves through the pain. My 18-year old self would complain, as would any me up to mid-30s. I’d have to insist to him that things will get better, but that he must work on what needs improving. Can’t just ask a language model for validation.

  • By b00ty4breakfast, 2026-02-01 5:22

    What I am worried about (and it's something about regular internet search that has worried me for the past ~10 years or so) is that, after they've trained a generation of folks to rely on this tech, they're going to start inserting things into the training data (or whatever the method would be) to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.

    • By chii 2026-02-01 7:32 · 1 reply

      > after they've trained a generation of folks to rely on this tech ... bias it towards favoring certain agendas

      Previously, this happened with print media. Then it happened with the airwaves. It only makes logical sense that the trend continues with LLMs.

      Basically, the fundamental issue is that the source of information is under someone else's control, and that someone will always have an agenda.

      But with LLMs, it's crucial to try to change the trend. IMHO, it should be possible for a regular person to own their computing - this should include the LLM capability/hardware, as well as the model(s). Without such capabilities, the exact same thing will happen as has happened in the past with new technologies.

      • By yetihehe 2026-02-01 17:16

        > it should be possible for a regular person to own their computing

        And regular people will not care about this; they will simply select a model carrying the biases of whoever they deem "works better for me at this one task that I needed".

        Just like you said:

        > previously, this happened with print media. Then it happened with the airwaves. It only makes logical sense that the trend continues with LLMs.

        I wish it wasn't so, but I have no idea how to make people care about not being under someone's control.

    • By simonw 2026-02-01 10:26 · 2 replies

      I worried about this a lot more at the tail end of 2023, when OpenAI's GPT-4 (released that March) was still very clearly ahead of every other model. It briefly looked like control of the most useful model would stay with a single organization, giving it outsized influence over how LLMs shape human society.

      I don't worry about that any more because there's so much competition: dozens of organizations now produce usable LLMs and the "best" is no longer static. We have frontier models from the USA, France (Mistral) and China now.

      The risk of a model monopoly centralizing cultural power feels a lot lower now than it did a couple of years ago.

      • By norir 2026-02-01 11:30 · 1 reply

        Model competition does nothing to address monopoly consolidation of compute. If you have control over compute, you can exert control over the masses. It doesn't matter how good my open source model is if I can't acquire the resources to run it. And I have no doubt that the big players will happily buy legislation to both entrench their compute monopoly/cartel and control what can be done using their compute (e.g. making it a criminal offence to build a competitor).

        • By simonw 2026-02-01 15:43 · 1 reply

          Model competition means that users have multiple options to choose from, so if it turns out one of the models has biases baked in, they can switch to another.

          Which incentivizes the model vendors not to mess with the models in ways that might lose them customers.

          • By direwolf20 2026-02-02 12:39

            I don't think anyone considers biases more important than, say, convenience. The model that only suggests Coca–Cola brands will win over the one that's ten times slower because it runs on your computer.

      • By b00ty4breakfast 2026-02-01 18:45

        I don't think model competition is necessarily the fix to this issue. We're not even sure the setup as it exists today will remain the norm. It could be that other entities license the models for their own projects, which then become the primary point of contact between users and LLMs. They are obviously going to want to fine-tune the models to their use case, and this could introduce intentional commercial or ideological biases.

        And commercial biases wouldn't necessarily be affected by competition in the way that you're describing. For example, if it becomes profitable for one of these companies to offer to insert links to buy ingredients at WalMart (or wherever) for the goulash recipe you asked for that's going to become the thing that companies go after.

        And all of this assumes that these biases will be obvious rather than subtle.

    • By human_llm 2026-02-01 5:50

      Absolutely. Like most things on the Internet, it will get enshittified. I think it is very likely that at some point there will be "ads" in the form of the chat bot giving recommendations that favor certain products and services.

    • By ben_w 2026-02-01 11:04

      > to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.

      Do you mean like Grok is already doing in such a ham-fisted way?

    • By Gud 2026-02-01 8:55 · 3 replies

      This is already happening. People are conditioned to embrace capitalism, where a small percentage of the population is born into the owning class while the majority labour.

      • By yetihehe 2026-02-01 17:22

        I think that's called feudalism. Maybe our reality doesn't work the way its name suggests, and we are starting to have a different system regardless of what we call it.

      • By wafflemaker 2026-02-01 9:42 · 1 reply

        Having heard how my grandma had problems and was eventually told by the police to shut down her knitting production (done in her free time, in addition to regular work) in Communist Poland, I believe that it's better to somehow upgrade capitalism than to try to build a good communism just one more time.

        It still earned her enough extra money to build a house in the city after moving out of the village.

        • By norir 2026-02-01 11:31

          Communism is neither the opposite of laissez-faire capitalism nor the only alternative.

      • By ThrowawayR2 2026-02-01 17:13 · 1 reply

        The opposition to capitalism has such a disastrous track record, economically and in terms of body count, that embracing capitalism is far more sensible.

        • By b00ty4breakfast 2026-02-01 18:34 · 2 replies

          I'm not saying that the other systems, by which I assume you mean the various Marxist political projects, are good (and we won't even get into how many of those alternatives were actually not-capitalism), but I think dismissing the "body count" of capitalism while simultaneously ascribing all deaths under those alternative systems directly to {otherSystem} is extremely disingenuous. Doubly so given that modern first-world capitalism often outsources the human cost of its milieu to the third world, so that middle-class suburbanites don't have to see the real price of their mass-produced lifestyles.

          • By inglor_cz 2026-02-01 18:41

            Modern Western countries mostly drifted towards a mix of capitalism and social democracy.

            "modern first-world capitalism often outsources the human cost of its milieu to the third world"

            This is a bit of "damned if you do, damned if you don't".

            If you don't do any business with poorer countries, you can be called a heartless isolationist who does not want to share any wealth and only hoards his money himself.

            If you do business with poorer countries, but let them determine their own internal standards, you will be accused of outsourcing unpleasant features of capitalism out of your sight.

            If you do business with poorer countries and simultaneously demand that they respect your standards in ecology, human rights etc., you will be accused of ideological imperialism and making impossible demands that a poorer country cannot realistically meet.

            Pick your poison.

          • By ThrowawayR2 2026-02-01 18:53

            The alternative systems were, IIRC, just as willing to plunder their satellite states and the third world as the capitalists were, so it would be an equal demerit for both, I'd think?

HackerNews