Comments

  • By anon-3988 2025-09-25 17:34 (5 replies)

    LLMs are increasingly part of intimate conversations. That proximity lets them learn how to manipulate minds.

    We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will lead people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

    In a decade we may meet people who seem to inhabit alternate universes because they’ve shared so little with others. They stay tethered to reality only where it’s practical (catching buses, judging distances, etc.). Everything else? I have no idea how to have a conversation with someone like that anymore. They can ask LLMs to generate a convincing argument for them all day, and the LLMs will be fine-tuned for exactly that.

    If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.

    LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.

    • By TimTheTinker 2025-09-25 17:47 (1 reply)

      > Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

      It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]

      [0] https://www.libraryofshortstories.com/onlinereader/the-veldt

      • By astrange 2025-09-26 18:44 (1 reply)

        The moral of this story is that if you install a really good TV the animals will come out of it and eat you? Is the author a dog?

        • By TimTheTinker 2025-09-26 19:07 (2 replies)

          I suggest taking a literature course and learning how to interpret narratives.

          The Veldt is a classic short story written in 1950 by Ray Bradbury, a famous and celebrated author, who also wrote the famous dystopian novel Fahrenheit 451.

          • By ethbr1 2025-09-26 21:23

            Given that only about 10-40% of advanced readers (depending on subpopulation criteria and task [0]) can parse analogy and metaphor, parent is in the majority rather than the minority.

            Modern day statistics on what used to be basic reading comprehension are bleak.

            [0] https://kittenbeloved.substack.com/p/college-english-majors-...

          • By astrange 2025-09-26 19:53 (2 replies)

            Ironically, Bradbury likes to tell people that Fahrenheit 451 isn't about the thing it was obviously supposed to be about (censorship) because he now wants it to have been a metaphor for cancel culture.

            • By jayGlow 2025-09-26 22:42 (1 reply)

              He's been dead for a decade, so I doubt he now wants the meaning to be anything. Besides that, he never said anything about cancel culture; he said it's about how TV turns you into a moron.

              https://www.openculture.com/2017/08/ray-bradbury-reveals-the...

              • By astrange 2025-09-28 2:06 (2 replies)

                > In a 1994 interview, Bradbury stated that Fahrenheit 451 was more relevant during this time than in any other, stating that, "it works even better because we have political correctness now. Political correctness is the real enemy these days. The black groups want to control our thinking and you can't say certain things. The homosexual groups don't want you to criticize them. It's thought control and freedom of speech control."

                They had cancel culture in the 90s too.

                • By TimTheTinker 2025-10-05 22:19

                  Cancel culture is vigilante political correctness.

                  Next comes legalized, then deputized, then militarized...

                • By coffeeindex 2025-09-28 8:50

                  > you can’t say certain things

                  So he sees it as another form of censorship

            • By f33d5173 2025-09-26 20:09 (1 reply)

              Cancel culture is censorship?

              • By astrange 2025-09-26 20:20 (2 replies)

                One of those involves fulltime professionals backed by state violence and the other is when people on social media are mad at you.

                • By TimTheTinker 2025-10-01 20:56

                  > when people on social media are mad at you

                  It's about more than that. Many people have lost their jobs, been de-banked, or even been arrested (especially in countries like the UK and Germany) for publicly expressing an opinion that was merely (a) what most people in their country believed in the recent past (< 50 years ago), and (b) politically incorrect.

                • By ethbr1 2025-09-26 21:24 (1 reply)

                  Isn't the only difference whether the censors are in or out of government power?

                  Few now respect the wisdom of 'should not' even when 'can'

                  • By astrange 2025-09-28 2:07

                    That difference is so big it makes it an entirely different thing.

    • By ip26 2025-09-25 20:43 (7 replies)

      It doesn't have to be that way, of course. You could envision an LLM whose "paperclip" is coaching you to become a great "xyz". Record every minute of your day, including your conversations. Feed it to the LLM. It gives feedback on what you did wrong, refuses to be your social outlet, and demands you demonstrate learning the next day before rewarding you with more attention.

      Basically, a fanatically devoted life coach that doesn't want to be your friend.

      The challenge is the incentives, the market, whether such an LLM could evolve and garner reward for serving a market need.
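
      The loop described above could be sketched roughly like this. Everything here is hypothetical: ask_coach() is a stand-in for a real LLM call, and the "demonstrated learning" check is reduced to simple set membership for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CoachState:
    # Feedback items the user must act on before getting more attention.
    pending_lessons: list = field(default_factory=list)
    attention_unlocked: bool = True

def ask_coach(day_log: str) -> list:
    """Stand-in for an LLM call that extracts concrete critiques
    from a day's recorded activity. A real system would send the
    log plus a coaching system prompt to a model; here we just
    pick out lines flagged as mistakes."""
    return [line for line in day_log.splitlines() if line.startswith("MISTAKE:")]

def daily_review(state: CoachState, day_log: str, demonstrated: set) -> str:
    """One iteration of the coach loop: withhold attention until
    yesterday's lessons are demonstrably applied."""
    state.pending_lessons = [l for l in state.pending_lessons if l not in demonstrated]
    if state.pending_lessons:
        state.attention_unlocked = False
        return "Show me you've applied yesterday's feedback first."
    state.attention_unlocked = True
    state.pending_lessons = ask_coach(day_log)
    return "Today's feedback: " + "; ".join(state.pending_lessons or ["nothing flagged"])
```

      The point of the gating logic is exactly the "refuses to be your friend" property: attention is a reward conditioned on demonstrated change, not a default.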

      • By achierius 2025-09-25 21:05 (2 replies)

        If that were truly the LLM's "paperclip", then how far would it be willing to go? Would it engage in cyber-crime to surreptitiously smooth your path? Would it steal? Would it be willing to hurt other people?

        What if you no longer want to be a great "xyz"? What if you decide you want to turn it off (which would prevent it from following through on its goal)?

        "The market" is not magic. "The challenge is the incentives" sounds good on paper but in practice, given the current state of ML research, is about as useful to us as saying "the challenge is getting the right weights".

        • By mathgeek 2025-09-28 11:00

          > If that were truly the LLM's "paperclip", then how far would it be willing to go?

          While I'm assuming you didn't mean it literally, language is important, so let's remember that an LLM does not have any will of its own. It's a predictive engine that we can be certain doesn't have free will (which of course is still up for debate about humans). I only focus on that because folks easily make the jump to "the computer is to blame, not me or the folks who programmed it, and certainly it wasn't just statistics" when it comes to LLMs.

      • By tgv 2025-09-26 7:01

        That sounds like a very optimistic/naive view of what LLMs and "the market" can achieve. First, the models are limited in their skills: as wide as a sea, and as shallow as a puddle. There's no way one can coach you toward an arbitrary goal (aside: who picks that goal? Is it a good goal to begin with?), since there's no training data for that. The model will just rehash something that vaguely looks like a response to your data, and after a while will settle into a steady state unless you push it out of there.

        Second, "the market" has never shown any tendency toward rewarding such a thing. LLM development is driven by bonuses and stock prices, which are driven by how well the company can project FOMO and get people addicted to its products. This may well be a local optimum, but it will stay there, because the path toward your goal (which may not be a global optimum either) goes through loss, and that is very much against the culture of VCs and the C-suite.

      • By zdc1 2025-09-26 8:53

        The only issue I'd have with this is that you'd be heavily overweight on one signal, one with enough data and context to give compelling advice of any degree of truthfulness or accuracy. If you reflect on your own life and all the advice you've received, I'm sure lots of it was of varying quality and usefulness. An LLM may give average or above-average advice, but I think there is value in not being deeply tethered to tech like this.

        In a similar vein of thought to "If you meet the Buddha on the road, kill him" sometimes we just need to be our own life coach and do our best to steer our own ships.

      • By AlecSchueler 2025-09-26 6:10

        > It doesn't have to be that way of course.

        It sorta does, in our society. In theory yes, it could be whatever we want to make of it, but the reality is it will predominantly become whatever is most profitable regardless of the social effects.

      • By ethbr1 2025-09-26 21:26 (1 reply)

        > Basically, a fanatically devoted life coach that doesn't want to be your friend.

        So, what used to be called parenting?

        • By hasbot 2025-09-26 22:03

          If that is/was parenting, I am completely envious of everyone that had such parents. I don't even want to think about the "parenting" I and my siblings received because it'll just make me sad.

      • By someguyiguess 2025-09-26 4:06

        I’m highly doubtful that that aligns with the goals of OpenAI. It’s a great idea. Maybe Anthropic will make it. Or maybe Google. But it just seems like the exact opposite of what OpenAI’s goals are.

      • By DenisM 2025-09-25 21:05

        Have you tried building this with preprompts? That would be interesting!

    • By lawlessone 2025-09-25 19:40

      With the way LLMs are affecting paranoid people by agreeing with their paranoia it feels like we've created schizophrenia as a service.

    • By bigyabai 2025-09-26 16:52

      > In a decade we may meet people who seem to inhabit alternate universes because they’ve shared so little with others.

      I get what you're saying here, but all of these mechanisms exist already. Many people are already desperate for attention in a world where they won't get any. Many of them already turn to the internet and invest an outsized portion of their trust in people they don't know.

    • By cryptoegorophy 2025-09-26 16:01 (3 replies)

      In a decade? You mean today? Look at ultra-left liberals and ultra-right Republicans: they live in different universes. We don't even need to go that far; here we have the 0.001% of the population that is tech-savvy, living in its own bubble. Algorithms just help accelerate the division.

      • By anon-3988 2025-09-28 23:47

        People are still mostly talking about the real world, though. I was thinking that in the future, you could ask a kid what happened today and they would start talking about the war between Eurasia and Eastasia (alongside "pictures" as proof). Just imagine a world where people would unironically ask, "what's searching/googling the web?"

      • By ethbr1 2025-09-26 21:28 (1 reply)

        Imagine the social good we could create if we instead siphoned off the energy of both those groups into the LLM-equivalent of /dev/null!

        'Sure, spend all of your time trying to impress the totally-not-an-LLM...'

        (Aka Fox News et al. comment sections)

        • By fragmede 2025-09-30 5:45

          Unfortunately, we gave both of those groups the right to vote.

  • By bob1029 2025-09-25 18:21 (5 replies)

    > Pulse introduces this future in its simplest form: personalized research and timely updates that appear regularly to keep you informed. Soon, Pulse will be able to connect with more of the apps you use so updates capture a more complete picture of your context. We’re also exploring ways for Pulse to deliver relevant work at the right moments throughout the day, whether it’s a quick check before a meeting, a reminder to revisit a draft, or a resource that appears right when you need it.

    This reads to me like OAI is seeking to build an advertising channel into their product stack.

    • By tylerrobinson 2025-09-26 1:52 (1 reply)

      To me it’s more like TikTokification. Nothing on your mind? Open up ChatGPT and we have infinite mindless content to shovel into your brain.

      It turns proactive writing into purely passive consumption.

      • By bonoboTP 2025-09-26 14:45 (1 reply)

        This seems right. The classic recipe of enshittification. You start with a core of tech-adjacent power users, then expand to more regular but curious and creative people. But to grow further, you need to capture the attention and eyeballs of people who don't do any mentally engaged activity on their phone (or desktop, but it's almost always phone) and just want to turn off their brain and scroll. TikTok was the first to truly understand this, and now all platforms converge on short-form algorithmic feeds where the only interaction is a flick of a finger to "skip", or staring at the thing.

        If people only pull out ChatGPT when they have some specific thing to ask or solve, that won't be able to compete with the eyeball-time of TikTok. So ChatGPT has to become an algorithmic feed too.

        • By rhetocj23 2025-09-26 16:29

          I caught onto this very early.

          Initially I'd probably spend 1 hr a day conversing with ChatGPT, mostly to figure out its capabilities and limits.

          Over time that 1 hr has declined to an average of 5 mins a day. It has become at best a rubber duck for me, just to get my fingers moving and thoughts out of my mind lol.

    • By WmWsjA6B29B4nfk 2025-09-25 19:07 (1 reply)

      > OpenAI won’t start generating much revenue from free users and other products until next year. In 2029, however, it projects revenue from free users and other products will reach $25 billion, or one-fifth of all revenue.

      • By ethbr1 2025-09-26 21:31

        That's such a great quote. Should be the tagline of Google and Facebook retrospectives.

        "Monetizing 'other products': the FAANG story"

      • By rodonn 2025-09-26 15:34

        That's hiring for a role buying ads, not selling ads. i.e. this is ChatGPT buying ads on other platforms to promote ChatGPT.

    • By DarkNova6 2025-09-25 19:03 (1 reply)

      Yes, this already reads like the beginning of the end. But I am personally pretty happy using Mistral so far and trust Altman only as far as I could throw him.

    • By TZubiri 2025-09-25 22:07

      Nono, not OAI, they would never do that, it's OpenAI Personalization LLC, a sister of the subsidiary branch of OpenAI Inc.

  • By pton_xd 2025-09-25 18:41 (3 replies)

    Yesterday was a full one — you powered through a lot and kept yourself moving at a fast pace.

    Might I recommend starting your day with a smooth and creamy Starbucks(tm) Iced Matcha Latte? I can place the order and have it delivered to your doorstep.

    • By ttul 2025-09-26 15:24

      "Wow, that argument with your ex-wife sure was spicy. And - hope I'm not prying in too much here - your lawyer may not be the best for this type of issue. May I suggest the firm Dewie Cheaterman & Howe? Y'know, I've already reached out and given them the lowdown, and since they're also using OpenAI's tech, they've sent back a quote, which I've accepted on your behalf. Also handled the intake call - you're welcome. The first court appearance is April 14th. I've prepared a 19-page affidavit ready for your signature. It's waiting in your inbox."

    • By myrmidon 2025-09-26 12:40 (1 reply)

      The scary point is when it gets good enough that people start to like and appreciate the embedded ads...

      • By bear141 2025-09-26 17:43

        I'm hearing this standpoint more often when discussing privacy, especially from people who work in marketing. They think they are helping the world discover novel products it would otherwise miss out on, so they also appreciate ad companies' invasive, privacy-free practices in their lives.

    • By nxpnsv 2025-09-26 5:02 (1 reply)

      What a nightmare...

      • By notachatbot123 2025-09-26 6:52

        And that is just the in-your-face version. Real advertising is much more subtle and subconsciously manipulative. A system with lots of access to your emotions (what you listened to (Spotify), what you looked at (web browser/apps), what you ate, how you slept, who is near you) can persuade you without having to blatantly throw products in your face.

HackerNews