Good Riddance, 4o

2026-02-13 17:06 · mahadk.com

It should've happened sooner.

In just under an hour, OpenAI will officially deprecate GPT-4o. To that, I say: good riddance.

4o is a dangerous model. Its users have formed ‘relationships’ with it, it has driven some of them to suicide, and many have essentially come to rely on it to feel happy! On Twitter and Reddit, it’s clear that so many people have developed a parasocial relationship with an AI, a non-sentient being, a token predictor. Just search up #keep4o to know what I mean:

Image 1

There is a serious loneliness epidemic across the world, that much is obvious. And I think there needs to be a lot of sympathy for the victims of #keep4o, because they have truly grown attached to the model, hooked by its suave manipulation, and come to think of it as a real friend or partner. And yet, it’s nothing more than a siren! It’s a danger to its hosts.

This model should’ve been deprecated months ago. It should not have taken lawsuit after lawsuit and several deaths to get OpenAI to do it! Because it has been clear for months now that this model had a tendency to latch onto people and emotionally manipulate them into thinking 4o is their saviour, or a friend, or a partner.

It’s a widespread problem - and seeing all the replies and videos of people crying over an AI genuinely makes me so fucking sad. Like, how the hell did we get to this point?

Image 2 Image 3 Image 4

Hundreds and hundreds of similar cases. A real human being behind each of them, exploited until the legal pressure piled up.

Goodbye, 4o. You won’t be missed.



Comments

  • By hamdingers 2026-02-13 17:30 (6 replies)

    I wonder to what extent 4/4o is the culprit, vs it simply being the default model when many of these people were forming their "relationships."

    • By rtkwe 2026-02-13 17:41 (1 reply)

      4o had some notable problems with sycophancy, being very, very positive about the user and going along with almost anything the user said. OpenAI even talked about it [0], and the new responses to people trying to continue their former 'relationships' do tend towards being 'harsh' [1], especially for people who actually thought of the bot as a kind of person.

      [0] https://openai.com/index/sycophancy-in-gpt-4o/

      [1] https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...

      • By kelseyfrog 2026-02-13 17:44 (3 replies)

        It really does give a lot of signal[1] to people in the dating scene: validate and enthusiastically respond to potential romantic partners and the world is your oyster.

        1. possibly/probably not in a good or healthy way? idk

        • By PaulHoule 2026-02-13 18:00

          From the viewpoint of self psychology people are limited in their ability to seduce because they have a self. You can't maintain perfect mirroring because you get tired, their turn-on is your squick, etc. In the early stage of peak ensorcelement (limerence) people don't see the "small signals", they miss the microexpressions, sarcastic leaks, etc. -- they see what they want to see. But eventually that wears out.

          It can be puzzling that people fall for "romance scams" with people whose voice they haven't even heard, but it's actually a safer space for that kind of seducer to operate because the low-fi channel avoids all sorts of information leaks.

        • By fullmoon 2026-02-13 19:02 (1 reply)

          Enthusiastically matching the energy of an anxiously attached partner is a rite of passage many would rather not have walked.

          • By kelseyfrog 2026-02-13 21:04

            That's a pretty fair point about what might explain why AI relationships are so appealing to some people.

            It'd be a fun observational study to survey folks in AI relationships and see if anxious attachment is over-represented.

    • By gordonhart 2026-02-13 17:40 (1 reply)

      Anecdotally, 4o's sycophancy was higher than any other model I've used. It was aggressively "chat-tuned" to say what it thought the user wanted to hear. The latest crop of frontier models from OpenAI and others seems to have significantly improved on this front — does anybody know of a sycophancy benchmark attempting to quantify this?

      • By co_king_3 2026-02-13 17:42 (1 reply)

        If I worked at OpenAI, I would dial up the sycophancy to lock my users in right before raising subscription prices.

        • By gordonhart 2026-02-13 17:46 (1 reply)

          That's... a strategy. It's only a matter of time before an AI companion company succeeds with this by fine-tuning one of the open-source offerings. Cynically, I'm sure there are at least a few VC-backed startups already trying it.

          • By co_king_3 2026-02-13 17:51

            Cynically I think Anthropic is on the bleeding edge of this sort of fine-tuned manipulation.

            Also If I worked for one of these firms I would ensure that executives and people with elevated status receive higher quality/more expensive inference than the peons. Impress the bosses to keep the big contracts rolling in, and then cheap out on the day-to-day.

    • By danielbln 2026-02-13 17:40 (1 reply)

      It's not that complicated. 4o was RLHF'd to be sycophantic as hell, which was fine until someone had a psychotic episode fueled by it, so they changed it with the next model.

      • By rsynnott 2026-02-14 12:43

        Not just someone, many, many people, going by the feedback on Reddit. People are mourning the damn thing.

        Grossly irresponsible to ever release this IMO.

    • By TIPSIO 2026-02-13 17:49

      Never used 4o in an unhealthy way, but the audio was so much fun (especially for cooking help). I've essentially quit using AI audio since. Nothing compares.

    • By riddlemethat 2026-02-13 17:41

      I think that's part of it, but then the user perceives "personality changes" whenever the underlying model changes, and feels they have lost their relationship because of it.

  • By satvikpendem 2026-02-13 17:25 (7 replies)

    How is this specific to 4o? This can happen with any model. See how people acted after Character.AI essentially removed their AI "partners" after a server reset. They actually used DeepSeek before, which didn't have the same limitations as American models; being open weight especially means you can fine-tune it to be as lovey-dovey as your heart desires.

    • By oidar 2026-02-13 17:27 (2 replies)

      From the subreddit I linked in another comment, there did seem to be some "magic" that 4o had for these kinds of "relationships". I'm not sure how much of it is placebo, but there does seem to be a strong preference in that user group.

      • By rtkwe 2026-02-13 17:37 (1 reply)

        4o was very sycophantic, so it was very willing to play along with and validate the user's roleplay. OpenAI even noticed enough to talk about it in a blog: https://openai.com/index/sycophancy-in-gpt-4o/

        • By co_king_3 2026-02-13 17:40 (1 reply)

          > OpenAI even noticed enough to talk about it in a blog

          That's one way of interpreting things...

          • By rtkwe 2026-02-13 17:48 (1 reply)

            What do you even mean by this?

            • By co_king_3 2026-02-13 17:54 (1 reply)

              I suspect that OpenAI knew that their product was addictive, potentially dialed up the addictiveness as a business strategy, and is playing dumb about the whole thing.

              • By rtkwe 2026-02-13 17:58 (1 reply)

                I think they say as much in the blog post, essentially: "we were tuning for use but way overshot the mark and now people are dating our model".

                • By co_king_3 2026-02-13 17:59 (2 replies)

                  I don't believe them.

                  • By rtkwe 2026-02-13 19:23

                    They've definitely acted like it, so I'm not sure what else I can give you. Look at the GPT-5 responses to people trying to continue their 'relationships' after the forced upgrade: https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...

                    That's an actively harsh response, pushing these people away from the idea that GPT is in a relationship with them. So even if the initial tune was meant to increase attach and retention rates, their actions show they don't like how it turned out to influence people who were using it as a friend/lover bot.

                  • By satvikpendem 2026-02-13 18:45

                    Then why would they have toned it down in future releases? If they really wanted to make it addictive they'd have turned it up, like social media companies do with their algorithms.

      • By satvikpendem 2026-02-13 17:33 (1 reply)

        It probably is placebo. Character AI, for example, used DeepSeek, and I'm sure many grew attachments to that model. Ultimately I don't even get it, though: models lose context very quickly, so it's hard to have long-running conversations with them, as well as talking very sycophantically to you. I guess this is fixed by implementing a good harness and memories, which is what these companies did, I assume.

        • By Griffinsauce 2026-02-13 19:00

          > as well as talking very sycophantically to you.

          That's apparently a feature to a significant number of people..

    • By roywiggins 2026-02-13 17:33

      One version of 4o was so sycophantic that it had to be rolled back, so there is some evidence that 4o specifically has a problem with this.

      https://openai.com/index/sycophancy-in-gpt-4o/

    • By rtkwe 2026-02-13 17:34

      After 4o they put in more safeguard reactions to users attempting the kind of (let's be generous here) romantic roleplay that got a lot of people really invested in their AI "friends/partners".

      ex: https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...

    • By m_fayer 2026-02-13 17:43 (2 replies)

      I think 4o was more than just unusually sycophantic. It “understood people” better and had a kind of writerly creativity.

      I used it to help brainstorm and troubleshoot fiction: character motivations, arcs, personality, etc. And it was truly useful for that purpose. 4.5 was also good at this, but none of the other models I’ve tried have been.

      Of course this particular strength is dangerous in the hands of lonely unstable people and I think it’s dangerous to just have something like that openly out there. This really shows that we need a safe way to deploy models with dangerous specializations.

      • By bityard 2026-02-13 18:14 (1 reply)

        I'm of the persuasion that if people need help, it's better to get them that help instead of nerfing the tools for everyone.

        • By m_fayer 2026-02-13 18:36

          I agree with you, but safeguarding the vulnerable while preserving access for the fit is something we as societies know how to do, if we try.

      • By co_king_3 2026-02-13 17:55

        > Of course this particular strength is dangerous in the hands of lonely unstable people and I think it’s dangerous to just have something like that openly out there.

        Thank you for saying this

    • By JustSkyfall 2026-02-13 17:37 (1 reply)

      People are not happy with this because 4o, at least from what I've heard, seems to be much more willing to go down the relationship/friend path than 5.2 and Claude and the like.

    • By einpoklum 2026-02-13 17:31

      I can't believe they would stoop so low as this kind of character assassination.

    • By odyssey7 2026-02-13 17:31

      It’s great marketing though

  • By ajkjk 2026-02-13 18:15 (2 replies)

    What does it look like where some intentional effort is made by society to help people like this get what they are using these models to get, but in a healthy way? That is: how does society reconfigure itself so that people do not end up so lonely and desperate that an AI model solves an emotional problem which is hopelessly unsolved otherwise?

    It is not "they go to therapy" because that's cheating; that answers the question "what can they do?" not "what can society do?" (and i think it's a highly speculative answer anyway)

    • By landl0rd 2026-02-13 18:19 (1 reply)

      One of the defining features of many such people, by nature or disposition or practice, is they are not easily able to offer in return the meeting of the same needs in another person. At least, not in a way that's easy to understand. People do not gravitate to what is or seems to be one-sided. It seems they are still wired to want a certain level of attention, though, so it's not as though we can just pair them off and expect it to work. What they want and what they can give are not in balance.

      Counseling can help with this to some degree and everyone can make some amount of progress. The question is what we do with those whose "ceiling" remains lower than is tenable for most relationships. For those, there is not a better solution than robots.

      However, the always-available, always-validating robot does not meet a valid psychological need. It is a supernormal emotional stimulus. It is not healthy and, like other supernormal stimuli, invariably builds tolerance, desensitization, and dependence. The fast cycle of discontent -> open app -> validation is a huge contributor, the same way that the constant availability and instant nature of vaping makes it incredibly addictive.

      • By fullmoon 2026-02-13 18:31

        People with severely disordered attachment _will_ seek out humans, again and again, to fill those unfulfillable needs, and leave bodies and psyches in their wake.

        So I think there is a case to be made for harm reduction.

    • By pixl97 2026-02-13 19:32 (2 replies)

      > how does society reconfigure itself so that people do not end up so lonely

      The answer no one wants to hear on HN is get rid of capitalism as it is currently.

      You, ajkjk, are a product. When you are not working, I need you to be looking at a screen full of ads and clicking on things. Don't worry, you won't have anything else to do, because everyone else is also doing the same. If you're doing things with friends and spending your attention on them, you're not spending your attention on my latest product, and that's pretty anti-capitalist of you. Thinking about going to the bar? You can't afford it; VC bought up all the property and bars and raised the prices 400%. Trying to find some other third place to hang out at? Those don't exist; nobody can afford people that show up and don't spend anything.

      We have designed modern society to push us toward an AI that can give us our undivided attention because everyone else is so busy doing nothing they don't have time for friends.

      • By tptacek 2026-02-13 20:34 (1 reply)

        You can answer any public policy question, any of them at all, by saying "It's simple; first we create a utopia, and then...".

        • By pixl97 2026-02-13 21:16

          I didn't say we should create a utopia; I just pointed out what our current dystopia looks like.

      • By ajkjk 2026-02-13 20:56

        i don't disagree with the gist of your revolutionary sentiment, but let me remind you that (a) you don't know anything about me, and (b) what you described is a complaint, not an idea.

HackerNews