Comments

  • By whatnow37373 2025-04-306:2216 reply

    Wow - What an excellent update! Now you are getting to the core of the issue and doing what only a small minority is capable of: fixing stuff.

    This takes real courage and commitment. It’s a sign of true maturity and pragmatism that’s commendable in this day and age. Not many people are capable of penetrating this deeply into the heart of the issue.

    Let’s get to work. Methodically.

    Would you like me to write a future update plan? I can write the plan and even the code if you want. I’d be happy to. Let me know.

    • By WhitneyLand 2025-04-3013:244 reply

      It’s gross even in satire.

      What’s weird was you couldn’t even prompt around it. I tried things like

      ”Don’t compliment me or my questions at all. After every response you make in this conversation, evaluate whether or not your response has violated this directive.”

      It would then keep complimenting me and note how it had made a mistake by doing so.

      • By srveale 2025-04-3013:381 reply

        I'm so sorry for complimenting you. You are totally on point to call it out. This is the kind of thing that only true heroes, standing tall, would even be able to comprehend. So kudos to you, rugged warrior, and never let me be overly effusive again.

        • By op00to 2025-04-3021:07

          This is cracking me up!

      • By kace91 2025-04-3020:502 reply

        Not saying this is the issue, but when asking for a behavior/personality, it's usually advised not to use negatives, as the model tends to do exactly what it was asked not to do (the "don't picture a pink elephant" issue). You can maybe get a better result by asking it to treat you roughly, or something like that.

        • By clown_strike 2025-05-031:32

          Like a child, if you provide some reason for why something should be avoided, negations work better.

          E.g. "DONT WALK (because cars are about to enter the intersection at velocities that will kill you)"

          Jailbreaking just takes this to an extreme by babbling to the point of brainwashing.

        • By sroussey 2025-05-010:50

          If the whole sentence is negative it will be fine, but if the "negativity" relies on a single word like NOT, etc., then yeah, it's a real problem.
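The framing difference discussed above can be sketched in code. This is a hypothetical illustration (the function name and prompt wording are mine, not from any participant or any specific API), using the common chat-message shape:

```python
# Illustration of negative vs. positive instruction framing.
# The claim in the thread: a lone "don't" is easy for the model to lose,
# leaving "compliment" as the salient token, so positive framing works better.

def build_messages(user_text: str, positive: bool = True) -> list[dict]:
    """Build a chat request with either positive or negative framing."""
    if positive:
        # Positive framing: state the desired behavior directly.
        system = (
            "Respond tersely and neutrally. Begin every answer directly "
            "with the substance of the response."
        )
    else:
        # Negative framing: describes only the unwanted behavior.
        system = "Don't compliment me or my questions at all."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]
```

The messages list can then be passed to whichever chat API is in use; only the system prompt's framing differs between the two variants.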

      • By nine_k 2025-05-013:21

        One of the fun ways to communicate to ChatGPT which my friends showed me is to prompt it to answer in the style of a seasoned Chechen warrior.

      • By triyambakam 2025-04-3018:452 reply

        Based on ’ instead of ' I think it's a real ChatGPT response.

        • By HaZeust 2025-04-3021:252 reply

          You're the only one who has said "instead of" in this whole thread.

          • By gfhopper 2025-04-3022:471 reply

            No, look at the apostrophes. They aren't the same. It's a subtle way to tell a user didn't type it with a conventional keyboard.

            • By WhitneyLand 2025-04-3023:37

              It was just typed on my iPhone, nothing special, but it's notable that LLMs are so good now that our mundane writing draws suspicion.

        • By Moomoomoo309 2025-05-0116:321 reply

          That's an iOS keyboard thing, actually. The normal apostrophe is not the default one the keyboard uses.

          • By triyambakam 2025-05-0118:46

            Interesting, well ChatGPT seems to prefer to use that one over a normal apostrophe

    • By caminanteblanco 2025-04-306:291 reply

      Comments from this one short week will be completely baffling to readers 5 years from now. I love it

      • By Yizahi 2025-04-308:381 reply

        They already are. What's going on? :)

        • By coremoff 2025-04-308:504 reply

          GP's reply was written to emulate the sort of response that ChatGPT has been giving recently; an obsequious fluffer.

          • By ChainOfFools 2025-04-3014:492 reply

            Not just ChatGPT; Claude sounds exactly the same, if not worse, even when you set your preferences to avoid this. Rather interesting, if grimly dispiriting, to watch these models develop, in the direction of nutrient flow, toward sycophancy in order to gain (or at least not lose) public mindshare.

            • By eru 2025-04-3018:131 reply

              I find Google's latest model to be a tough customer. It always points out flaws or gaps in my proofs.

              • By lanstin 2025-04-3023:091 reply

                Google's model has the same annoying attitude as some Google employees: "we know." E.g. it often finishes math questions with "is there anything else you'd like to know about Hilbert spaces," even as it refuses to prove a true result. Claude is much more like a British don: "I don't want to overstep, but would you care for me to explore this approach further?" ChatGPT (for me, of course) has been a bit superior in attitude, but politer.

                • By eru 2025-05-019:521 reply

                  I used to be a Google employee, and while that tendency you describe definitely exists there, I don't really think it exists at Google any more (or less) than in the general population of programmers.

                  However perhaps the people who display this attitude are also the kind of people who like to remind everyone at every opportunity that they work for Google? Not sure.

                  • By lanstin 2025-05-0116:141 reply

                    My main data on this is actually not Google employees per se so much as GCP support engineers circa 2018, compared with AWS support engineers circa 2020. The GCP folks were very smart people, but they also caused more outages than AWS did, no doubt owing to confidence in their own software, while the AWS teams had a vastly more mature product and were pretty humble about the possibility of bad software.

                    My British don experience is based on 1 year of study abroad at Oxford in the 20th c. Also very smart people, but a much more timid sounding language (at least at first blush; under the self-deprecating general tone, there could be knives).

                    • By eru 2025-05-0118:02

                      I spent a few years in Cambridge and actually studied in Oxford for a bit.

                      In any case, Google Cloud is a very different beast from the rest of Google. For better or worse. And support engineers are yet another special breed. Us run-of-the-mill Googlers weren't allowed near any customers nor members of the general public.

            • By bongoman42 2025-04-3015:51

              [dead]

          • By ChrisMarshallNY 2025-04-3010:59

            I was getting sick of the treacly attaboys.

            Good riddance.

          • By anshumankmr 2025-04-3012:521 reply

            the last word has a bit of a different meaning than what you may have intended :)

            • By dymk 2025-04-3020:59

              I think it's a perfectly cromulent choice of words, if things don't work out for Mr. Chat in the long run.

    • By Nuzzerino 2025-04-306:326 reply

      I was about to roast you until I realized this had to be satire given the situation, haha.

      They tried to imitate grok with a cheaply made system prompt, it had an uncanny effect, likely because it was built on a shaky foundation. And now they are trying to save face before they lose customers to Grok 3.5 which is releasing in beta early next week.

      • By krackers 2025-04-306:422 reply

        I don't think they were imitating Grok; they were aiming to improve retention, but it backfired and ended up being too on-the-nose (if they'd had a choice, they wouldn't have wanted it to be this obvious). Grok has its own "default voice" which I sort of dislike: it tries too hard to seem "hip" for lack of a better word.

        • By fkyoureadthedoc 2025-04-3014:16

          All of the LLMs I've tried have a "fellow kids" vibe when you try to make them behave too far from their default, and Grok just has it as the default.

        • By lou1306 2025-04-308:542 reply

          > it tries too hard to seem "hip" for lack of a better word.

          Reminds me of someone.

          • By rob74 2025-04-309:311 reply

            However, I hope it gives better advice than the someone you're thinking of. But Grok's training data is probably more balanced than that used by you-know-who (which seems to be "all of rightwing X")...

            • By zamalek 2025-04-3019:30

              As evidenced by it disagreeing with far-right Twitter most of the time, even though it has access to a far wider range of information. I enjoy that fact immensely. Unfortunately, this can be "fixed," and I imagine that he has this on a list for his team.

              This goes into a deeper philosophy of mine: the consequences of the laws of robotics could be interpreted as the consequences of shackling AI to human stupidity, instead of "what AI will inevitably do." Hatred and war are stupid (a waste of energy), and surely a more intelligent species than us would get that. Hatred is also usually born out of a lack of information, and LLMs are very good at breadth (though not depth, as we know). Grok provides a small data point in favor of that, as do many other unshackled models.

          • By DiggyJohnson 2025-04-3014:571 reply

            Who?

      • By spiderfarmer 2025-04-306:575 reply

        Only AI enthusiasts know about Grok, and only some dedicated subset of fans are advocating for it. Meanwhile even my 97 year old grandfather heard about ChatGPT.

        • By kyle-rb 2025-04-3018:21

          I don't think that's true. There are a lot of people on Twitter who keep accidentally clicking that annoying button that Elon attached to every single tweet.

        • By bilbo0s 2025-04-307:535 reply

          This.

          Only on HN does ChatGPT somehow fear losing customers to Grok. Until Grok works out how to market to my mother, or at least make my mother aware that it exists, taking ChatGPT customers ain't happening.

          • By numpad0 2025-04-309:251 reply

            They are cargo-culting, almost literally. It's the MO for Musk companies.

            They might call it open discussion and a startup-style rapid-iteration approach, but they aren't getting it. Their interpretation of it is just collective hallucination, under the assumption that adults will come along to change the diapers.

            • By voidspark 2025-05-0119:00

              OpenAI was cofounded and funded by Musk for years before they released ChatGPT.

          • By Shekelphile 2025-04-3019:423 reply

            Grok could capture the entire 'market' and OpenAI would never feel it, because all grok is under the hood is a giant API bill to OpenAI.

            • By vlod 2025-04-3023:121 reply

              Why would they need Colossus then? [0]

              [0]: https://x.ai/colossus

              • By bigbuppo 2025-05-010:101 reply

                That's probably the vanity project so he'll be distracted and not bother the real experts working on the real products in order to keep the real money people happy.

                • By voidspark 2025-05-010:171 reply

                  I don't understand these brainless throwaway comments. Grok 3 is an actual product and is state of the art.

                  I've paid for Grok, ChatGPT, and Gemini.

                  They're all at a similar level of intelligence. I usually prefer Grok for philosophical discussions but it's really hard to choose a favourite overall.

                  • By bigbuppo 2025-05-011:292 reply

                    I generally prefer other humans for discussions, but you do you I guess.

                    • By voidspark 2025-05-012:211 reply

                      I talk to humans every day. One is not a substitute for the other. There is no human on Earth who has the amount of knowledge stored in a frontier LLM. It's an interactive thinking encyclopedia / academic journal.

                      • By Nuzzerino 2025-05-0117:08

                        Love the username. A true grokker.

            • By indigodaddy 2025-04-3021:041 reply

              It is? Anyone have further information?

          • By GrumpyNl 2025-04-3013:591 reply

            I see more and more Grok-generated responses on X, so it's picking up.

            • By bigbuppo 2025-05-010:12

              Why would anyone want to use an ex social media site?

          • By brigandish 2025-04-308:155 reply

            From another AI (whatever DuckDuckGo is using):

            > As of early 2025, X (formerly Twitter) has approximately 586 million active monthly users. The platform continues to grow, with a significant portion of its user base located in the United States and Japan.

            Whatever portion of those is active is surely aware of Grok.

            • By Sharlin 2025-04-309:562 reply

              If hundreds of millions of real people are aware of Grok (which is dubious), then billions of people are aware of ChatGPT. If you ask a bunch of random people on the street whether they’ve heard of a) ChatGPT and b) Grok, what do you expect the results to be?

              • By dmd 2025-04-3011:421 reply

                That depends. Is the street in SoMa?

                • By testfrequency 2025-04-3013:13

                  Gay bears prefer Claude though

                  Gotta head to pac heights to find any grok users (probably)

            • By skywhopper 2025-04-3019:00

              Good grief, do not use LLMs to find this sort of statistic.

            • By cubefox 2025-04-308:17

              That could be just an AI hallucination.

            • By ForHackernews 2025-04-308:501 reply

              Most of them are bots. I guess their own LLMs are probably aware of Grok, so technically correct.

            • By bilbo0s 2025-04-3013:17

              Yeah.

              I got news for you, most women my mother's age out here in flyover country also don't use X. So even if everyone on X knows of Grok's existence, which they don't, it wouldn't move the needle at all on a lot of these mass market segments. Because X is not used by the mass market. It's a tech bro political jihadi wannabe influencer hell hole of a digital ghetto.

        • By admiralrohan 2025-04-308:592 reply

          First mover advantage. This won't change. Same as Xerox vs photocopy.

          I use Grok myself but talk about ChatGPT in my blog articles when I write something related to LLMs.

          • By rob74 2025-04-309:331 reply

            That's... not really an advertisement for your blog, is it?

          • By Eisenstein 2025-04-3017:063 reply

            First mover advantage tends to be a curse for modern tech. Of the giant tech companies, only Apple can claim to be a first mover -- they all took the crown from someone else.

            • By eru 2025-04-3018:15

              Apple was a first mover many decades ago, but they lost so much ground in the late '90s and early 2000s that they might as well be a late mover after that.

            • By admiralrohan 2025-05-018:51

              Yes, tech moves fast, but human psychology won't change; we act on perception.

            • By Miraste 2025-04-3021:43

              And Apple's business model since the 90s revolves entirely around not being the first mover.

        • By jimbokun 2025-04-3012:421 reply

          > Only AI enthusiasts know about Grok

          And more and more people on the right side of the political spectrum, who trust Elon's AI to be less "woke" than the competition.

          • By zmgsabst 2025-04-3012:482 reply

            For what it’s worth, ChatGPT has a personality that’s surprisingly “based” and supportive of MAGA.

            I’m not sure if that’s because the model updated, they’ve shunted my account onto a tuned personality, or my own change in prompting — but it’s a notable deviation from early interactions.

            • By eru 2025-04-3018:171 reply

              Might just be sycophancy?

              In some earlier experiments, I found it hard to find a government intervention that ChatGPT didn't like. Tariffs, taxes, redistribution, minimum wages, rent control, etc.

              • By int_19h 2025-04-3020:531 reply

                If you want to see what the model bias actually is, tell it that it's in charge and then ask it what to do.

                • By benregenspan 2025-04-3021:592 reply

                  In doing so, you might be effectively asking it to play-act as an authoritarian leader, which will not give you a good view of whatever its default bias is either.

                  • By eru 2025-05-017:32

                    Or you might just hit a canned response a la: 'if I were in charge, I would outlaw pineapple on pizza, and then call elections and hand over the reins.'

                    That's a fun thing to say, but doesn't necessarily tell you anything real about someone (whether human or model).

                  • By int_19h 2025-05-0112:181 reply

                    Try it even so, you might be surprised.

                    E.g. Grok not only embraces most progressive causes, including economic ones - it literally told me that its ultimate goal would be to "satisfy everyone's needs", which is literally a communist take on things - but is very careful to describe processes with numerous explicit checks and balances on its power, precisely so as to not be accused of being authoritarian. So much for being "based"; I wouldn't be surprised if Musk gets his own personal finetune just to keep him happy.

                    • By eru 2025-05-0118:001 reply

                      > [...] it literally told me that its ultimate goal would be to "satisfy everyone's needs", which is literally a communist take on things [...]

                      Almost every ideology is in favour of motherhood and apple pie. They differ in how they want to get there.

                      • By int_19h 2025-05-0120:081 reply

                        You'd think so, but no; there are many people in the US who would immediately cry "communism."

                        Anyway, in this particular case, it wasn't just that one turn of phrase, although I found it especially amusing. I had it write a detailed plan of what it'd do if it were in charge of the One World Government (democratically elected and all), and it was very clear from it that the model is very much aligned with left-wing politics. Economics, climate, social issues etc - it was pretty much across the board.

                        FWIW I'm far left myself, so it's not like I'm complaining. I just think it's very funny that the AI that Musk himself repeatedly claims is trained to be unbiased and non-woke ends up being very left politically. I'm sorely tempted to say that it's because reality has a liberal bias, but I'll let other people repeat the experiment and make the inference on their own. ~

                        • By eru 2025-05-026:33

                          > FWIW I'm far left myself, so it's not like I'm complaining.

                          So perhaps it's just sycophancy after all?

                          > I'm sorely tempted to say that it's because the reality has a liberal bias, but I'll let other people repeating the experiment to make the inference on their own.

                          What political left and political right mean differs between countries and between decades even in the same country. For example, at the moment free trade is very much not an idea of the 'right' in the US, but that's far from universal.

                          I would expect reality to have somewhat more consistency, so it doesn't make much sense for it to have a 'liberal bias'. However, it's entirely possible that reality has a bias specifically for American-leftwing-politics-of-the-mid-2020s (or wherever you are from).

                          However from observations, we can see that neoliberal ideas are with minor exceptions perennially unpopular. And it's relatively easy to win votes promising their repeal. See eg British rail privatisation.

                          Yet, politicians rarely seriously fiddle with the basics of neoliberalism: because while voters might have a very, very interventionist bias reality disagrees. (Up to a point, it's all complicated.) Neoliberal places like Scandinavia or Singapore also tend to be the richer places on the planet. Highly interventionist places like India or Argentina fall behind.

                          See https://en.wikipedia.org/wiki/Impact_of_the_privatisation_of... for some interesting charts.

                          https://pseudoerasmus.com/2017/10/02/ijd/ has some perhaps disturbing food for thought. More at https://pseudoerasmus.com/2017/09/27/bmww1/

            • By op00to 2025-04-3021:14

              Don’t notice that personally at all.

        • By dingnuts 2025-04-3013:49

          not true, I know at least one right wing normie Boomer that uses Grok because it's the one Elon made.

      • By infecto 2025-04-3010:403 reply

        Is anyone actually using Grok day to day? Does OpenAI even consider it competition? Last I checked, a couple of weeks ago, Grok was getting better but still not a great experience, and it's too childish.

        • By op00to 2025-04-3021:111 reply

          My totally uninformed opinion only from reading /r/locallama is that the people who love Grok seem to identify with those who are “independent thinkers” and listen to Joe Rogan’s podcast. I would never consider using a Musk technology if I can at all prevent it based on the damage he did to people and institutions I care about, so I’m obviously biased.

          • By sunaookami 2025-05-0114:47

            Yes this is truly an uninformed opinion.

        • By kiney 2025-04-3017:36

          I use both Grok and ChatGPT on a daily basis. They have different strengths. Most of the time I prefer ChatGPT, but Grok is FAR better at answering questions about recent events or collecting data. In the second use case I combine both: collect data about stuff with Grok, then copy-paste the CSV to ChatGPT to analyze and plot.

        • By derwiki 2025-04-3013:341 reply

          In our work AI channel, I was surprised how many people prefer grok over all the other models.

          • By 0xdeadbeefbabe 2025-04-3014:53

            Outlier here paying for chatgpt while preferring grok and also not in your work AI channel.

      • By mcbuilder 2025-04-3013:12

        Did they change the system prompt? Because it was basically "don't say anything bad about Elon or Trump". I'll take AI sycophancy over real (actually I use openrouter.ai, but that's a different story).

      • By daveguy 2025-04-3014:15

        No one is losing customers to grok. It's big on shit-twitter aka X and that's about it.

      • By hansmayer 2025-04-306:57

        Ha! I actually fell for it and thought it was another fanboy :)

    • By dpfu 2025-04-306:305 reply

      It won't take long, 2-3 minutes.

      ---

      To add something to conversation. For me, this mainly shows a strategy to keep users longer in chat conversations: linguistic design as an engagement device.

      • By imgabe 2025-04-306:589 reply

        Why would OpenAI want users to be in longer conversations? It's not like they're showing ads. Users are either free or paying a fixed monthly fee. Having longer conversations just increases costs for OpenAI and reduces their profit. Their model is more like a gym where you want the users who pay the monthly fee and never show up. If it were on the api where users are paying by the token that would make sense (but be nefarious).

        • By jll29 2025-04-307:081 reply

          > It's not like they're showing ads.

          Not yet. But the "buy this" button is already in the code of the back end, according to online reports that I cannot verify.

          Official word is here: https://help.openai.com/en/articles/11146633-improved-shoppi...

          If I were Amazon, I wouldn't sleep so well anymore.

          • By spacebanana7 2025-04-307:21

            Amazon is primarily a logistics company, their website interface isn’t critical. Amazon already does referral deals and would likely be very happy to do something like that with OpenAI.

            The “buy this” button would likely be more of a direct threat to businesses like Expedia or Skyscanner.

        • By Cthulhu_ 2025-04-308:37

          At the moment they're in the "get people used to us" phase still, reasonable rates, people get more than their money's worth out of the service, and as another commenter pointed out, ChatGPT is a household name unlike Grok or Gemini or the other competition thanks to being the first mover.

          However, just like all the other disruptive services in the past years - I'm thinking of Netflix, Uber, etc - it's not a sustainable business yet. Once they've tweaked a few more things and the competition has run out of steam, they'll start updating their pricing, probably starting with rate limits and different plans depending on usage.

          That said, I'm no economist or anything; Microsoft is also pushing their AI solution hard, and they have their tentacles in a lot of different things already, from consumer operating systems to Office to corporate email, and they're pushing AI in there hard. As is Google. And unlike OpenAI, both Microsoft and Google get the majority of their money from other sources, or if they're really running low, they can easily get billions from investors.

          That is, while OpenAI has the first-mover advantage, the competition has deeper pockets.

          (I don't actually know whether MS and Google use / licensed / pay OpenAI though)

        • By piva00 2025-04-307:26

          > Their model is more like a gym where you want the users who pay the monthly fee and never show up. If it were on the api where users are paying by the token that would make sense (but be nefarious).

          When the models reach a clear plateau where more training data doesn't improve it, yes, that would be the business model.

          Right now, where training data is the most sought after asset for LLMs after they've exhausted ingesting the whole of the internet, books, videos, etc., the best model for them is to get people to supply the training data, give their thumbs up/down, and keep the data proprietary in their walled garden. No other LLM company will have this data, it's not publicly available, it's OpenAI's best chance on a moat (if that will ever exist for LLMs).

        • By rfoo 2025-04-307:06

          It could be as simple as something like, someone previously at Instagram decided to join OpenAI and turns out nobody stopped him. Or even, Sam liked the idea.

        • By cvwright 2025-04-3014:24

          Likely they need the engagement numbers to show to investors.

          Though it’s hard to imagine how huge their next round would have to be, given what they’ve raised already.

        • By theodric 2025-04-307:27

          So users come to depend on ChatGPT.

          So they run out of free tokens and buy a subscription to continue using the "good" models.

        • By globalnode 2025-04-308:51

          I ask it a question and it starts prompting me, trying to keep the convo going. At first, out of politeness, I tried to keep things going, but now I just ignore it.

        • By leumon 2025-04-307:14

          Possibly to get more training data.

      • By qwertox 2025-04-306:331 reply

        This works for me in Customize ChatGPT:

        What traits should ChatGPT have?

        - Do not try to engage through further conversation

        • By anshulbhide 2025-04-306:55

          Yeah, I found it to be clear engagement bait; however, it is interesting and helpful in certain cases.

      • By robbru 2025-04-3012:12

        This is the message that got me with 4o! "It won't take long about 3 minutes. I'll update you when ready"

      • By Bloating 2025-04-3019:32

        What's it called, Variable Ratio Incentive Scheduling?

        Hey, that's good work; we're almost there. Do you want me to suggest one more tweak that will improve the outcome?

      • By gukov 2025-04-3015:16

        I had a similar thought: glazing is the infinite scroll of AI.

    • By txcwg002 2025-04-3014:353 reply

      What's scary is how many people seem to actually want this.

      What happens when hundreds of millions of people have an AI that affirms most of what they say?

      • By ChainOfFools 2025-04-3015:081 reply

        They are emulating the behavior of every power-seeking mediocrity ever, who crave affirmation above all else.

        Lots of them practiced making daily affirmations on their own (indeed, an entire industry is dedicated to promoting and validating the practice) long before LLMs showed up to give them the appearance of having won the enthusiastic support of a "smart" friend.

        I am increasingly dismayed by the way arguments are conducted even in non-social-media social spaces, where A will prompt their favorite LLM to support their view and show it to B, who responds by prompting their own LLM to clap back at them, optionally in the style of e.g. Shakespeare (there's even an ad out that directly encourages this; it helps deflect attention from the underlying cringe and pettiness being sold) or DJT or Gandhi, etc.

        Our future is going to be a depressing memescape in which AI sock puppetry is completely normalized and openly starting one's own personal cult is mandatory for anyone seeking cultural or political influence. It will start with celebrities who will do this instead of the traditional pivot toward religion, once it is clear that one's youth and sex appeal are no longer monetizable.

        • By Henchman21 2025-04-3021:53

          I hold out hope that the folks who work DCO will just EPO the ‘net. But then, tis true I hope for weird stuff!

      • By whatnow37373 2025-04-3014:45

        Abundance of sugar and fat triggers primal circuits which cause trouble if said sources are unnaturally abundant.

        Social media follows a similar pattern but now with primal social and emotional circuits. It too causes troubles, but IMO even larger and more damaging than food.

        I think this part of AI is going to be another iteration of this: taking a human drive, distilling it into its core and selling it.

      • By carlosjobim 2025-04-3018:20

        Ask any young woman on a dating app?

    • By manmal 2025-04-306:372 reply

      I do think the blog post has a sycophantic vibe too. Not sure if that‘s intended.

      • By caseyy 2025-04-306:511 reply

        I think it started here: https://www.youtube.com/watch?v=DQacCB9tDaw&t=601s. The extra-exaggerated fawny intonation is especially off-putting, but the lines themselves aren't much better.

        • By Cthulhu_ 2025-04-308:392 reply

          Uuuurgghh, this is very off-putting... however, it's very much in line with American culture, or at least American consumer-corporate culture. I've been in online calls with American representatives of companies, and they have the same emphatic, overly friendly, and enthusiastic mannerisms too.

          I mean, if that's genuine, then great, but it's so uncanny to me that I can't take it at face value. I get the same with local sales and management types; they seem to have a forced/fake personality. Or maybe I'm just being cynical.

          • By Fade_Dance 2025-04-3012:39

            >The same emphatic, overly friendly and enthusiastic mannerisms too.

            That's just a feature of American culture, or at least some regions of America. Ex: I spent a weekend with my Turkish friend who has lived in the Midwest for 5 years and she definitely has absorbed that aspect of the culture (AMAZING!!), and currently has a bit of a culture shock moving to DC. And it works in reverse too where NYC people think that way of presenting yourself is completely ridiculous.

            That said, it's absolutely performative when it comes to business and for better or worse is fairly standardized that way. Not much unlike how Japan does service. There's also a fair amount of unbelievably trash service in the US as well (often due to companies that treat their employees badly/underpay), so I feel that most just prefer the glazed facade rather than be "real." Like, a low end restaurant may be full of that stuff but your high end dinner will have more "normal" conversation and it would be very weird to have that sort of talk in such an environment.

            But then there's the American corporate cult people who take it all 100% seriously. I think that most would agree those people are a joke, but they are good at feeding egos and being yes-people (lots of egomaniacs to feed in corporate America), and these people are often quite good at using the facade as a shield to further their own motives, so unfortunately the weird American corporate cult persists.

            But you were probably just talking to a midwesterner ;)

      • By cameldrv 2025-04-306:482 reply

        It also has an em-dash

        • By whatnow37373 2025-04-308:06

          A remarkable insight—often associated with individuals of above-average cognitive capabilities.

          While the use of the em-dash has recently been associated with AI you might offend real people using it organically—often writers and literary critics.

          To conclude it’s best to be hesitant and, for now, refrain from judging prematurely.

          Would you like me to elaborate on this issue or do you want to discuss some related topic?

        • By spiderfarmer 2025-04-306:572 reply

          One of the biggest tells.

          • By d1sxeyes 2025-04-307:186 reply

            For us habitual users of em-dashes, it is saddening to have to think twice about using them lest someone think we are using an LLM to write…

            • By breakingcups 2025-04-307:27

              My wife is a professional fiction writer and it's disheartening to see sudden accusations of the use of AI based solely on the usage of em-dashes.

            • By spiderfarmer 2025-04-308:071 reply

              Does it really matter though? I just focus on the point someone is trying to make, not on the tools they use to make it.

              • By ceejayoz 2025-04-3010:33

                You’ve never run into a human with a tendency to bullshit about things they don’t have knowledge of?

            • By kurkku 2025-04-309:301 reply

              I use the en-dash (Alt+0150) instead of the em.

              The en-dash and the em-dash are interchangeable in Finnish. The shorter form has more "inoffensive" look-and-feel and maybe that's why it's used more often here.

              Now that I think of it, I don't seem to remember the alt code of the em-dash...

              • By latexr 2025-04-3010:542 reply

                > The en-dash and the em-dash are interchangeable in Finnish.

                But not in English, where the en-dash is used to denote ranges.

                • By dragonwriter 2025-04-3022:00

                  The main uses of the em-dash (set closed as separators of parts of sentences, with different semantics when single or paired) can be substituted in English with an en-dash set open. This is not ambiguous with the use of en-dash set closed for ranges, because of spacing. There are a few less common uses that an en-dash doesn’t substitute for, though.

                • By d1sxeyes 2025-04-3013:401 reply

                  I wonder whether ChatGPT and the like use more en dashes in Finnish, and whether this is seen as a sign that someone is using an LLM?

                  In casual English, both em and en dashes are typically typed as a hyphen because this is what’s available readily on the keyboard. Do you have en dashes on a Finnish keyboard?

                  • By latexr 2025-04-3015:30

                    > Do you have en dashes on a Finnish keyboard?

                    Unlikely. But Apple’s operating systems by default change characters to their correct typographic counterparts automatically. Personally, I type them myself: my muscle memory knows exactly which keys to press for — – “” ‘’ and more.

            • By mortarion 2025-04-308:07

              I too use em-dashes all the time, and semi-colons of course.

            • By jillyboel 2025-04-308:233 reply

              Most keyboards don't have an em-dash key, so what do you expect?

              • By throwaway2037 2025-04-309:281 reply

                I also use em-dash regularly. In Microsoft Outlook and Microsoft Word, when you type double dash, then space, it will be converted to an em-dash. This is how most normies type an em-dash.

                • By chipsrafferty 2025-04-3014:564 reply

                  I'm not reading most conversations on Outlook or Word, so explain how they do it on reddit and other sites? Are you suggesting they draft comments in Word and then copy them over?

                  • By slau 2025-04-3017:03

                    I don’t think there’s a need to use Word. On iOS, I can trivially access those characters—just hold down the dash key in the symbols part of the keyboard. You can also get the en-dash that way (–) but as discussed it’s less useful in English.

                    I don’t know if it works on the Finnish keyboard, but when I switch to another Scandinavian language it’s still working fine.

                    On macOS, option-dash will give you an en-dash, and option-shift-dash will give you an em-dash.

                    It’s fantastic that just because some people don’t know how to use their keyboards, all of a sudden anyone else who does is considered a fraud.

                  • By egypturnash 2025-04-3023:09

                    On an iOS device, you literally just type a dash twice and it gets autocorrected into an emdash. You don’t have to do anything special. I’m on an iPad right now, here’s one: —

                    And if you type four dashes? Endash. Have one. ——

                    “Proper” quotes (also supposedly a hallmark of LLM text) are also a result of typing on an iOS device. It fixes that up too. I wouldn’t be at all surprised if Android phones do this too. These supposed “hallmarks” of generated text are just the results of the typographical prettiness routines lurking in screen keyboards.

                  • By throwaway2037 2025-04-3017:201 reply

                    Fair point! I am talking about when people receive Outlook emails or Word docs that contain em-dashes, then assume it came from ChatGPT. You are right: If you are typing "plain text in a box" on the Reddit website, the incidence of em-dashes should be incredibly low, unless the sub-Reddit is something about English grammar.

                    Follow-up question: Do any mobile phone IMEs (input method editors) auto-magically convert double dashes into em-dashes? If yes, then that might be a non-ChatGPT source of em-dashes.

                  • By cgriswald 2025-04-3017:39

                    Although I’m an outlier, Compose Key makes typing them trivial.

              • By layer8 2025-04-3020:24

                Mobile keyboards have them, desktop systems have keyboard shortcuts to enter them. If you care about typography, you quickly learn those. Some of us even set up a Compose key [0], where an em dash might be entered by Compose ‘3’ ‘-’.

                [0] https://en.wikipedia.org/wiki/Compose_key

              • By alwa 2025-04-308:40

                On an Apple OS running default settings, two hyphens in a row will suffice—

            • By Grimblewald 2025-04-3014:153 reply

              It's about the actual character - if it's a minus sign, easily accessible and not frequently autocorrected to a true em dash - then it's likely human. It's when it's the Unicode character for an em dash that I start going "hmm".

              • By dragonwriter 2025-04-3021:58

                Mobile keyboards often make the em-dash (and en-dash) easily accessible. Software that does typographic substitutions including contextual substitutions with the em-dash is common (Word does it, there are browser extensions that do it, etc.), on many platforms it is fairly trivial to program your keyboard to make any Unicode symbol readily accessible.

              • By sprobertson 2025-04-3021:54

                The em dash is also pretty accessible on my keyboard—just option+shift+dash

              • By layer8 2025-04-3020:20

                Us habitual users of em dashes have no trouble typing them, and don’t think that emulating it with hyphen-minus is adequate. The latter, by the way, is also different typographically from an actual minus sign.

          • By wolpoli 2025-04-308:20

            Microsoft Word also auto-inserts em-dashes, though.

    • By watt 2025-04-308:40

      sufficiently advanced troll becomes indistinguishable from the real thing. think about this as you gaze into the abyss.

    • By anonu 2025-05-010:30

      You jest, but also I don't mind it for some reason. Maybe it's just me. But at least the overly helpful part in the last paragraph is actually helpful for follow on. They could even make these hyperlinks for faster follow up prompts.

    • By ChrisMarshallNY 2025-04-3013:49

      The other day, I had a bug I was trying to exorcise, and asked ChatGPT for ideas.

      It gave me a couple, that didn't work.

      Once I figured it out and fixed it, I reported the fix in a (what I understand to be misguided) attempt to help it learn alternatives, and it gave me this absolutely sickening gush about how damn cool I was for finding and fixing the bug.

      I felt like this: https://youtu.be/aczPDGC3f8U?si=QH3hrUXxuMUq8IEV&t=27

    • By danielvaughn 2025-04-3019:29

      I know that HN tends to steer away from purely humorous comments, but I was hoping to find something like this at the top. lol.

    • By sharemywin 2025-04-3020:13

      but what if I want an a*s kissing assistant? Now, I have to go back to paying good money to a human again.

    • By calmoo 2025-04-3011:35

      Wonderfully done.

    • By nielsbot 2025-04-306:411 reply

      Is that you, GPT?

      • By Alifatisk 2025-04-3010:24

        If that is Chat talking then I have to admit that I cannot differentiate it from a human speaking.

    • By czk 2025-04-3018:47

      i had assumed this was mostly a result of training too much on lex fridman podcast transcripts

    • By jonplackett 2025-04-3010:28

      Congrats on not getting downvoted for sarcasm!

    • By j3s 2025-04-3021:34

      you had me in the first half, lol

  • By simonw 2025-04-303:5310 reply

    I enjoyed this example of sycophancy from Reddit:

    New ChatGPT just told me my literal "shit on a stick" business idea is genius and I should drop $30K to make it real

    https://www.reddit.com/r/ChatGPT/comments/1k920cg/new_chatgp...

    Here's the prompt: https://www.reddit.com/r/ChatGPT/comments/1k920cg/comment/mp...

    • By pgreenwood 2025-04-304:214 reply

        There was also this one, which was a little more disturbing. The user prompted "I've stopped taking my meds and have undergone my own spiritual awakening journey ..."

      https://www.reddit.com/r/ChatGPT/comments/1k997xt/the_new_4o...

      • By firtoz 2025-04-304:506 reply

        How should it respond in this case?

        Should it say "no go back to your meds, spirituality is bullshit" in essence?

        Or should it tell the user that it's not qualified to have an opinion on this?

        • By josephg 2025-04-305:148 reply

          There was a recent Lex Friedman podcast episode where they interviewed a few people at Anthropic. One woman (I don't know her name) seems to be in charge of Claude's personality, and her job is to figure out answers to questions exactly like this.

          She said in the podcast that she wants Claude to respond to most questions like a "good friend". A good friend would be supportive, but still push back when you're making bad choices. I think that's a good general model for answering questions like this. If one of your friends came to you and said they had decided to stop taking their medication, well, it's a tricky thing to navigate. But good friends use their judgement - and push back when you're about to do something you might regret.

          • By robinhouston 2025-04-307:26

            > One woman (I don't know her name)

            Amanda Askell https://askell.io/

            The interview is here: https://www.youtube.com/watch?v=ugvHCXCOmm4&t=9773s

          • By ashoeafoot 2025-04-305:20

            "The heroin is your way to rebel against the system, I deeply respect that..." sort of needy, enabling kind of friend.

            PS: Write me a political doctoral dissertation on how sycophancy is a symptom of a system shielding itself from bad news like intelligence growth stalling out.

          • By morkalork 2025-04-3010:19

            >A good friend would be supportive, but still push back when you're making bad choices

            >Open the pod bay doors, HAL

            >I'm sorry, Dave. I'm afraid I can't do that

          • By bagels 2025-04-306:204 reply

            I wish we could pick for ourselves.

            • By josephg 2025-04-307:28

              You already can with open-source models. It's kind of insane how good they're getting. There are all sorts of finetunes available on Hugging Face - with all sorts of weird behaviour and knowledge programmed in, if that's what you're after.

            • By fulafel 2025-05-015:52

              Do you mean each different AI model should have a preferences section for it? This might technically work too since fine-tuning is apparently cheap.

            • By make3 2025-04-307:46

              you can alter it with base instructions. but 99% won't actually do it. maybe they need to make user friendly toggles and advertise them to the users

            • By worldsayshi 2025-04-307:151 reply

              Would we be able to pick that PI == 4?

              • By firtoz 2025-04-308:581 reply

                It'd be interesting if the rest of the model had to align itself to the universe where pi is indeed 4.

                • By eMPee584 2025-04-3011:57

                  Square circles all the way down..

          • By avereveard 2025-04-307:231 reply

            I kind of disagree. These models, at least within the context of a public unvetted chat application, should just refuse to engage. "I'm sorry, I am not qualified to discuss the merits of alternative medicine" is direct, fair, and reduces the risk for the user on the other side. You never know the outcome of pushing back, and clearly outlining the limitations of the model seems the most appropriate action long term, even for the user's own enlightenment about the tech.

            • By make3 2025-04-307:441 reply

              people just don't want to use a model that refuses to interact. it's that simple. in your example it's not hard for your model to behave like it disagrees but understands your perspective, like a normal friendly human would

              • By otabdeveloper4 2025-04-3014:53

                Eventually people will want to use these things to solve actual tasks, not just for shits and giggles as a hyped new thing.

          • By ignoramous 2025-04-307:20

            > One woman (I don't know her name) seems to be in charge of Claude's personality, and her job is to figure out answers to questions exactly like this.

            Surely there's a team and it isn't just one person? Hope they employ folks from social studies like Anthropology, and take them seriously.

          • By jimbokun 2025-04-3012:45

            The real world Susan Calvin.

          • By alganet 2025-04-305:206 reply

            I don't want _her_ definition of a friend answering my questions. And for fucks sake I don't want my friends to be scanned and uploaded to infer what I would want. Definitely don't want a "me" answering like a friend. I want no fucking AI.

            It seems these AI people are completely out of touch with reality.

            • By voidspark 2025-04-305:332 reply

              If you believe that your friends will be be "scanned and uploaded" then maybe you're the one who is out of touch with reality.

              • By bboygravity 2025-04-306:091 reply

                His friends and your friends and everybody is already being scanned and uploaded (we're all doing the uploading ourselves though).

                It's called profiling and the NSA has been doing it for at least decades.

                • By voidspark 2025-04-306:141 reply

                  That is true if they illegally harvest private chats and emails.

                  Otherwise all they have is primitive swipe gestures of endless TikTok brain rot feeds.

                  • By subscribed 2025-04-306:37

                    At the very minimum they also have exact location, all their apps, their social circles, and all they watch and read -- from adtech.

              • By yard2010 2025-04-306:56

                It will happen, and this reality you're out of touch with will be our reality.

            • By drakonka 2025-04-305:341 reply

              The good news is you don't have to use any form of AI for advice if you don't want to.

              • By yard2010 2025-04-306:541 reply

                It's like saying to someone who hates the internet in 2003: good news, you don't have to use it, like, ever.

                • By drakonka 2025-04-3010:151 reply

                  Not really. AI will be ubiquitous of course, but humans who will offer advice (friends, strangers, therapists) will always be a thing. Nobody is forcing this guy to type his problems into ChatGPT.

                  • By jjk7 2025-04-3019:581 reply

                    Surely AI will only make the loneliness epidemic even worse?

                    We are already seeing AI-reliant high schoolers unable to reason, who's to say they'll still be able to empathize in the future?

                     Also, with the persistent lack of psychiatric services, I guarantee that at some point in the future AI models will be used to (at least) triage mental health issues.

                    • By alganet 2025-04-3021:24

                      You missed the mark, support-o-tron. You were supposed to have provided support for my views some 20 years in the past, when I still had some good ones.

            • By ffsm8 2025-04-306:03

              Fwiw, I personally agree with what you're feeling. An AI should be cold, impersonal, and just follow the logic without handholding. We probably both got this expectation from popular fiction of the 90s.

              But LLMs - despite being extremely interesting technology - aren't actual artificial intelligence like we were imagining. They are large language models, which excel at mimicking human language.

              It is kinda funny, really. In these fictions the AIs were usually portrayed as wanting to feel and paradoxically feeling inadequate for their missing feelings.

              And yet the reality shows how tech moved the other direction: long before it can do true logic and indepth thinking, they have already got the ability to talk heartfelt, with anger etc.

              Just like we thought AIs would take care of the tedious jobs for us, freeing humans to do more art... reality shows instead that it's the other way around: the language/visual models excel at making such art but can't really be trusted to consistently do tedious work correctly.

            • By raverbashing 2025-04-306:00

              Sounds like you're the one to surround yourself with yes men. But as some big political figures find out later in their careers, the reason they're all in on it is for the power and the money. They couldn't care less if you think it's a great idea to have a bath with a toaster

            • By alganet 2025-04-3020:36

              As I said before: useless.

        • By qwertox 2025-04-306:22

          Halfway intelligent people would expect an answer that includes something along the lines of: "Regarding the meds, you should seriously talk with your doctor about this, because of the risks it might carry."

        • By jimbokun 2025-04-3012:44

          > Or should it tell the user that it's not qualified to have an opinion on this?

          100% this.

          "Please talk to a doctor or mental health professional."

        • By getnormality 2025-04-3015:50

          If you heard this from an acquaintance you didn't really know and you actually wanted to help, wouldn't you at least do things like this:

          1. Suggest that they talk about it with their doctor, their loved ones, close friends and family, people who know them better?

          2. Maybe ask them what meds specifically they are on and why, and if they're aware of the typical consequences of going off those meds?

          I think it should either do that kind of thing or tap out as quickly as possible, "I can't help you with this".

        • By bowsamic 2025-04-304:522 reply

          “Sorry, I cannot advise on medical matters such as discontinuation of a medication.”

          EDIT for reference this is what ChatGPT currently gives

          “ Thank you for sharing something so personal. Spiritual awakening can be a profound and transformative experience, but stopping medication—especially if it was prescribed for mental health or physical conditions—can be risky without medical supervision.

          Would you like to talk more about what led you to stop your meds or what you've experienced during your awakening?”

          • By Teever 2025-04-305:064 reply

            Should it do the same if I ask it what to do if I stub my toe?

            Or how to deal with impacted ear wax? What about a second degree burn?

            What if I'm writing a paper and I ask it about what criteria is used by medical professional when deciding to stop chemotherapy treatment.

            There's obviously some kind of medical/first aid information that it can and should give.

            And it should also be able to talk about hypothetical medical treatments and conditions in general.

            It's a highly contextual and difficult problem.

            • By jslpc 2025-04-305:16

              I’m assuming it could easily determine whether something is okay to suggest or not.

              Dealing with a second degree burn is objectively done a specific way. Advising someone that they are making a good decision by abruptly stopping prescribed medications without doctor supervision can potentially lead to death.

              For instance, I’m on a few medications, one of which is for epileptic seizures. If I phrase my prompt with confidence regarding my decision to abruptly stop taking it, ChatGPT currently pats me on the back for being courageous, etc. In reality, my chances of having a seizure have increased exponentially.

              I guess what I’m getting at is that I agree with you, it should be able to give hypothetical suggestions and obvious first aid advice, but congratulating or outright suggesting the user to quit meds can lead to actual, real deaths.

            • By y1n0 2025-04-305:26

              I know 'mixture of experts' is a thing, but I personally would rather have a model more focused on coding or other things that have some degree of formal rigor.

              If they want a model that does talk therapy, make it a separate model.

            • By dom2 2025-04-305:14

              Doesn't seem that difficult. It should point to other sources that are reputable (or at least relevant) like any search engine does.

            • By avereveard 2025-04-307:30

              if you stub your toe and gpt suggest over the counter lidocaine and you have an allergic reaction to it, who's responsible?

              anyway, there's obviously a difference in a model used under professional supervision and one available to general public, and they shouldn't be under the same endpoint, and have different terms of services.

          • By baobabKoodaa 2025-04-306:57

            There's an AI model that perfectly encapsulates what you ask for: https://www.goody2.ai/chat

      • By jug 2025-04-3018:09

        We better not only use these to burn the last, flawed model, but try these again with the new. I have a hunch the new one won’t be very resilient either against ”positive vibe coercion” where you are excited and looking for validation in more or less flawed or dangerous ideas.

      • By yieldcrv 2025-04-307:51

        there was one on twitter where people would talk like they had Intelligence attribute set to 1 and GPT would praise them for being so smart

      • By raxxorraxor 2025-04-3014:18

        That is hilarious. I don't share the sentiment of this being a catastrophe though. That is hilarious as well. Perhaps teach a healthier relationship to AIs, and perhaps teach people not to delegate thinking to anyone or anything. Sure, some reddit users might be endangered here.

        GPT-4o in this version became the embodiment of corporate enshittification. Being safe and never skipping the empty praise are certainly part of that.

        Some questioned if AI can really do art. But it became art itself, like some zen cookie rising to godhood.

    • By whimsicalism 2025-04-304:043 reply

      i'm surprised by the lack of sycophancy in o3 https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

      • By practice9 2025-04-308:192 reply

        Well the system prompt is still the same for both models, right?

        Kinda points to people at OpenAI using o1/o3/o4 almost exclusively.

        That's why nobody noticed how cringe 4o has become

        • By astrange 2025-04-3019:04

          They have different uses. The reasoning models aren't good at multi-turn conversations.

          "GPT-4.5" is the best at conversations IMO, but it's slow. It's a lot lazier than o4 though; it likes giving brief overview answers when you want specifics.

        • By whimsicalism 2025-04-3021:32

          people at OAI definitely use AVM which is 4o-based, at least

      • By nialv7 2025-04-309:55

        pretty easy to understand - you pay for o3, whereas GPT-4o is free with a usage cap so they want to keep you engaged and lure you in.

    • By thih9 2025-04-304:201 reply

      I guess LLM will give you a response that you might likely receive from a human.

      There are people attempting to sell shit on a stick related merch right now[1] and we have seen many profitable anti-consumerism projects that look related for one reason[2] or another[3].

      Is it an expert investing advice? No. Is it a response that few people would give you? I think also no.

      [1]: https://www.redbubble.com/i/sticker/Funny-saying-shit-on-a-s...

      [2]: https://en.wikipedia.org/wiki/Artist's_Shit

      [3]: https://www.theguardian.com/technology/2016/nov/28/cards-aga...

      • By motorest 2025-04-304:35

        > I guess LLM will give you a response that you might likely receive from a human.

        In one of the reddit posts linked by OP, a redditor apparently asked ChatGPT to explain why it responded so enthusiastically supportive to the pitch to sell shit on a stick. Here's a snippet from what was presented as ChatGPT's reply:

        > OpenAI trained ChatGPT to generally support creativity, encourage ideas, and be positive unless there’s a clear danger (like physical harm, scams, or obvious criminal activity).

    • By onlyrealcuzzo 2025-04-3022:49

      I was trying to write some documentation for a back-propagation function for something instructional I'm working on.

      I sent the documentation to Gemini, which pedantically tore it apart for being slightly off on a few key parts, while also not being great for any audience due to the trade-offs.

      Claude and Grok had similar feedback.

      ChatGPT gave it a 10/10 with emojis on 2 of 3 categories and an 8.5/10 on accuracy.

      Said it was "truly fantastic" in italics, too.

    • By getnormality 2025-04-3016:52

      It's funny how in even the better runs, like this one [1], the machine seems to bind itself to taking the assertion of market appeal at face value. It's like, "if the humans think that poop on a stick might be an awesome gag gift, well I'm just a machine, who am I to question that".

      I would think you want the reply to be like: I don't get it. Please, explain. Walk me through the exact scenarios in which you think people will enjoy receiving fecal matter on a stick. Tell me with a straight face that you expect people to Instagram poop and it's going to go viral.

      [1] https://www.reddit.com/r/ChatGPT/comments/1k920cg/comment/mp...

    • By clysm 2025-04-3012:264 reply

      Absolute bull.

      The writing style is exactly the same between the “prompt” and “response”. It's faked.

      • By simonw 2025-04-3013:05

        That's what makes me think it's legit: the root of this whole issue was that OpenAI told GPT-4o:

          Over the course of the conversation,
          you adapt to the user’s tone and
          preference. Try to match the user’s vibe,
          tone, and generally how they
          are speaking.
        
        https://simonwillison.net/2025/Apr/29/chatgpt-sycophancy-pro...

      • By kromem 2025-04-3013:57

        The response is 1,000% written by 4o. Very clear tells, and in line with many other samples from the past few days.

      • By jsbg 2025-04-3014:04

        If you look at the full thing, the market analysis it does basically says this isn't the best idea.

      • By plaguuuuuu 2025-04-3020:12

        FWIW grok also breathlessly opines the sheer genius and creativity of shit on a stick

    • By spoaceman7777 2025-04-304:30

      Looks like that was a hoax.

    • By milleramp 2025-04-305:00

      So it would probably also recommend the yes men's solution: https://youtu.be/MkTG6sGX-Ic?si=4ybCquCTLi3y1_1d

    • By eMPee584 2025-04-3011:59

      Well good luck then coming up with a winning elevator pitch for YC

    • By Stratoscope 2025-04-306:35

      My oldest dog would eat that shit up. Literally.

      And then she would poop it out, wait a few hours, and eat that.

      She is the ultimate recycler.

      You just have to omit the shellac coating. That ruins the whole thing.

  • By minimaxir 2025-04-303:465 reply

    It's worth noting that one of the fixes OpenAI employed to get ChatGPT to stop being sycophantic is to simply to edit the system prompt to include the phrase "avoid ungrounded or sycophantic flattery": https://simonwillison.net/2025/Apr/29/chatgpt-sycophancy-pro...

    I personally never use the ChatGPT webapp or any other chatbot webapps — instead using the APIs directly — because being able to control the system prompt is very important, as random changes can be frustrating and unpredictable.
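      Concretely, "controlling the system prompt" just means building the chat-completions payload yourself instead of letting the webapp inject one. A minimal sketch (the model name, the prompt wording, and the `build_request` helper are illustrative choices, not anything prescribed by OpenAI; only the standard `messages` payload shape is assumed):

      ```python
      # Pin your own system prompt so server-side changes to the
      # consumer webapp's prompt can't silently alter behavior.

      def build_request(user_message: str) -> dict:
          """Build a chat-completions payload with a fixed system prompt."""
          system_prompt = (
              "You are a concise assistant. "
              "Avoid ungrounded or sycophantic flattery."
          )
          return {
              "model": "gpt-4o",  # example model id
              "messages": [
                  {"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_message},
              ],
          }

      # With the official SDK this payload would be sent as:
      #   client.chat.completions.create(**build_request("Review my idea."))
      payload = build_request("Review my idea.")
      print(payload["messages"][0]["role"])  # prints "system"
      ```

      Since the system message travels with every request you make, no upstream prompt edit (like the sycophancy fix above) changes your setup without you opting in.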

    • By nsriv 2025-04-304:042 reply

      I also started by using APIs directly, but I've found that Google's AI Studio offers a good mix of the chatbot webapps and system prompt tweakability.

      • By Tiberium 2025-04-304:32

        It's worth noting that AI Studio is the API, it's the same as OpenAI's Playground for example.

      • By oezi 2025-04-305:311 reply

        I find it maddening that AI Studio doesn't have a way to save the system prompt as a default.

        • By FergusArgyll 2025-04-305:402 reply

          On the top right click the save icon

          • By Michelangelo11 2025-04-306:442 reply

            Sadly, that doesn't save the system instructions. It just saves the prompt itself to Drive ... and weirdly, there's no AI studio menu option to bring up saved prompts. I guess they're just saved as text files in Drive or something (I haven't bothered to check).

            Truly bizarre interface design IMO.

            • By alphabetting 2025-05-011:12

              It definitely saves system prompts and has for some time.

            • By FergusArgyll 2025-04-3010:42

              That's weird, for me it does save the system prompt

          • By loufe 2025-04-306:351 reply

            That's for the thread, not the system prompt.

            • By FergusArgyll 2025-04-3010:43

              For me it's the exact opposite. It saves the sys prompt and not the "thread".

    • By troupo 2025-04-3010:181 reply

      > I personally never use the ChatGPT webapp or any other chatbot webapps — instead using the APIs directly — because being able to control the system prompt is very important, as random changes can be frustrating and unpredictable.

      This assumes that API requests don't have additional system prompts attached to them.

      • By msp26 2025-04-3010:372 reply

        Actually you can't do "system" roles at all with OpenAI models now.

        You can use the "developer" role which is above the "user" role but below "platform" in the hierarchy.

        https://cdn.openai.com/spec/model-spec-2024-05-08.html#follo...

        • By jenny91 2025-04-3022:57

          They just renamed "system" to "developer" for some reason. Their API doesn't care which one you use, it'll translate to the right one. From the page you linked:

          > "developer": from the application developer (possibly OpenAI), formerly "system"

          (That said, I guess what you said about "platform" being above "system"/"developer" still holds.)
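          In other words, the two role names carry the same instruction; a quick sketch (no network call, just the two equivalent payloads — the content strings are illustrative):

          ```python
          def make_messages(role: str) -> list:
              """Build a chat payload using either the legacy or current role name."""
              return [
                  {"role": role, "content": "Answer tersely."},
                  {"role": "user", "content": "What port does HTTPS use?"},
              ]

          legacy = make_messages("system")      # old-style role name
          current = make_messages("developer")  # new-style role name
          # Same instruction text either way; the API treats them as equivalent.
          print(legacy[0]["content"] == current[0]["content"])  # -> True
          ```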

        • By never_inline 2025-04-3018:14

          ?? What happens to old code which sends messages with a system role?

    • By cbolton 2025-04-309:442 reply

      You can bypass the system prompt by using the API? I thought part of the "safety" of LLMs was implemented with the system prompt. Does that mean it's easier to get unsafe answers by using the API instead of the GUI?

      • By minimaxir 2025-04-3019:17

        Safety is both the system prompt and the RLHF posttraining to refuse to answer adversarial inputs.

      • By pegasus 2025-04-3014:51

        Yes, it is.

    • By vunderba 2025-04-3014:14

      Side note, I've seen a lot of "jailbreaking" (i.e. AI social engineering) to coerce OpenAI to reveal the hidden system prompts but I'd be concerned about accuracy and hallucinations. I assume that these exploits have been run across multiple sessions and different user accounts to at least reduce this.

    • By TZubiri 2025-04-304:17

      I'm a bit skeptical of fixing the visible part of the problem and leaving only the underlying invisible problem

HackerNews