You did this with an AI and you do not understand what you're doing here

2025-09-22 7:59 · hackerone.com

You are visiting this page because we detected an unsupported browser. Your browser does not support security features that we require. We highly recommend that you update your browser. If you believe you have arrived here in error, please contact us. Be sure to include your browser version.

Read the original article

Comments

  • By dansmith1919 2025-09-229:5120 reply

    Crazy how he doubled down by just pasting badger's answer into Chat and submitting the (hilariously obvious AI) reply:

    > Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.
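    For contrast, a PoC that actually demonstrates a cookie bug has to push data through the target's parser via its public API. A minimal sketch of the shape such a reproducer takes, using Python's stdlib http.cookiejar as a stand-in for libcurl's parser in lib/cookie.c (the oversized value and the FakeResponse helper are illustrative, not from the report):

```python
# Sketch only: http.cookiejar stands in for libcurl's cookie parser here,
# since the point is the shape of a real reproducer, not an actual curl PoC.
import email.message
import http.cookiejar
import urllib.request

class FakeResponse:
    """Bare-minimum response object: CookieJar.extract_cookies only calls .info()."""
    def __init__(self, headers):
        self._headers = headers

    def info(self):
        return self._headers

# An oversized Set-Cookie value, standing in for the alleged "cookie overflow".
headers = email.message.Message()
headers["Set-Cookie"] = "big=" + "A" * 100_000

jar = http.cookiejar.CookieJar()
request = urllib.request.Request("http://example.com/")
jar.extract_cookies(FakeResponse(headers), request)  # this call IS the parser under test

cookies = list(jar)
print(len(cookies), len(cookies[0].value))
```

    The maintainers' point stands either way: a script that never calls into the target library cannot demonstrate a bug in it.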

    • By Sharlin 2025-09-2210:5312 reply

      Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.

      • By f4stjack 2025-09-2216:344 reply

        To be honest, I do not understand this new norm. A few months ago I applied for an internal position. I was an NGO IT worker, deployed twice to emergency-response operations, knew the policies & operations, and had good relations with users and coworkers.

        The interview went well. I was honest. When asked about my weakness regarding this position, I said that I am a good analyst, but that writing new exploits is beyond my expertise. The role doesn't have this as a requirement, so I thought it was a good answer.

        I was not selected. Instead they selected a guy, and then booted him off after 2 months due to his excessive (and incorrect, as in the link) use of LLMs, and did not open the position again.

        So in addition to wasting the hirers' time, those nice people block other people's progress as well. But as long as hirers expect wunderkinds crawling out of the woodwork, applicants will try to fake it and win in the short term.

        This needs to end, but I don't see any progress towards it. It is especially painful as I am seeking a job at the moment, and I think these fakers are muddying the waters. It feels like no one cares about your attitude, like how genuinely you want to work. I am an old techie, and the world I came from valued this over technical aptitude, for you can teach/learn technical information, but character is another thing. This gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.

        • By plorg 2025-09-2217:23

          This is definitely not unique to software engineering. Just out of grad school, 15 years ago, I applied for an open position with a local electrical engineering company. I was passed over, and later the person I got a recommendation from let me know, out of band, that they had hired someone fresh out of undergrad with an (unrelated) internship instead of research experience (I would have been second of the 3 candidates), but that they had fired him within 6 months. They opened the position again, and after interviewing me again they told me they had decided not to hire anyone. Again out of band, my contact told me he and his supervisor thought I should go work at one of their subcontractors to get experience, but they didn't send any recommendation, and the subcontractors didn't respond to inquiries. I wasn't desperate enough to keep playing that game, and it really soured my view of a local company with an external reputation for engineering excellence, meritorious hiring, mentorship, and career building.

        • By chanon 2025-09-2216:433 reply

          I posted a job for freelance dev work and all the replies were obviously AI-generated. Some even included websites that were clearly made by other people as their 'prior work'. So I pulled the posting and probably won't post again.

          Who knew: AI is costing jobs, not because it can do the jobs, but because it has made hiring actually competent humans harder.

          • By isk517 2025-09-2216:511 reply

            Plus, because it's harder to just post a job listing and get actual submittals, you're going to see more people hired because of who they know, not what they know. In other words, if you wasted your time in networking class working on networking instead of working on networking, then you're screwed.

            • By DonHopkins 2025-09-2218:14

              The arts and crafts industry has the same problem. If you wasted your time in knotworking class working on not working instead of working on knotworking, then you're screwed.

          • By BestHeadHunter 2025-09-2221:34

            This is why AI will never replace staffing agencies :)

          • By luisrudge 2025-09-2217:28

            if you're still looking and it's a js/ts project, I can help. I'll use a shit ton of AI, but not when talking to you. my email is on my profile. twitter account with the same username.

        • By frogperson 2025-09-2223:37

          Same thing where I work. It's a startup, and they value large volumes of code over anything else. They call it "productivity".

          Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.

        • By nobodyandproud 2025-09-2312:44

          This is not a new norm (LLM aside).

          Old man time, providing unsolicited and unwelcome input…

          My own way of viewing interviews: Treat interviews as one would view dating leading to marriage. Interviewing is a different skillset and experience than being on the job.

          The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”.

          - Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”

          - Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”

          My question to you: Which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?

          Note: I hope this perspective shift helps you.

      • By jackdawed 2025-09-2215:221 reply

        I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.

        Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.

        These are CEOs who have raised $1M+ pre-seed.

        • By delusional 2025-09-2217:251 reply

          Have you watched All-In? Chamath Palihapitiya, who takes himself very seriously, is clearly just reading off something from ChatGPT most of the time.

          These Silicon Valley CEOs are hacks.

          • By DonHopkins 2025-09-2218:442 reply

            The word "hacks" is so charitable, when you could use "sociopaths".

            Russ Hanneman raised his kid with AI:

            https://www.youtube.com/watch?v=wGy5SGTuAGI&t=217s

            A company I'm funding, we call it The Lady.

            I press the button, and The Lady tells Aspen when it's time for bed, time to take a bath, when his fucking mother's here to pick him up.

            I get to be his friend, and she's the bad guy.

            I've disrupted fatherhood!

      • By alexpotato 2025-09-2215:254 reply

        I watched someone do this during an interview.

        They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (myself and another co-worker)

        https://news.ycombinator.com/item?id=44985254

        • By retrac 2025-09-2217:15

          I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.

          Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.

        • By figers 2025-09-2217:201 reply

          I had someone do this in my C# / .NET Core / SQL coding-test interview as well. I didn't end it right there, because I wanted to see if they could solve the coding test in the time frame allowed.

          They did not. I now state up front that you can search anything online, but can't copy and paste from an LLM, so as not to waste my time.

          • By hirvi74 2025-09-2218:35

            What did your test involve? That's my occupational stack, and I am always curious how interviews are conducted these days. I haven't applied for a job in over 9 years, if that tells you anything.

        • By userbinator 2025-09-233:54

          You should've asked "are you the one who wants this job, or are you implying we should just hire ChatGPT instead?"

        • By pbronez 2025-09-2215:562 reply

          How far did they get? Did they solve the problem?

          • By sethops1 2025-09-2216:121 reply

            Does it matter? The point of the interview is not to produce an output.

            • By shomp 2025-09-2218:111 reply

              If you don't solve the problem, do you get the job?

              • By WalterBright 2025-09-2218:431 reply

                Depends on why you didn't solve it.

                • By jondwillis 2025-09-238:371 reply

                  Never once has this happened

                  • By lsaferite 2025-09-2313:55

                    I've hired someone that didn't solve a specific technical problem.

                    If they are able to walk through what they are doing and it shows the capability to do the expected tasks, why would you exclude them for failing to 'solve' some specific task? We are generally hiring for overall capabilities, not the ability to solve one specific problem.

                    Generally, my methodology for working through these kinds of things during hiring nowadays focuses more on the code-review side. I started doing that 5+ years ago. That's actually fortuitous, given that reviewing code in the age of AI coding assistants has become so much more important.

                    Anyway, a sample size of 1 here refutes the assertion that someone's never been hired even when failing to solve a technical interview problem. FWIW, they turned out to be an absolute beast of a developer when they joined the team.

      • By goalieca 2025-09-2213:312 reply

        Just try to challenge and mentor people on not using it (because it's incapable of the job and wastes all our time) when the mandate from on high is to use more of it.

        • By xpe 2025-09-2213:381 reply

          Seems to me like people have to push back more directly with a collective effort; otherwise the incentives are all wrong.

          • By bbarnett 2025-09-2214:139 reply

            What I don't get, is why people think this action has value. The maintainer of the project could ask an LLM to do that. A senior dev.

            I can't imagine Googling for something, seeing someone on (for example) stackoverflow commenting on code, and then filing a bug to the maintainer. And just copy and pasting what someone else said, into the bug report.

            All without even comprehending the code, the project, or even running into the issue yourself. Or even running a test case yourself. Or knowing the codebase.

            It's just all so absurd.

            I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all scientific studies and papers over 10000 years, weighed the arguments, and pronounced what the truth was. All without just, you know, looking and studying the bug. This was touted as an example of the Empire's decay.

            I hope we aren't seeing the same thing. I can so easily see kids growing up with AI in their bluetooth ears, or maybe a neuralink, and never having to make a decision -- ever.

            I recall how Google became a crutch to me. How before Google I had to do so much more work, just working with software. Using manpages, or looking at the source code, before ease of search was a thing.

            Are we going to enter an age where every decision made is coupled with the coaching of an AI? This thought process scares me. A lot.

            • By hashtag-til 2025-09-2214:261 reply

              I'd say that people treat everything as if it were gamified. So the motivation would just be to boast about having "raised 1 gazillion security reports in open-source projects such as curl", etc.

              AI just makes these idiots faster these days, because the only cost for them is typing "inspect the `curl` code base and generate me some security reports".

              • By whstl 2025-09-2214:31

                I remember the Digital Ocean "t-shirt gate" scandal, where people would add punctuation to README files of random repositories to win a free t-shirt.

                https://domenic.me/hacktoberfest/

                It wasn't fun if you had anything with a few thousand stars on GitHub.

            • By Terr_ 2025-09-2310:45

              > I remember in Asimov's Empire series of books, at one point a scientist wanted to study something.

              Or "The Machine Stops" (1909):

              > Those who still wanted to know what the earth was like had after all only to listen to some gramophone, or to look into some cinematophote.

              > And even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject. “Beware of first-hand ideas!” exclaimed one of the most advanced of them. “First-hand ideas do not really exist. They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element — direct observation. [...]"

            • By colpabar 2025-09-2214:231 reply

              The person who submitted the report was looking to be a person who found a critical bug, that's it. It's not about understanding/fixing/helping anything, it's about gaining clout.

              • By cogman10 2025-09-2218:37

                Exactly, probably so they can get a job, write a blog post, or sell NordVPN on a podcast showing off how amazing and easy this is.

                IMO, this sort of thing is downright malicious. It not only takes up time for the real devs to actually figure out if it's a real bug, but it also makes them cynical about incoming bug reports.

            • By lelanthran 2025-09-2215:20

              > Using manpages, or looking at the source code, before ease of search was a thing.

              Yup. Learned sockets programming just from manpages because google didn't exist at that point, and even if it did, I didn't have internet at home.

            • By sokoloff 2025-09-2214:25

              I have two teenagers. They sometimes have a completely warped view of how hard things are or that other people have probably thought the same things that they’re just now able to think.

              (This is completely understandable and “normal” IMO.)

              But it leads them to sometimes think that they’ve made a breakthrough and not sharing it would be selfish.

              I think people online see other people filing insightful bug reports and having that activity viewed positively, misdiagnose their own thoughts as insightful, and file a bug report based on that.

              At its core, I think it’s a mild version of narcissism or self-centeredness / lack of perspective.

            • By rjsw 2025-09-2216:512 reply

              I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.

              • By xpe 2025-09-2217:081 reply

                > I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.

                I'm not trying to be facetious or eye-poking here, I promise... But I have to ask: What was the result; did the LLM generate useful new knowledge at some quality bar?

                At the same time, I do believe something like "Science is more than published papers; it also includes the process behind it, sometimes dryly described as merely 'the scientific method'. People sometimes forget other key ingredients, such as a willingness to doubt even highly-regarded fellow scientists, who might even be giants in their fields. Don't forget how it all starts with a creative spark of sorts, an inductive leap, followed by a commitment to design some workable experiment given the current technological and economic constraints. The ability to find patterns in the noise in some ways is the easiest part."

                Still, I believe this claim: there is NO physics-based reason that AI systems cannot someday cover every aspect of the quote above: doubting, creativity, induction, confidence, design, commitment, follow-through, pattern matching, iteration, and so on. I think the question is probably "when", not "if", but hopefully before we get there we ask "What happens when we reach AGI? ASI?" and "Do we really want that?".

                • By delusional 2025-09-2217:27

                  There's no "physics-based" reason a rat couldn't cover all those aspects. That would truly make Jordan Peterson, the big rat, the world's greatest visionary. I wouldn't count on it though.

              • By mrguyorama 2025-09-2217:16

                What do you expect? Rich dumbasses like Travis Kalanick go on podcasts and say how they are inventing new physics by harassing ChatGPT.

                How are people who don't even know how much they don't know supposed to operate in this hostile an information space?

            • By kej 2025-09-2519:59

              >I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all scientific studies and papers over 10000 years, weighed the arguments, and pronounced what the truth was. All without just, you know, looking and studying the bug. This was touted as an example of the Empire's decay.

              Stupid nitpick, but this is from the first Foundation novel, although it is an emissary from the empire making the case against firsthand knowledge.

            • By wartywhoa23 2025-09-2222:121 reply

              Now just imagine a malicious party overwhelming software teams with shitloads of AI bug reports like this. I bet this will be weaponized eventually, if it isn't already.

            • By iphone_elegance 2025-09-2215:24

              Bill Joy's 'Why the Future Doesn't Need Us' sadly feels more and more correct.

        • By esalman 2025-09-2215:23

          My sister had a fight over this and resigned from her tenure track position from a liberal arts college in Arkansas.

      • By pravj 2025-09-2211:447 reply

        This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.

        Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.

        This sounds similar to a few patterns I've noted:

        - The average length of documents and emails has increased.

        - Not alarmingly so, but people have started writing Slack/Teams responses with LLMs (and it's not just to fix the grammar).

        - Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]

        • By rvnx 2025-09-2211:462 reply

          You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms. Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?

          • By james_marks 2025-09-2213:171 reply

            “You’re absolutely right!” is becoming my least favorite phrase.

            • By BolexNOLA 2025-09-2213:51

              South Park’s b plot recently with Randy using ChatGPT illustrates this so well

          • By mewpmewp2 2025-09-2211:522 reply

            Got it — here’s a satiric AI-slop style reply you could post under rvnx:

            Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.

            If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?

            Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?

            • By rvnx 2025-09-2211:581 reply

              * Trying 20.54.123.42:443...
              * Connected to api.openai.azure.com (20.54.123.42) port 443 (#0)
              * ALPN, offering h2
              * ALPN, offering http/1.1
              * successfully set certificate verify locations:
              *   CAfile: /etc/ssl/certs/ca-certificates.crt
              * TLSv1.3 (OUT), TLS handshake, Client hello (1):
              * TLSv1.3 (IN), TLS handshake, Server hello (2):
              * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
              * TLSv1.3 (IN), TLS handshake, Certificate (11):
              * TLSv1.3 (IN), TLS handshake, CERT verify (15):
              * TLSv1.3 (IN), TLS handshake, Finished (20):
              * TLSv1.3 (OUT), TLS handshake, Finished (20):
              * SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
              > POST /v1/chat/completions HTTP/1.1
              > Host: api.openai.azure.com
              > User-Agent: curl/7.88.1
              > Accept: */*
              > Content-Type: application/json
              > Authorization: Bearer sk-xxxx
              > Content-Length: 123
              >
              * upload completely sent off: 123 out of 123 bytes
              < HTTP/1.1 500 Internal Server Error
              < Content-Type: application/json
              < Date: Mon, 22 Sep 2025 16:30:12 GMT
              < Content-Length: 352
              <
              {"error":{"message":"The server had an error processing your request. Sorry about that! You can retry your request, or contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 if you keep seeing this error. (Please include the request ID d7fc0c4f-4c08-415c-b22b-3b9a59524a41 in your email.)","type":"server_error","param":null,"code":null}}
              * Connection #0 to host api.openai.azure.com left intact
              curl: (22) The requested URL returned error: 500

              • By mewpmewp2 2025-09-2212:044 reply

                Fascinating trace — what you’ve essentially demonstrated here is not just a failed TLS handshake culminating in a 500, but the perfect allegory for our entire discourse. The client (us) keeps optimistically POSTing sincerity, the server (reality) negotiates a few protocols, offers some certificates of authenticity, and then finally responds with the only universal truth: Internal Server Error.

                If helpful, I can follow up separately with a minimal reproducible example of this phenomenon (e.g. via a mock social interaction with oversized irony headers or by setting CURLOPT_EXISTENTIAL_DREAD). Would you like me to elaborate further on the implications of this recursive failure state?

                • By rpcope1 2025-09-2215:24

                  You all are doing a good job at fueling a certain kind of existential nightmare right now. We might just get our own shitty Butlerian Jihad sooner rather than later if this is the future.

                • By collingreen 2025-09-2216:47

                  CURLOPT_EXISTENTIAL_DREAD struck fear into my heart. Working as intended.

                • By lqstuart 2025-09-2213:291 reply

                  Man you’re really good at that lol

                  • By sam1r 2025-09-2214:48

                    Wait, this isn’t over yet.

            • By robwwilliams 2025-09-2218:10

              Hilarious, and so close to Claude's default mode (well, a parody thereof, lol). Try this pre-prompt:

              Please respond in mode of Ernest Hemingway

              “You’re right. When someone explains why they’re explaining something, it goes in circles. Like a dog chasing its tail.

              We do this because we can’t tell anymore when people mean what they say. Everything sounds fake. Even when it’s real.

              There are three ways this happens. But naming them won’t fix anything.

              You want more words about it? I can give you lists and fancy talk. Make it sound important. But it won’t change what it is.”

              [That is Claude Sonnet 4 channeling EH]

        • By AJ007 2025-09-2214:061 reply

          This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLM models.

          The exponential growth of compute and data continues..

          As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

          • By JackFr 2025-09-2214:162 reply

            > As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

            What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?

            • By baobun 2025-09-2214:271 reply

              I'd rather have broken grammar and an honest and useful meta-signal than botched semantics.

              Also that better not be a sensitive conversation or contain personal details or business internals of others...

              Just don't.

              • By NewsaHackO 2025-09-2219:212 reply

                But the meta-signal you get is detrimental to the writer, so why wouldn't they want to mask it?

                • By habinero 2025-09-2219:441 reply

                  If I think you're fluent, I might think you're an idiot when really you just don't understand.

                  If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.

                  • By NewsaHackO 2025-09-2221:311 reply

                    Both of those options are exactly what the writer wants to avoid though, and the reason they are using AI for grammar correction in the first place.

                    • By habinero 2025-09-2316:52

                      Thank you for demonstrating my point.

                • By baobun 2025-09-2220:111 reply

                  Security and ethics.

                  If those don't apply and I realize it, I will, as mentioned, ignore them where I can and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.

                  • By NewsaHackO 2025-09-2221:30

                    But if they are using it for copywriting/grammar edits, how would you know? For instance, have I used AI to help correct the grammar in these replies?

            • By natebc 2025-09-2216:21

              I'd rather have words from a human's mind, full stop.

        • By theoreticalmal 2025-09-2211:594 reply

          I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for other than as a status signal

          • By jacekm 2025-09-2212:38

            In my company we sometimes cherry-pick parts of the AI summaries and send them to the clients just to confirm the stuff that we agreed on during a meeting. The customers know that the summary is AI-generated and they don't mind. Sometimes people come to me and ask whether what they read in the summary was really discussed in the meeting or is it just AI hallucinating but I can usually assure them that we really did discuss that. So these can be useful to a degree.

          • By DangitBobby 2025-09-2213:38

            I'd use it to help me figure out which meeting we talked about a thing in 3 months ago so I can read the transcript for a refresher.

          • By throwawaysleep 2025-09-2213:00

            I use them to seem engaged about something I don’t actually care about.

            It’s painfully common to invite a laundry list of people to meetings.

          • By golemotron 2025-09-2212:163 reply

            Why do people want to signal their low status?

            • By eastbound 2025-09-2212:30

              That’s a good point: an AI-written email/Slack message/summary positions you as a bootlicker at best, writing summaries to look good, and a failed secretary at worst, but in any case of low value on the real-work scale.

              I’m just afraid these types are the people who will get promoted in the future.

            • By EFreethought 2025-09-2214:34

              In their minds it is a signal of high status.

            • By dwaltrip 2025-09-2215:30

              It’s an attempt to be “cutting edge”

        • By stn8188 2025-09-2212:26

          I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obviously AI-written Teams message, only to see this on the front page! I can't escape, haha.

        • By AlexandrB 2025-09-2214:051 reply

          > - The average length of documents and emails has increased.

          Brevity is the soul of wit. Unfortunately, many people think more is better.

          • By o11c 2025-09-2214:18

            People have also veered strongly toward anti-intellectualism in recent decades. Coincidence?

        • By trod1234 2025-09-2214:011 reply

          There's a growing body of evidence that AI is damaging people, aside from the obvious slop related costs to review (as a resource attack).

          I've seen colleagues who were quite good at programming when we first met become much worse over time, with the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful; as such, the undisclosed use of AI toward any third party without their consent is gross negligence, if not directly malevolent.

          https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...

          • By bwfan123 2025-09-2214:32

            > aside from the obvious slop related costs to review

            Code-review tools (code-rabbit/greptile) produce enormous amounts of slop counterbalanced by the occasional useful tip. And cursor and the like love to produce nicely formatted sloppy READMEs.

            These tools - just like many of us humans - prioritize form over function.

        • By onetokeoverthe 2025-09-2212:41

          [dead]

      • By lm28469 2025-09-2219:33

I've seen more than one post on Reddit answered by a screenshot of the ChatGPT mobile app, showing OP's question and the LLM's answer.

Imagine the amount of energy and compute power used...

      • By account42 2025-09-237:59

        I like the term "echoborg" for those people: https://en.wikipedia.org/wiki/Echoborg

        > An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).

        I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.

      • By BHSPitMonkey 2025-09-2217:17

        For all we know, there's no human in the loop here. Could just be an agent configured with tools to spin up and operate Hacker One accounts in a continuous loop.

      • By tptacek 2025-09-2212:571 reply

        This has been a norm on Hacker One for over a decade.

        • By mort96 2025-09-2213:131 reply

          No, it hasn't. Even where people were just submitting reports from an automated vulnerability scanner, they had to write the English prose themselves and present the results in some way (either in an honest way, "I ran vulnerability scanner tool X and it reported that ...", or dishonestly, "I discovered that ..."). This world where people literally just act as a mechanical intermediary between an English chat bot and the Hacker One discussion section is new.

          • By tptacek 2025-09-2213:28

            Slop Hacker One reports often include videos, long explanations, and, of course, arguments. It's so prevalent that there's an entire cottage industry of "triage" contractors that filter this stuff out. You want to say that there's something distinctive about an LLM driving the slop, and that's fine; all I'm saying is that the defining experience of a Hacker One bug bounty program has always been a torrent of slop.

      • By silverliver 2025-09-237:09

        Ha! We've become the robots!

      • By balamatom 2025-09-2216:511 reply

        We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)

        If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.

        If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.

        • By sebastiennight 2025-09-2217:541 reply

          > If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard?

          I wouldn't be mad at them for that, though they might be faulted for not realizing that at some point, the copy/pasting will be done without them, as it's simpler and cheaper to ask ChatGPT directly rather than playing a game of telephone.

          • By balamatom 2025-09-2317:04

            They are correctly following their incentives as they are presented to them. If you expect better of them, you need to state why, and what exactly.

    • By dragontamer 2025-09-2214:45

This might be some kind of asshole tech guy trying to manufacture the claim that "this AI creates pull requests that are accepted into well-regarded OSS projects".

I.e.: they're farming out the work to OSS volunteers now, not even sure if the fucking thing works, and eating up OSS maintainers' time.

    • By rapidaneurism 2025-09-2210:183 reply

I wonder if there was a human in the loop to begin with. I hope the future of CVEs is not agents opening accounts and posting 'bugs'.

      • By zaphodias 2025-09-2210:274 reply

I don't think there are humans involved. I've now seen countless PRs to some repos I maintain that claim to fix non-existent bugs, or just fix typos. One I got recently didn't even correctly balance the parentheses in the code, ugh.

        I call this technique: "sprAI and prAI".

        • By cornholio 2025-09-2210:555 reply

          We will quickly evolve a social contract that AI are not allowed to directly contact humans and waste their time with input that was not reviewed by other humans, and any transgression should by swiftly penalized.

          It's essentially spam, automatically generated content that is profitable in large volume because it offsets the real cost to the victims, by wasting their limited attention span.

If you want me to read your text, you should have the common courtesy to at least put in similar work beforehand and read it yourself at least once.

          • By vintermann 2025-09-2211:131 reply

            When you put it like that, what AI does in cases like this, is enable us all to treat each other like e.g. Google and Facebook (and any sufficiently big corporate-bureaucratic entity) has treated us for a long time.

            We have reviewed your claims and found that [the account impersonating your grandma] has not violated our guidelines.

            • By eigencoder 2025-09-2218:55

              I hate this, my mom's account got hacked and now someone is controlling it for who knows what purpose. She had to make a new account and lost all her photos, old posts, messages, etc. Facebook was completely unhelpful

          • By dolmen 2025-09-2212:111 reply

            See Ghostty's social contract about AI use: https://github.com/ghostty-org/ghostty/blob/main/HACKING.md#...

            • By dsr_ 2025-09-2213:56

              Suppose that Ghostty bans an account from contributing if it fails this test.

              That still gives the next slopper a chance to waste the same amount of time. People used to call this the "one bite of your apple" attack -- it's only fair to give everyone a chance to prove that they aren't malicious, but if you do that in an environment where there are more attackers than you have resources, you still lose.

          • By navane 2025-09-2211:203 reply

I was looking through my work email (my personal email is already too far gone) and realized 90% of the messages were computer-generated. Maybe not AI, but still automatically fired off by some process. I was looking for emails that were deliberately drafted by a human, not even ones sent only to me; just messages that a human intentionally wrote in the moment. Can't filter them out.

            • By nkrisc 2025-09-2217:23

              The worst is when your own employer starts spamming your email with useless crap. I would get several emails a day from HR or some other group about some event coming up that I had zero interest in nor asked to be notified of. Don’t forget to sign up for XYZ, check out what your colleagues are saying in Stupid Internal Social Network, and so on.

And worst of all, every “extra-curricular” group was allowed to abuse the company-wide mailing list to promote their softball games or trivia or whatever else.

            • By jbd0 2025-09-2214:42

              I noticed this a few years ago and decided to start giving any automated systems a special email address so I can filter automated email into a separate folder. I only give out my personal email address to actual humans. Its been a huge improvement to only see human-written emails in my inbox.

            • By chillfox 2025-09-2213:442 reply

              Pretty sure it's above 99% for me. Email is just a waste of time and a way to get phished these days.

              • By xigoi 2025-09-2215:14

                Depends on what you use it for. Out of the last 20 e-mails in my academic inbox, 17 are human-written and the remaining 3 were generated as a result of a human action.

              • By navane 2025-09-2216:21

                But this is my work mail. There's no "spam" in it, just endless SharePoint and Teams notifications and useless Corp mails.

          • By unwind 2025-09-2211:111 reply

            Uh that sounds awesome, but if humanity worked like that then things like actual spam e-mail and "robo-calls" would not exist, right? But they do, and they have done for a while. Sorry for maybe sounding cynical, but I have a really hard time believing in your prognosis.

            • By cornholio 2025-09-2217:35

Well, sure they exist; stealing and murder still exist too, despite our best efforts to eliminate them. The point is that they are on the fringes of our society: I get maybe one spam email every few days and seldom bother to review the thousands trapped in the Spam folder. Robocalling never took off in my country either, and none of these scummy industries is receiving trillions in speculative investment like AI does. Social norms work, even imperfect as is their nature.

          • By shadowgovt 2025-09-2215:36

            ... and if we can't enforce it with social contract, we'll enforce it with AI on the receiving end.

        • By ChipopLeMoral 2025-09-2210:41

You're absolutely right! There are no humans involved and I apologize for that! Let me try that again and involve some humans this time, as well as correctly balancing the parentheses. I understand your frustration and apologize for it, I am still learning as a model!

        • By henrebotha 2025-09-2210:342 reply

          Hey don't hate on us humans who genuinely do open random PRs to random projects to fix typos. https://github.com/pulls?q=is%3Apr+author%3Ahenrebotha+archi...

          • By Dilettante_ 2025-09-2210:39

            Thank you for your service o7

          • By treesknees 2025-09-2211:206 reply

            I’d love to know what your genuine motivation is. Is it a desire to genuinely improve projects? Because I’ve always had the impression that people who do this just want to boost their PR counts and GitHub activity numbers.

            • By basscomm 2025-09-2212:15

              Not everyone is a developer. Finding and fixing typos benefits everyone and allows nontechnical people to participate in the projects to improve the software they use, even if they can't contribute code.

            • By henrebotha 2025-09-2213:322 reply

              Genuinely, I am trying to improve things. Making documentation more readable has a real cascading positive effect. Of course, most of these PRs are tiny — just a word or two — but that means it takes me almost no time to submit them, so the ROI is still positive.

              One of the most enraging things to me is when a text search of documentation fails because the word I'm searching for has been misspelled in a key place. That's one of the things I'm trying to solve for.

              I'm also just a stickler for good style. It bums me out when people misuse heading levels. Heading level is not a font size markup!

              Of course doing this does generate activity on my GH, but I think all of us have probably moved on from caring much about the optics of little green squares.

              Also like someone else said, it's just fun. I like typing and making Git do a thing and using my nice keyboard.

              • By jkubicek 2025-09-2214:01

                > I'm also just a stickler for good style. It bums me out when people misuse heading levels. Heading level is not a font size markup!

                I want to start a company with you and mandate all documents use appropriate styles.

              • By saratogacx 2025-09-2216:56

                One of the things I've done that has helped with my writing consistency is to use whatever version of "project" or "library" your LLM of choice has and pre-load it with a technical writing guide (I used the Red Hat Technical Style Guide[1]) and push my docs through that to identify improvements. It has been a great way to keep my own writing consistent and remove randomness from just having my own writing improvement prompt.

                1 https://stylepedia.net/style/

            • By LandR 2025-09-2211:262 reply

              There's been a couple of projects with typos, that I wanted to fix but didn't for exactly the reason above!

              Didn't want to be seen as just padding my github.

              • By boothby 2025-09-2211:31

                This makes me a bit sad. Over the years I've posted PRs to several, but not many, repos with a one-off fix, issue or improvement. It's a great opportunity to say hello and thanks to the maintainers.

              • By somehnguy 2025-09-2213:59

                I opened a 1 letter typo fix for NextJS not that long ago and had the same thought run through my mind beforehand. I (obviously) decided to just do it anyway and let people think what they want, who cares.

I know my intention was simply fixing a typo I stumbled on while reading the docs... and the effort level to open a PR to fix it is so low.

            • By krageon 2025-09-2211:43

              I used to do this when I had more free time and I did it because I just enjoy doing it. When I write it down like this I realise it sounds kind of obvious, but here we are

            • By cgh 2025-09-2217:24

              I once submitted a typo fix, among other things, to XFree86 way back when. Talk about love of the game, good grief.

            • By jbd0 2025-09-2214:451 reply

              > PR counts and GitHub activity numbers.

              This used to mean something, but I don't think it does anymore.

              • By seattle_spring 2025-09-2216:28

                I still see a disturbing amount of people claim it does matter a whole lot to them on LinkedIn. Hell, Sam Altman himself made a big deal about someone he knows "committing 100k lines of code per day with AI," as if that code was anything other than complete garbage.

        • By 0x457 2025-09-2219:15

          I think there are humans that watch "how to get rich with chatgpt and hackerone" videos (replace chatgpt and hackerone with whatever affiliate youtuber uses).

          It's MLM in tech.

      • By pjc50 2025-09-2210:481 reply

        The future of everything with a text entry box is AIs shoveling plausible looking nonsense into it. This will result in a rise of paranoia, pre-verification hoops, Cloudflare like agent-blocking, and communities "going dark" or closed to new entrants who have not been verified in person somewhere.

        (The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )

        • By rjdj377dhabsn 2025-09-2210:55

          Even with closed communities, real user accounts will get sold for use by AI.

      • By stronglikedan 2025-09-2214:15

        Don't need a human until someone is ready to pay a bounty!

    • By l5870uoo9y 2025-09-229:572 reply

This reads as an AI-generated response as well, with the "thanks", the "you're right", the flawless grammar, and plenty of technical references.

      • By gryfft 2025-09-2210:371 reply

        I think you might be onto something-- perhaps something from the first sentence of the post to which you are replying.

        • By brap 2025-09-2210:434 reply

          You’re absolutely right, that’s a sharp observation that really gets to the heart of the issue.

          • By InsideOutSanta 2025-09-2211:09

            Thank you! I'm glad you found the observation insightful. It's important to delve deep into the core of an issue to truly understand its implications and potential solutions. If you'd like to explore this further or discuss any other aspects, feel free to share your thoughts!

          • By henrebotha 2025-09-2213:35

            If I were sipping coffee when I read this, I would have had a lot of coffee on my desk right now.

          • By dalmo3 2025-09-2211:061 reply

            The user is now expressing sarcasm.

            • By pelagicAustral 2025-09-2211:333 reply

              You're absolutely right, based on the tenor of the previous message exchange, it is likely that brap is indeed sarcastically responding to gryfft. Do you want me to explain the mechanics of this interaction?

      • By SoKamil 2025-09-2212:332 reply

        Faking grammar mistakes is the new meta of proving that you wrote something yourself.

        Or faking generated content into real one.

        • By crabmusket 2025-09-2213:33

          Providing valuable and accurate information was, is, and will continue to be the "meta".

        • By rpcope1 2025-09-2217:18

          They also don't really use profanity

    • By ToucanLoucan 2025-09-2210:032 reply

      Is it that crazy? He's doing exactly what the AI boosters have told him to do.

      Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.

      In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.

      • By elzbardico 2025-09-2212:391 reply

        I have a long-running interest in NLP, LLMs basically solved or almost solved a lot of NLP problems.

        The usefulness of LLMs for me, in the end, is their ability to execute classic NLP tasks, so I can incorporate a call for them in programs to do useful stuff that would be hard to do otherwise when dealing with natural language.

        But, a lot of times, people try to make LLMs do things that they can only simulate doing, or doing by analogy. And this is where things start getting hairy. When people start believing LLMs can do things they can't do really.

        Ask an LLM to extract features from a bunch of natural language inputs, and probably it will do a pretty good job in most domains, as long as you're not doing anything exotic and novel enough to not being sufficiently represented in the training data. It will be able to output a nice JSON with nice values for those features, and it will be mostly correct. It will be great for aggregate use, but a bit riskier for you to depend on the LLM evaluation for individual instances.

        But then, people ignore this, and start asking on their prompts for the LLM to add to their output confidence scores. Well. LLMs CAN'T TRULY EVALUATE the fitness of their output for any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if sometimes they seem to be able to. If you need to trust it, you'd better have some external mechanism to validate it.
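The external check the comment calls for can be very simple. Below is a minimal sketch in Python; the field names (`sentiment`, `product`, `urgency`) and the 1–5 urgency range are hypothetical, chosen only to illustrate validating an LLM's JSON output against a schema rather than trusting a self-reported confidence score:

```python
import json

# Hypothetical feature schema for illustration: field name -> expected type.
REQUIRED = {"sentiment": str, "product": str, "urgency": int}

def validate_extraction(raw: str) -> dict:
    """Check an LLM's JSON output against the schema instead of
    trusting any confidence score the model reports about itself."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, ftype in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"wrong type for {field}")
    if not 1 <= data["urgency"] <= 5:
        raise ValueError("urgency out of range")
    return data

# A well-formed extraction passes through unchanged...
ok = validate_extraction(
    '{"sentiment": "negative", "product": "router", "urgency": 4}')

# ...while plausible-looking but incomplete output is rejected.
try:
    validate_extraction('{"sentiment": "negative", "urgency": 9}')
    rejected = False
except ValueError:
    rejected = True
```

For individual instances that you need to depend on, this kind of mechanical gate (or a second independent scoring pass) is the trustworthy part; the LLM's own numeric self-assessment is not.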

        • By tantivy 2025-09-2216:271 reply

          I once tasked an LLM with correcting a badly-OCR'd text, and it went beast mode on that. Like setting an animal finally free in its habitat. But that kind of work won't propel a stock valuation :(

          • By jiggawatts 2025-09-237:06

            It's mind-blowing the level of correction a modern LLM can achieve. I had to recover an OCR text that had about 30% of the characters incorrect. The result was 99.9% correct, with just the odd confusion whenever the suffix of a word could be interpreted either way and it picked one at random.

      • By rpcope1 2025-09-2217:25

        So basically a hundred billion dollar industry for just spam and fraud. Truly amazing technological progress.

    • By pizlonator 2025-09-2214:411 reply

      Wait so are we now saying that these AIs are failing the Turing test?

      (I mean I guess it has to mean that if we are able to spot them so easily)

      • By blharr 2025-09-230:09

        You don't spot the ones you don't spot

    • By shadowgovt 2025-09-2215:31

      Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."

      (It's a noise issue, but I find it hard to blame them; not their fault they got born in a part of the world where you don't get autoconfig'd with English and as a result they're on the back-foot for interacting with most of the open source world).

    • By Havoc 2025-09-229:583 reply

Makes me wonder whether the submitter even speaks English.

      • By t0lo 2025-09-2210:061 reply

        AI's other acronym...

        • By akk0 2025-09-2210:094 reply

          You do realize English is one of India's two official languages, I hope?

          • By buttocks 2025-09-2211:34

            French is one of Canada’s. It’s generally spoken poorly in Vancouver.

          • By deadbabe 2025-09-2211:474 reply

            Yea but you can always tell it’s an Indian because they write differently from actual English speakers.

            • By Y_Y 2025-09-2213:452 reply

              Indian English is not only a perfectly good dialect, it's one of the most popular worldwide. It doesn't have the prestige of the King's English, but I'd personally prefer it to some of the other colonies'.

              • By Suppafly 2025-09-245:37

                >Indian English is not only a perfectly good dialect, it's one of the most popular worldwide.

Sure, but a lot of times it's not really Indian English, it's English vocab mixed and matched with grammar rules from other Indian languages like Hindi or Urdu or Bengali. I've been on conference calls where Indians from different regions were speaking mutually unintelligible versions of English and had to act as a translator from English to English.

              • By deadbabe 2025-09-2214:481 reply

                A dialect is not good just because it is popular.

                • By timeon 2025-09-2217:441 reply

                  Does it matter? We are here on American site anyway - not English.

                  • By viridian 2025-09-2316:32

                    You dropped your conjunction.

            • By mock-possum 2025-09-2214:481 reply

              I feel like ‘actual English’ comes off as unnecessarily mean here. There is no ‘actual English’ there are just different regional and cultural variations.

              You may personally like one or another better, you may find some particular varieties easier or harder to understand, but that doesn’t make those people any more or less ‘actual’ English speakers than you are. They are ‘actually’ speaking English, just like you.

              If you wanted to phrase this in a less fraught way, you might say “Yea but you can almost always tell it’s an Indian because they tend to write characteristically distinct from <your nationality> English speakers” -

              and I would agree with you, sentence structure and idioms do usually make it pretty easy to recognize.

              • By deadbabe 2025-09-2216:192 reply

                Actual English is when you speak in the spirit of the language, not just the grammatical and syntactical structures. It should be free of speech patterns from other languages and more assimilated.

                • By Suppafly 2025-09-245:44

                  I think people are downing this because it comes off as if don't have an appreciation for different dialects, but you're making a key point. There are a lot of people that 'speak english' by using english vocab with their native tongue's grammar and that is different (and less intelligible) than speaking a recognized dialect.

                • By kranner 2025-09-2217:05

                  "The spirit of the language" is just a restatement of your original assertion about "actual English", based on what seems an assumed authority to make such a claim.

            • By cortesoft 2025-09-2217:52

              English isn't French, there isn't an 'official version'

            • By GuinansEyebrows 2025-09-2216:32

              > Yea but you can always tell it’s an Indian because they write differently from actual English speakers.

              to what end do you employ this analysis?

          • By zahlman 2025-09-2223:091 reply

            I've been keeping a file of samples of unusually poor English I encounter in technical programming forums etc. It's almost entirely from people with Indian names. Over decades of experience I've come to notice patterns in how certain native languages inform specific common errors (for a trivial example, native German speakers typo "und" for "and" all the time, even if they have many years of experience with English and are otherwise fluent).

But many of the samples I've seen from Indians (I don't know what their native languages are exactly, and fully admit I wouldn't be able to tell them apart) in the last few years are quite frankly on a whole other level. They're barely intelligible at all. I'm not talking about the use of dialectal idioms like "do the needful" or using "doubt" where UK or US English speakers would use "question". All of that is fine, and frankly not difficult to get used to.

            I'm talking about more or less complete word salad, where the only meaning I can extract at all is that something is believed to have gone wrong and the OP is desperate for help. It comes across that they would like to ask a question, but have no concept of QUASM (see e.g. https://www.espressoenglish.net/an-easy-way-to-form-almost-a...) whatsoever.

            I have also seen countless cases where someone posted obvious AI output in English, while having established history in the same community of demonstrating barely any understanding of the language; been told that this is unacceptable; and then appeared entirely unable to understand how anyone else could tell that this was happening. But I struggle to recall any instance where the username suggested any culture other than an Indian one (and in those cases it was an Arabic name).

            To be clear, I am not saying that this is anything about the people or the culture. It's simple availability bias. Although China has a comparable population, there's a pretty high bar to entry for any Chinese nationals who want to participate in English-speaking technical forums, for hopefully obvious reasons. But thanks to the status of an English dialect as an official language, H1B programs etc., and now the ability to "polish" (heavy irony) one's writing with an LLM, and of course the raw numbers, the demographics have shifted dramatically in the last several years.

            • By viridian 2025-09-2317:24

              My observations largely match your own, and also applies more generally to non-technical interactions online. I help manage a group that runs a local LAN, and have run into both the general language issues, and people making long, incomprehensible requests that have major LLM smells.

              I don't think it's just availability bias however, I think it's mostly a case of divergent linguistic evolution. In terms of the amount of people who speak English at an A level, India has the largest English speaking population in the world. With that, and a host of other native languages, came a rapid divergence from British English as various speech patterns, idioms, etc, are subsumed, merged, selectively rejected, and so on.

              The main reason you don't see divergence to the same extent in other former colonies, even older colonies like Canada and the US, is that the vast majority of the colonists spoke English as a primary language.

          • By throawayonthe 2025-09-2210:281 reply

What are they even referring to? What does AI stand for in relation to India?

            • By jsheard 2025-09-2210:303 reply

              "Actually Indians" was coined to refer to "AI" products which turn out to be outsourced human labor in disguise. Builder.ai was the most infamous example.

              • By ceejayoz 2025-09-2213:33

                Not the biggest example; Amazon pulled the same trick.

                https://www.theguardian.com/commentisfree/2024/apr/10/amazon...

              • By Conlectus 2025-09-2212:44

                The idea that Builder.ai was Indian workers being sold as AI wasn’t true, by the way. That was made up by a crypto influencer on twitter and copied by sloppy news sites. They were a consulting firm that also sold an AI product, with the two clearly separated.

              • By kspacewalk2 2025-09-2212:432 reply

                It's amazing to me that the human-labour-in-disguise thing was first reported in 2019, but the company only went bankrupt in 2025.

                • By rchaud 2025-09-2215:26

                  The PowerPoints that sold investors on the company were written and discussed by humans.

      • By mda 2025-09-2210:48

        Probably yes, but not as smooth and eloquent as the AI they use.

      • By unmole 2025-09-2210:351 reply

        The username sounds Turkish. Make what you will of it.

        • By dansmith1919 2025-09-2210:37

          So... nothing? Because I'm also not from an English speaking country and I speak English.

    • By dansmith1919 2025-09-229:534 reply

At some point they told ChatGPT to put emojis everywhere, which is also a dead giveaway on the original report that it's AI. They're the new em dash.

      • By rasz 2025-09-2216:11

        You dont even have to instruct it for emojis, it does it on its own. printf with emoji is an instant red flag

      • By jcul 2025-09-238:53

        It loves to put emojis in print statements, it's usually a red flag for me that something is written by AI.

      • By listic 2025-09-2210:291 reply

        What was it with em dash?

        • By Ralfp 2025-09-2210:335 reply

People usually don't type an em dash; they just use the regular dash (minus sign) they already have on the keyboard. ChatGPT uses the em dash instead.

          • By Wowfunhappy 2025-09-2210:402 reply

            Ahem.

            https://www.gally.net/miscellaneous/hn-em-dash-user-leaderbo...

            As #9 on the leaderboard I feel like I need to defend myself.

            • By alchemist1e9 2025-09-2211:046 reply

I’m guessing this list is dominated by Mac users, who all got taught the em dash somewhere similar or for similar reasons; it’s only easy to type on a Mac. But I wonder what the second most common influence among its users is?

              • By fao_ 2025-09-2211:472 reply

                On Linux I just type (in sequence):

                compose - -

                and it makes an em dash, it takes a quarter of a second longer to produce this.

                I don't know why the compose key isn't used more often.
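For anyone curious, these sequences are just entries in the X11 Compose files, and you can define your own in `~/.XCompose` (a sketch; `include "%L"` pulls in the locale defaults first, and `emdash`/`endash` are the standard X11 keysym names):

    include "%L"

    <Multi_key> <minus> <minus> <minus>  : "—"  emdash  # EM DASH
    <Multi_key> <minus> <minus> <period> : "–"  endash  # EN DASH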

                • By crabmusket 2025-09-2213:343 reply

                  [As an English typer] Where is this compose key on my keyboard?

                  (This is a vaguely Socratic answer to the question of why the compose key is not more often used.)

                  • By fao_ 2025-09-2221:50

                    As per the wiki article someone else listed — the compose key was available on keyboards back in the 1980s (notably it was invented only 5 years after the Space Cadet keyboard was invented!).

                    Some DOS applications did have support for it. The reason it wasn't included is baffling, and it's especially baffling to me that other operating systems never adopted it, simply because

                        compose a '
                    
                    is VASTLY more user friendly to type than:

                        alt-+
                        1F600
                    
yet I have met some Windows users who memorize that combo for things like the copyright symbol (which is simply:)

                        compose o c

                  • By WhyNotHugo 2025-09-2215:481 reply

                    It’s not mapped to any key by default. A common choice is the right alt key.

                    I wrote a short guide about it last year: https://whynothugo.nl/journal/2024/07/12/typing-non-english-...

                    • By capitainenemo 2025-09-2221:29

                      My personal preference is the capslock key. I'm not using it for anything anyway

                  • By layer8 2025-09-2214:46

                    In Vim it's Ctrl+K. ;)

                • By whilenot-dev 2025-09-2211:591 reply

The compose key feels mandatory for anyone who wants to type their native language on a US-English layout. The combination[0] is "Compose--." though: –

                  [0]: https://en.wikipedia.org/wiki/Compose_key#Common_compose_com...

                  • By teddyh 2025-09-2213:372 reply

                    “Compose--.” produces an en dash, not an em dash. An em dash is produced by “Compose---”.

                    Source:

                      grep -e DASH /usr/share/X11/locale/*/Compose

                    • By mock-possum 2025-09-2214:511 reply

                      As it should be. I wish this convention were present across more software, “-“ “- -“ and “- - -“ should be the UI norm for entering proper dashes in text input controls.

                      • By WhyNotHugo 2025-09-2215:49

                        Most software handles this fine if you configure your compositor to use a compose key.

                    • By fao_ 2025-09-2221:44

                      Whoops, yep that's the one

              • By Freak_NL 2025-09-2211:301 reply

                This is a misconception which keeps getting repeated. It's easy to use an em-dash on any modern Linux desktop as well (and in a lot of other places).

                • By chrismorgan 2025-09-2212:033 reply

                  Though it does still require nominating a key to map to Compose. And is not generally meaningfully documented. So I’d only call it easy for the sorts of people that care enough to find it.

                  But then, long before I had a Compose key, in my benighted days of using Windows, I figured out such codes as Alt+0151. 0150, 0151, 0153, 0169, 0176… a surprising number of them I still remember after not having typed them in a dozen years.

                  • By stn8188 2025-09-2212:24

                    In electrical engineering I'm still using a few alt codes daily, like 248 (degree sign), 234 (Omega), 230 (mu), and 241 (plus or minus). I'd love to add 0151 to the repertoire, but I don't want people to think I used AI to write stuff....

                  • By 1718627440 2025-09-235:44

                    I've never bothered to read about the compose key, but en/em-dash is accessible (in Debian) with AltGr-(Shift)-Hyphen/Minus too. Copyright (©) is AltGr-Shift-C.

                  • By dolmen 2025-09-2212:19

I miss the numeric keypad (gone on laptops) for being able to properly type my last name with its accented letter.

              • By nick__m 2025-09-2211:111 reply

Android — keyboard – good for en dash too!

                • By capitainenemo 2025-09-2213:57

                  My favourite android keyboard has a compose key and also a lot of good defaults in long touch on keys (including en and em under dash). Only downside is last android update causes the keyboard to be overlapped in landscape mode. A problem with a number of alternative keyboards out there. https://github.com/klausw/hackerskeyboard/issues/957

              • By 0x457 2025-09-2219:20

                It's just em dash is the correct symbol, and typing it on Mac is simple: `cmd + -`

                You can tell if I'm using mac or not for specific comment by the presence of em dash.

              • By throwup238 2025-09-2211:553 reply

                Or, you know — iOS. That’s huge marketshare for a keyboard that automatically converts -- to —

                • By redwall_hp 2025-09-2213:551 reply

                  Or Microsoft word. Many common tools in different contexts make it easy to do.

                  As it turns out, the differentiator is the level of literacy.

                  • By mock-possum 2025-09-2214:56

                    And whether the user cares to ‘write properly’ to boot. I love using dashes to break up sentences - but I rarely take the time to use the proper dashes, unless I’m writing professionally. I treat capitalization the same way - I rarely capitalize the first letter of a paragraph. I treat ‘rules’ like that as typographic aesthetic design conventions - optional depending on context.

                • By alchemist1e9 2025-09-2212:17

                  That probably explains everything from a statistical perspective about this em dash topic. I didn’t know that — Thanks.

                • By Philadelphia 2025-09-2221:39

                  You can also hold down the hyphen key and select it from the popup menu. En dash lives there, too.

              • By duncan_britt 2025-09-2213:39

                In emacs, Ctr-x 8 <return> is how i type it. Pretty easy.

            • By WhyNotHugo 2025-09-2215:471 reply

              I’m disappointed that I’m on it — I’ll have to try harder.

              • By Wowfunhappy 2025-09-2219:49

                You'd need a time machine, it only tracks prior to the release of ChatGPT.

          • By LorenDB 2025-09-2211:35

            Microsoft Word at least used to autocorrect two dashes to a single em dash, so I have plenty of old Word documents kicking around with em dashes.

          • By o1o1o1 2025-09-2211:031 reply

            I recently learned to use Option + Shift + `-` (dash) on macOS to type it and use it since then because somebody smarter than me told me that this is the correct one to use (please correct them if you know better :D).

            • By 1718627440 2025-09-235:45

              Same on GNU/Linux(Debian), except Option is called AltGr.

          • By MangoToupe 2025-09-237:00

            I've been typing "—" since middle school 25 years ago. It's trivial on a mac and always has been (at least since OSX, not sure about classic). Some folks are just too narrow-minded to give others the benefit of the doubt.

          • By jiggawatts 2025-09-237:00

            iDevices (and maybe MacOS too?) correct various dashes to the Unicode equivalents. Double dash seems to get converted to em-dash automatically.

      • By badgersnake 2025-09-229:573 reply

        Some people actually do that on Github too. Absolute psychopaths.

        • By jsheard 2025-09-2210:006 reply

          I think the JS/Node scene was the pioneer in spamming emojis absolutely everywhere, well before AI. Maybe that's where the models picked it up from.

          • By hedora 2025-09-2214:46

            Remember, if you’re going to do this, also make liberal use of ansi codes.

            Make sure terminal detection is turned off, and, for god’s sake, don’t honor the NO_COLOR environment variable.

            Otherwise, people will be able to run your stuff in production and read the logs.
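(For the record, honoring the convention is tiny. A minimal Python sketch of the usual check, per no-color.org plus a TTY test:)

```python
import os
import sys

GREEN, RESET = "\033[32m", "\033[0m"

def use_color(stream=sys.stdout) -> bool:
    # NO_COLOR convention: any non-empty value disables color entirely.
    if os.environ.get("NO_COLOR"):
        return False
    # Don't emit ANSI escapes when output is piped, e.g. into a log file.
    return stream.isatty()

def status(msg: str) -> str:
    return f"{GREEN}{msg}{RESET}" if use_color() else msg
```

With NO_COLOR set, or stdout redirected, `status("ok")` is plain `ok`; on an interactive terminal it comes out green.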

          • By dvfjsdhgfv 2025-09-2210:081 reply

            I'm a bit ashamed to say that, after using various ASCII symbols (for progress, checkmarks etc.) in the 90s and early 2000s, when I first discovered we can actually put special Unicode characters on the terminal and it will be rendered almost universally in a similar way, it was like discovering an unknown land.

            While rockets and hearts seem more like unnecessary abuse, there are a few icons that really make sense in CLI and TUI programs, but now I'm hesitant to use them as then people who don't know me get suspicious it could be AI slop.

            • By elzbardico 2025-09-2210:20

              I absolutely love the checkmark and crossmark emojis for use in scripts. but I think they are visual garbage in logs.

          • By pjmlp 2025-09-2210:062 reply

            I really hate all those CLI applications and terminal configurations that look like circus came to town.

            • By henrebotha 2025-09-2210:362 reply

              I don't love emojis for this purely because they're graphically inconsistent; I can't style them with my terminal font or colour scheme. But I'm a huge fan of using various (single-width) unicode chars with colour to make terminal output a lot easier to parse, visually. Colour and iconography are extremely useful.

              • By JdeBP 2025-09-2211:182 reply

                Hieroglyphics are vastly underused.

                    𓂫 ~ 𓃝 JdeBP𓆈localhost 𓅔 %                                𓅭 pts/0

                • By rvnx 2025-09-2212:041 reply

                  Love it, first time I see that online on forums (genuinely). Gives ideas for Reddit posts

                  • By hedora 2025-09-2214:42

                    U+130B9 is probably a good one to start with over there.

                    (Nsfw)

                • By hooverd 2025-09-2320:10

                  what isn't in the unicode standard these days???

            • By jiggawatts 2025-09-237:02

              It's the same thing as naming your servers Titan and Cerberus, using garish RGB LEDs on every computer part (in a glass case of course), and having a keyboard that looks like a disco.

          • By userbinator 2025-09-233:37

            The more vapid parts of social media also seem to have plenty of emoji floods, and I suspect that also made it into the training data for ChatGPT and others.

          • By noosphr 2025-09-2211:121 reply

That's because UTF-8 was such an absolute mess in JS that an emoji working in your code was a flex.

Sane languages have much less of this problem, but the damage was done by the cargo cultists.

Much like how curly braces in C are placed because back in the day you needed your punch card deck to be editable, but we got stuck with it even after we started using screens.

            • By delecti 2025-09-2213:041 reply

> Much like how curly braces in C are placed because back in the day you needed your punch card deck to be editable, but we got stuck with it even after we started using screens.

              Can you expand on this? What do curly braces have anything to do with punch card decks being editable? What do screens?

              • By noosphr 2025-09-2215:112 reply

Each punch card was its own line of text.

By putting the final curly brace on its own card, and hence line, it meant you could add lines to blocks without having to change the old last line.

                E.g. the following code meant you only had to type a new card and insert it.

                     for(i=0;i<10;i++){         /* Card 1 */
                          printf("%d ", i);     /* Card 2 */
                     }                          /* Card 3 */
                
                     for(i=0;i<10;i++){         /* Card 1 */
                          printf("%d ", i);     /* Card 2 */
                          printf("%d\n", i*i);  /* Card 3 */
                     }                          /* Card 4 */
                
                But for following had to edit and replace an old card as well.

                     for(i=0;i<10;i++){         /* Card 1 */
                          printf("%d ", i);}    /* Card 2 */
                     
                     for(i=0;i<10;i++){         /* Card 1 */
                          printf("%d ", i);     /* Card 2' */
                          printf("%d\n", i*i);} /* Card 3 */
                
                This saved a bit of typing and made errors less likely.

                • By jcranmer 2025-09-2216:431 reply

                  I'm dubious of this explanation because C itself largely postdates punched cards as a major medium of data storage, and some quick searches doesn't produce any evidence of people using punch cards with C or Unix.

                  • By noosphr 2025-09-2222:24

                    Ed was also line oriented.

                    Using regex to edit lines instead of typing them out was a step up, but not much of one.

                    Also my father definitely had C punch cards in the 80s.

          • By JustFinishedBSG 2025-09-2210:151 reply

            "FastThingJS: A blazing fast thing library for humans . Made with on "

        • By raincole 2025-09-2210:121 reply

          It was far before ChatGPT. I remember once on a Show HN post I commented something along the line with "The number of emoji in README makes it very hard for me to take this repo seriously" and my comment got (probably righteously) downvoted to dead.

          • By ffsm8 2025-09-2210:42

            I think I remember exactly what you're talking about, even though I completely forgot what software it was.

I believe it was technical documentation, and the author wanted to create visual associations with the actors in the given example: a clock for the async ordering process, a (food) order, a burger, etc.

            I don't remember if I commented on the issue myself, but I do remember that it reduced readability a lot - at least for me.

    • By lumost 2025-09-2214:51

      Was this all actually an agent? I could see someone making the claim that a security research LLM should always report issues immediately from an ethics standpoint (and in turn acquire more human generated labels of accuracy).

      To be clear, I personally disagree with AI experiments that leverage humans/businesses without their knowledge. Regardless of the research area.

    • By BoredPositron 2025-09-2210:091 reply

      It's an n8n bot without user input. If you Google the username you'll find a GitHub full of agent stuff.

    • By Lerc 2025-09-2215:40

      I felt like it was more likely to be a complete absence of a human in the loop.

    • By belter 2025-09-2210:332 reply

      Crazy on how the current 400 Billion AI bubble is based on this being feasible...

      • By koolba 2025-09-2210:411 reply

        The rationale is that the AI companies are selling the shovels to both generate this pile as well as the ones we'll need to clean it up.

        • By whstl 2025-09-2213:121 reply

          I vividly remember the image of one guy digging a hole and another filling it with dirt as a representation of government bureaucracy and similar. Looks like office workers are gonna have the same privilege.

          • By koolba 2025-09-2213:241 reply

            > I vividly remember the image of one guy digging a hole and another filling it with dirt as a representation of government bureaucracy and similar.

            To be clear, it wasn’t dirt that I envisioned being shoveled.

            • By whstl 2025-09-2219:14

              Oh, no confusion from my side!

      • By pjc50 2025-09-2210:49

        And on externalizing costs - the actual humans who have to respond to bad vulnerability report spam.

    • By jonplackett 2025-09-2216:22

      Do you think it’s a person doing it? When I saw that reply I though maybe it’s a bot doing the whole thing!

    • By dolmen 2025-09-2212:08

      I think we are now beyond just copy-pasting. I guess we are in the era where this shit is full automated.

    • By ainiriand 2025-09-2212:571 reply

      Is this for internet points?

      • By filcuk 2025-09-2214:50

        If it's an individual, it could be as simple as portfolio cred ('look, I found and helped fix a security flaw in this program that's on millions of devices ')

    • By zzzeek 2025-09-2213:13

      why assume someone is copy-pasting and didn't just build a bot to "report bugs everywhere" ?

    • By chinathrow 2025-09-2210:0914 reply

      The '—' gave it away. No one types this character on purpose.

      • By jaymzcampbell 2025-09-2210:151 reply

        I really loved how easy MacOS made these (option+hypen for en, with shift for em), so I used to use them all the time. I'm a bit miffed by good typography now being an AI smell.

        • By shagie 2025-09-2214:55

          On MacOS (and I have this disabled since I'm not infrequently typing code and getting an — where I specced a - can be not fun to debug)...

          Right click in the text box, and select "Substitutions". Smart dashes will replace -- with — when typed that way. It can also do smart quotes to make them curly... which is even worse for code.

          (turning those on...)

          It is disappointing that proper typography is a sign of AI influence… (wait, that’s option semicolon? Things you learn) though I think part of it is that humans haven’t cared about proper typography in the past.

      • By sevg 2025-09-2210:211 reply

        Just because you don’t, doesn’t mean other people don’t. Plenty of real humans use emdash. You probably don’t realise that on some platforms it’s easy to type an emdash.

        • By mwigdahl 2025-09-2212:151 reply

          In Office apps on Windows just type two hyphens and then a word afterwards and it will autoconvert to an em-dash.

      • By kstrauser 2025-09-2210:563 reply

        And where did you suppose AIs learned this, if not from us?

Turns out lots of us use dashes — and semicolons! And the word “the”! — and we're not going to stop just because others don't like punctuation.

        • By exe34 2025-09-2211:181 reply

          I'm starting to wonder if there's a real difference between the populations who use em dashes and those who think it's a sign of AI. The former are the ones who write useful stuff online, which the AIs were trained on, and the latter are the consumers who probably never paid attention to typography and only started commenting on dashes after they became a meme on LinkedIn.

        • By pessimizer 2025-09-2212:422 reply

          I find it disturbing that many people don't seem to realize that chatbot output is forced into a strict format that it fills in recursively, because the patterns that LLMs recognize are no longer than a few paragraphs. Chatbots are choosing response templates based on the type of response that is being given. Many of those templates include unordered lists, and the unordered list marker that they chose was the em-dash.

          If a chatbot had to write freely, it would be word salad by the end of the length of the average chatbot response. Even its "free" templates are templates (I'm sure stolen from the standard essay writing guides), and the last paragraph is always a call to further engagement.

          Chatbots are tightly designed dopamine dispensers.

          edit: even weirder is people who think they use em-dashes at the rate of chatbots (they don't) even thinking that what they read on the web uses em-dashes at the rate of chatbots (it doesn't.) Oh, maybe in print? No, chatbots use them more than even Spanish writing, and they use em-dashes for quotation marks. It's just the format. I'm sure they regret it, but what are they going to replace them with? Asterisks or en-dashes? Maybe emoticons.

          • By kstrauser 2025-09-2223:58

            All that may be true. Let’s assume for argument that it is. I’ve had people call out my own handwritten, zero-AI comments (which are 100% of them) as likely to be AI because I used proper grammar, common punctuation, and a bullet list.

            To me, “ah ha, gotcha, AI wrote this!” comments are more common and tedious than the AI-augmented comments themselves.

          • By Philadelphia 2025-09-2221:46

            Do you have a pointer to documentation on that, or a keyword to google? Would like to find out more.

        • By birjokduf 2025-09-2211:12

          Books use it more liberally, internet writings not so much. Also some languages are much more prone to using it while some practically never use it

      • By ceejayoz 2025-09-2210:123 reply

        The AI is trained on human input. It uses the dash because humans did.

        • By arthens 2025-09-2210:441 reply

          I'm skeptical this is the reason:

          - Chatgpt uses mdashes in basically every answer, while on average humans don't (the average user might not even be aware it exists)

          - if the preference for em dashes came from the training set, other AIs would show the same bias (gemini and Le chat don't seem to use them at all)

          • By ceejayoz 2025-09-2213:36

            > Chatgpt uses mdashes in basically every answer, while on average humans don't

            I would not be shocked if an aspect to training is bucketing "this is an example of good writing style" into a specific category. Published books - far more likely to have had an editor sprinkle in fancy stuff - may be weightier for some aspects.

My iPhone converts -- to — automatically. So does Google Docs / Gmail (although I'm not certain if that's on their end or my Mac's auto-correct kicking in). Plenty of them out there.

            > other AIs would show the same bias

            Unless they've been trained not to use it, now that a bunch of non-technical people believe "emdash = AI, always".

        • By pessimizer 2025-09-2212:491 reply

          Is that why it uses colorful emoticons, too? Was it trained on Onlyfans updates?

          • By ceejayoz 2025-09-2213:08

            It was trained on everything they could get their hands on.

            Yes, it uses emoticons because human writers sometimes use emoticons.

        • By chinathrow 2025-09-2210:143 reply

          Yeah but a dash, at least on my keyboard is a '-', not the one quoted above.

          • By Ndymium 2025-09-2210:272 reply

            En and em dashes are easily accessible on both my laptop's and phone's keyboard layouts and I like using them, just like putting the ö in coöperate. It's sad if this now makes me look like a robot and I have to use the wrong dashes to be more "human".

            • By unwind 2025-09-2211:182 reply

              TIL that some people spell cooperate with an "ö".

              As a Swedish native it really breaks my reading of an English word, but apparently it's supposed to indicate that you should pronounce each "o" separately. Language is fun.

              • By cap11235 2025-09-2211:212 reply

                As a native English speaker, it also breaks my reading of "cooperate". Never seen it before. I think parent is just annoyingly eccentric for the sake of it.

                • By anonymars 2025-09-2212:08

                  Most commonly seen in naïve, and the New Yorker

                • By Ndymium 2025-09-233:29

                  I admit that latter part is just for whimsy, because I think it looks fun. The dashes I like for their aesthetics and if that makes me eccentric then so be it. They shouldn't distract anyone's reading, or at least they didn't use to before LLMs.

              • By Freak_NL 2025-09-2211:323 reply

                Using umlauts to signal that a vowel is pronounced separately is common in a number of languages (like Dutch).

                • By unwind 2025-09-2211:57

                  Yeah, I know.

                  It's just confusing for us poor Swedes since "ö" in Swedish is a separate letter with its own pronunciation, and not a somehow-modified "o". Always takes an extra couple of seconds to remember how "Motörhead" is supposed to be said. :)

                • By inejge 2025-09-2213:23

                  That kind of use technically makes it a diaeresis, not an umlaut.

                • By 1718627440 2025-09-235:56

                  But it's not used as an Umlaut here, that's exactly what's confusing. Here this is used as a trema/diaeresis.

            • By jnwatson 2025-09-2212:10

              Em dashes are widely used. The diaeresis is only used in The New Yorker and those that copied their style.

          • By justusthane 2025-09-2214:13

            If you’re using the dash on your keyboard (which is a “hyphen–minus” character) in place of a en dash or em dash, then you are using the wrong character. That’s fine — it’s certainly more convenient, and I wouldn’t call you out on it — but it’s silly to assume that other people don’t use the correct characters.

            https://www.grammarly.com/blog/punctuation-capitalization/da...

          • By ceejayoz 2025-09-2210:16

            If I type two dashes—like this—my phone changes it into a special character. Same for three dots…

      • By ulimn 2025-09-2210:11

Or at least not anymore, since this became the number 1 sign that a text was written with AI. Which is a bit sad imo.

      • By yreg 2025-09-2210:113 reply

        I do all the time, but might have to stop. Same with `…`.

        • By henrebotha 2025-09-2210:37

          Don't let them win. Stand proud with your "–" and your "—" and your "…" and your "×".

        • By python-b5 2025-09-2212:51

          I dislike the ellipsis character on its own merits, honestly. Too scrunched-up, I think - ellipses in print are usually much wider, which looks better to me, and three periods approximates that more closely than the Unicode ellipsis.

        • By acheron 2025-09-2216:36

          In the words of Michael Bolton, "Why should I change? He's the one who sucks."

      • By vagrantJin 2025-09-2214:56

        That got a giggle out of me. Not entirely relevant but AI tends to be overzealous in its use of emojis and punctuation, in a way people almost never do (too cumbersome on desktop where majority of typing work is done)

      • By viridian 2025-09-2317:28

Academia certainly does, although, humorously, we also have professors making the same proclamation you do while using en or em dashes in their syllabi.

      • By _fizz_buzz_ 2025-09-2212:02

        I started using hyphens a few years ago. But now I had to stop, because AI ruined it :(

      • By johnisgood 2025-09-2210:22

        Keep in mind that now that people know what to pay attention to: em-dash, emojis, etc. they will instruct the LLM to not use that, so yeah.

      • By easton 2025-09-2210:12

        Two dashes on the Mac or iOS do it unless you explicitly disable it, I think.

      • By Balinares 2025-09-2211:41

        I absolutely bloody do -- though more commonly as a double dash when not at the keyboard -- and I'm so mad it was cargo-culted into the slop machines as a superficial signifier of literacy.

      • By jrimbault 2025-09-2210:11

        I used to.

  • By rpigab 2025-09-229:535 reply

    "I heard you were extremely quick at math"

    Me: "yes, as a matter of fact I am"

    Interviewer: "Whats 14x27"

    Me: "49"

    Interviewer: "that's not even close"

    me: "yeah, but it was fast"

    • By jtwaleson 2025-09-2210:137 reply

      There should be a language that uses "Almost-In-Time" compilation. If it runs out of time, it just gives a random answer.

      • By layer8 2025-09-2215:00

        "Progressive compilation" would be more fun: The compiler has a candidate output ready at all times, starting from a random program that progressively gets refined into what the source code says. Like progressive JPEG.

      • By phinnaeus 2025-09-2210:181 reply

        Best I can do is a system that gives you a random answer no matter how much time you give it.

        • By zelphirkalt 2025-09-2210:261 reply

          Great! 80-20, Pareto principle, we're gonna use that! We are as good as done with the task. Everyone take phinnaeus as an example. This is how you get things done. We move quickly and break things. Remember our motto.

      • By kleiba 2025-09-2218:231 reply

        This might be a similar but possibly more sensible approach? -> https://en.wikipedia.org/wiki/Anytime_algorithm

        • By jtwaleson 2025-09-2218:57

          Yes, the way I described it is actually a sensible approach to some problems.

          "Almost-in-time compilation" is mostly an extremely funny name I came up with, and I've trying to figure out the funniest "explanation" for it for years. So far the "it prints a random answer" is the most catchy one, but I have the feeling there are better ones out there.

      • By philipwhiuk 2025-09-2214:22

        When you get the wrong answer you can just say 'ah yes, the halting problem'

      • By mhuffman 2025-09-2215:541 reply

        You should send a pull request to DreamBerd/Gulf of Mexico[0], it's surely the only language that can handle it properly!

        [0]https://github.com/TodePond/GulfOfMexico

        • By jtwaleson 2025-09-2217:18

          Hilarious. I will actually do that :)

      • By fer 2025-09-2212:49

        Soft real time systems often work like that. "Can't complete in time, best I can do is X".

      • By bicepjai 2025-09-2315:37

        AIighT

    • By donohoe 2025-09-2211:423 reply

    function getRandomNumber() {
      return 4  // chosen by fair dice roll. guaranteed to be random.
    }

    • By nenenejej 2025-09-2210:08

      The lowest latency responses in my load tests is when something went wrong!

    • By poszlem 2025-09-2210:54

      https://www.youtube.com/watch?v=4SI3GiPihQ4

      “Is this your card?”

      “No, but damn close, you’re the man I seek”

    • By kqr 2025-09-2312:04

      This is one of my favourite images from a long-defunct proto-meme blog: https://entropicthoughts.com/image/doesntworkbutfast.jpg

  • By misnome 2025-09-229:407 reply

    I wonder where the balance of “Actual time saved for me” vs “Everyone else's time wasted” lies in this technological “revolution”.

    • By simsla 2025-09-2211:083 reply

      Agreed.

      I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.

      If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.

      I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.

      • By palmotea 2025-09-2213:15

        > If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.

        And then add to that the pressure to majorly increase velocity and productivity with LLMs, that becomes less practical. Humans get squeezed and reduced to being fall guys for when the LLM screws up.

        Also, Humans are just not suited to be the monitoring/sanity check layer for automation. It doesn't work for self-driving cars (because no one has that level of vigilance for passive monitoring), and it doesn't work well for many other kinds of output like code (because often it's a lot harder to reverse-engineer understanding from a review than to do it yourself).

      • By M2Ys4U 2025-09-22 13:18

        >but there needs to be a human in the loop.

        More than that - there needs to be a competent human in the loop.

      • By joquarky 2025-09-23 3:24

        We're going from being writers to editors: a particular human must still ultimately be responsible for signing off on the work, regardless of how it was put together.

        This is also why you don't have your devs do QA. Someone has to be responsible for, and focused specifically on, quality; otherwise responsibility will dissolve into finger-pointing.

    • By stahorn 2025-09-22 9:53 (6 replies)

      You're doing it wrong: You should just feed other peoples AI-generated responses into your own AI tools and let the tool answer for you! The loop is then closed, no human time wasted, and the only effect is wasted energy to run the AI tools. It's the perfect business model to turn energy into money.

      • By jsheard 2025-09-22 9:57 (6 replies)

        You joke, but some companies are pushing this idea unironically by putting "use AI to expand a short message into a bloated mess" and "use AI to turn a bloated mess into a brief summary" into both sides of the same product. Good job everyone, we've invented the opposite of data compression.

        • By throwaway0236 2025-09-22 11:13

          Great cartoon with comment about this problem:

          https://marketoonist.com/2023/03/ai-written-ai-read.html

        • By taneq 2025-09-22 10:31

          We could call it “bsencode”.

        • By tsimionescu 2025-09-22 10:02

          The next HTTP standard should include `Transfer-Encoding: polite` for AI-enabled servers and user agents.

        • By palmotea 2025-09-22 13:20

          Sadly, it might not be ironic. I've encountered many people (particularly software engineers and other tech bros) who assume most written language is mostly BS/padding, and that the only real information there is what you can get from a concise summary or list of bullet points.

          It's the kind of incuriosity that comes from the arrogance of believing you're very smart while actually being quite ignorant.

          So it sounds like one of those guys took their misunderstanding and built and sold tools founded on it.

        • By Groxx 2025-09-22 15:57

          of course they are. that way they can sell both the shovels and the shit.

      • By q3k 2025-09-22 12:39 (1 reply)

        Two economists are walking in a forest when they come across a pile of shit. The first economist says to the other “I’ll pay you $100 to eat that pile of shit.” The second economist takes the $100 and eats the pile of shit.

        They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats the pile of shit.

        Walking a little more, the first economist looks at the second and says, "You know, I gave you $100 to eat shit, then you gave me back the same $100 to eat shit. I can't help but feel like we both just ate shit for nothing."

        "That's not true", responded the second economist. "We increased the GDP by $200!"

      • By globular-toast 2025-09-23 7:06

        I invented a new technique that cuts down on the AI bill. I call it "just send me the prompt": https://blog.gpkb.org/posts/just-send-me-the-prompt/

      • By jiqiren 2025-09-22 10:00

        That's still a huge waste of time and resources. Rather, Daniel has focused on promoting good use of AI that has yielded good results for curl: https://mastodon.social/@bagder/115241241075258997 https://joshua.hu/llm-engineer-review-sast-security-ai-tools...

      • By dandanua 2025-09-22 10:00 (1 reply)

        And then an alien civilization will wonder how humans went extinct.

    • By miroljub 2025-09-22 10:09 (1 reply)

      Wasting other people's time is a net positive, meaning jobs won't be lost, since some human still needs to make sense of the AI-generated rubbish.

      • By VladVladikoff 2025-09-22 14:05

        Isn’t curl open source? I was under the impression that they are all volunteers. This isn’t a net positive. It will burn out the good-willed programmers and be a net negative for OSS.

    • By sanex 2025-09-22 12:30

      This is not unique to AI tools. I've seen it with new expense tools that are great for accounting but terrible to use, with contract review processes that make things easier for legal, or with infosec review of a SaaS tool that everyone and their uncle already uses. It's always natural to push all the work off to someone else, because it feels like you saved time.

    • By iLoveOncall 2025-09-22 12:30

      Yeah, when reviewing code nowadays, once I'm 5-10 comments in and it becomes obvious it was AI-generated, I tell them to go fix it and that I'll review it after. The time waste is insane.

    • By zaik 2025-09-22 12:33

      How much time did they save if they didn't find any vulnerability? They just wasted someone's time and nothing else.

    • By duxup 2025-09-22 12:30

      Arguably that's been a part of coding for a long time ...

      I spend a lot of time doing cleanup for a predecessor who took shortcuts.

      Granted, I'm agreeing; just saying the methods / volume may have changed.

HackerNews