Don't post generated/AI-edited comments. HN is for conversation between humans

2026-03-11 19:29 · 41711645 · news.ycombinator.com


Hacker News Guidelines

What to Submit

On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic.

In Submissions

Please don't do things to make titles stand out, like using uppercase or exclamation points, or saying how great an article is.

Please submit the original source. If a post reports on something found on another site, submit the latter.

Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.

If the title includes the name of the site, please take it out, because the site name will be displayed after the link.

If the title contains a gratuitous number or number + adjective, we'd appreciate it if you'd crop it. E.g. translate "10 Ways To Do X" to "How To Do X," and "14 Amazing Ys" to "Ys." Exception: when the number is meaningful, e.g. "The 5 Platonic Solids."

Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

If you submit a video or pdf, please warn us by appending [video] or [pdf] to the title.

Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.

Please don't delete and repost. Deletion is for things that shouldn't have been submitted in the first place.

Don't solicit upvotes, comments, or submissions. Users should vote and comment when they run across something they personally find interesting—not for promotion.

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.

Don't post generated comments or AI-edited comments. HN is for conversation between humans.

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Eschew flamebait. Avoid generic tangents. Omit internet tropes.

Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

Please don't use Hacker News for political or ideological battle. It tramples curiosity.

Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead.

Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.

Please don't use uppercase for emphasis. Instead, put *asterisks* around it and it will get italicized. More formatting info here.

Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

Don't feed egregious comments by replying; flag them instead.

Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.



Comments

  • By kjuulh 2026-03-11 22:35 · 6 replies

    I am 100% behind this. I've been browsing Hacker News since I started in tech; it is the only forum I regularly browse and partake in, simply because the quality of submissions and conversations is so high. There have been more AI-related articles this past year, and it only seems to be ramping up. I personally haven't found the AI part of the comments to be as big of a deal, but dang and tom might be doing more than I realize on that front.

    Though I do wish we'd see fewer AI-related posts on the front page. They simply aren't sparking curiosity: it is the same thing wrapped in a different format, a different person commenting on our struggles and wins with AI, the 10th piece of software "rewritten" by an AI.

    At this point there should nearly be a "tax" on the category; as of this moment I count 8-10 posts on the front page related to AI / LLMs. It is a hot field, but I come to Hacker News to partake in discussions about things that are interesting, and many of those just don't cut it, in my opinion.

    • By dang 2026-03-12 04:39 · 7 replies

      The dynamics of content production are shifting hard right now. Things that used to signal something interesting are being generated in minutes with little thought. It's getting democratized, but also commoditized.

      It's too soon to know how this is going to shake out, so we should resist the temptation to impose rules prematurely. And we should especially not do so out of resistance to change (when has that ever worked out?)

      But we'll do what we need to do to keep our heads above water. Example: https://news.ycombinator.com/showlim. I figure pragmatics are fine as long as one keeps adjusting.

      • By zahlman 2026-03-13 19:04

        > But we'll do what we need to do to keep our heads above water. Example: https://news.ycombinator.com/showlim. I figure pragmatics are fine as long as one keeps adjusting.

        Is this page meant to be discoverable normally, or is it just there to host a message for those who encounter the restriction?

      • By Arkhaine_kupo 2026-03-12 16:05

        > It's too soon to know how this is going to shake out, so we should resist the temptation to impose rules prematurely.

        Alternative view: it is going way too quickly, and premature rules can be relaxed later if the actual damage is less than the expected model.

        You can always make things easier; it's much harder to rebuild a community that has been destroyed.

        > And we should especially not do so out of resistance to change (when has that ever worked out?)

        You saying that on a website with a UI straight out of the 90s is really fucking funny. Cause HN is a perfect example of resistance to change working out. Facebook chased every trend and failed (the social media side, at least; Meta as an ad platform is doing ok), tech blogs chased trends and failed. This place said "nah, this is good", and is still here.

      • By lelanthran 2026-03-12 04:59 · 1 reply

        > The dynamics of content production are shifting hard right now. Things that used to signal something interesting are being generated in minutes with little thought. It's getting democratized, but also commoditized.

        That's true, but it also means that Show HN has less value than it used to: the SNR is falling off a cliff :-(

        I planned to post a Show HN for a new product I want to launch (all human-written by myself, with only the GEO docs vibed currently), but I'm not sure now that any decent/quality product will ever get air. All the oxygen is being sucked out by low-effort products.

        • By dang 2026-03-12 05:09 · 4 replies

          That's what I mean about doing things to keep our heads above water. For example, we're restricting Show HNs for now.

          If you (or anyone) have ideas about other pragmatic measures we could take, we're interested.

          • By pamcake 2026-03-12 18:47

            > If you (or anyone) have ideas about other pragmatic measures we could take, we're interested.

            Suggestion: Make it clear and explicit in guidelines and FAQ that this forum is for human conversation and that writing/editing post or comment by LLM or automated posting is bannable offense.

            Second and similarly, "vibe-coded" should have no place on Show HN and this could be made much more explicit.

          • By trinsic2 2026-03-13 02:26

            Maybe you guys already do this, but what about having a line of text near the submission fields that says "If you are submitting a Show HN post, please do not post an AI-generated version; it degrades the quality of submissions" (or "it makes it harder for others to submit high quality content", or something like that)?

            I know when I see those guidelines show up in Reddit submission forms, I respect that because I see exactly what the sub wants.

          • By lelanthran 2026-03-12 05:45 · 1 reply

            > For example, we're restricting Show HNs for now.

            This is promising; in what way is it restricted? Are there any extra hoops for me to jump through before (eventually) posting my ShowHN?

            • By dang 2026-03-12 17:15

              You'll be fine. I don't want to say much specifically because it'll just end up as extra steps on some "how to promote your project on HN" checklist somewhere.

          • By akomtu 2026-03-12 06:51 · 1 reply

            Invisible text that will serve as a honey pot for LLMs is one thing to try. Imagine a comment where half of the words are marked as invisible by CSS, the other half has letters rearranged, but at the HTML level all the words look the same. LLMs will have to render pages which is a lot more expensive.
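The honeypot idea above can be sketched roughly as follows. This is purely illustrative: the helper name and the letter-scrambling scheme are assumptions, not anything HN actually does.

```python
import html
import random

def honeypot_markup(text: str, seed: int = 0) -> str:
    """Interleave each real word with a CSS-hidden decoy made of the
    same letters rearranged. A browser renders only the real words;
    a text-only scraper that ignores CSS ingests the noise too."""
    rng = random.Random(seed)
    parts = []
    for word in text.split():
        letters = list(word)
        rng.shuffle(letters)           # scramble letters for the decoy
        decoy = "".join(letters)
        # decoy span is invisible when CSS is applied
        parts.append(f'<span style="display:none">{html.escape(decoy)}</span>')
        parts.append(html.escape(word))
    return " ".join(parts)

print(honeypot_markup("conversation between humans"))
```

As the reply below points out, any scraper that renders CSS, or simply strips hidden spans before extracting text, defeats this.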

            • By jstanley 2026-03-12 09:29

              That won't help.

              1.) Rendering pages is table stakes for an AI headless browser tool, and 2.) most of the LLM comments probably come from copy and pasting to ChatGPT, not from autonomous agents.

      • By smusamashah 2026-03-12 06:10 · 1 reply

        Will removing the incentive, which is the upvotes, help reduce this spam? You could disable public access to the points gained by a new account (or maybe for every account).

        Or if it's the ranking that's attractive to spammers, maybe try experimenting with randomizing the order of comments in a discussion.

        • By WarcrimeActual 2026-03-12 15:46

          What I hope not to see is the Reddit method of "Oh, you made a new account? Cool. You can't post anywhere until you've posted": a catch-22.

      • By cobbzilla 2026-03-12 05:10

        I appreciate the thoughtful approach. It must be a deluge.

      • By stingraycharles 2026-03-12 10:54 · 1 reply

        Isn’t that going to cause more spam, though, from people that start using AI to comment until their account is mature enough to post a Show HN?

        • By dang 2026-03-12 17:41

          That's a risk, yes.

      • By lll-o-lll 2026-03-12 05:52

        We need some human based version of “proof of work”.

    • By rurp 2026-03-12 16:21

      I feel the same and find myself extending it beyond forums. I've started skipping over more and more articles about AI, even from authors I normally enjoy reading, because so few of those articles end up being particularly interesting or insightful.

      AI is obviously an important topic but it has been discussed to absolute death the past couple years and very few people have anything useful to add at this point. Things will of course evolve and change in the near term but someone speculating that maybe this will happen or that will happen isn't very useful.

      Given the risks and unknowns I think we should collectively be treating it as a major risk to our economic and national security, and figuring out how to mitigate the downside risks without stifling the upside. But most of the people in power have zero interest in doing that so we're all going to YOLO this in real time.

    • By Hendrikto 2026-03-12 11:35

      > Though I do wish we'd see less AI related posts on the front page, they simply aren't sparking curiosity, it is the same wrapped in a different format, a different person commenting on our struggles and wins with AI, the 10th software "rewritten" by an AI.

      Exactly. I feel like HN has never been this boring. Enough of the slop, let’s talk about interesting stuff again!

    • By davidguetta 2026-03-12 08:59

      I've been on HN for 15 years, and most of the time 80% of the content is not interesting to me, but I come for the 20%.

    • By blank_dvth 2026-03-12 16:16

      If you haven't yet checked it out, I'd recommend taking a look at Tildes for similarly high quality submissions/conversations as on HN. It really is such a breath of fresh air compared to most other platforms.

    • By iso-logi 2026-03-11 22:43 · 1 reply

      I personally joined HN because of various AI discussions.

      Comparatively, other sites such as Reddit, Twitter and YouTube just shill content, applications or products. A ton of the posts on Reddit are just AI written ffmpeg wrappers which no one should care about but apparently people do...

      • By verdverm 2026-03-11 22:50

        Upvoting rings on Reddit are likely not policed like they are here. That is to say, I wouldn't assume there is real interest based on Reddit points.

  • By Freebytes 2026-03-12 16:25 · 9 replies

    Using AI to write content is judged so harshly because it violates a previously held social contract: that it takes more effort to write a message than to read one. If a person goes to the trouble of thinking out and writing an argument or message, then reading it is a sufficient donation of time in return.

    However, with the recent chat-based AI models, this agreement has been inverted. It is now easier to generate a written message than to read it; reading now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.

    • By mitchdoogle 2026-03-13 18:21

      When I have AI write things for me, I'm spending a good amount of time on it - certainly longer than it takes to read. I'm also usually editing it quite a bit. Maybe I'm an outlier, but I still don't think it's appropriate to make a blanket statement about using AI to write content violating this social contract you described.

    • By kouunji 2026-03-12 19:01

      This is very well put, and captures my feelings on it. I take it as disrespect that someone would have any expectation for me to read something they can’t be bothered to write. LinkedIn is a great example - my entire professional network is just spamming at this point, which drowns out others that DO put in any effort.

    • By stefap2 2026-03-12 18:54 · 1 reply

      If it takes longer to read, it's not an AI problem, but the author failing to catch that the comment is too drawn out. I don't see how it is a problem to have AI write a comment if you agree with the content. If it is bad content, it will eventually reflect badly on the author anyway.

      • By sean2 2026-03-12 20:42

        I skim 100 comments here every day. Good comments, bad comments, overly long comments, whatever: the time to read is low. I assume all those authors have a strong opinion or expertise on the subject that urged them to take the time to write that comment, which makes skimming Hacker News to keep a pulse on the world (imho) a valuable task. If, instead, most of those comments are composed by molt-bots, then I'm not getting a "real" view of the world. I don't care how good and concise the comments are; I'd be wasting my time reading about news that may not matter to anyone and opinions that may not exist.

    • By waterhouse 2026-03-12 17:24 · 2 replies

      I guess, in theory, this can eventually be countered by people using LLM browser integrations to tell them whether comments are worth reading (and maybe to summarize long comments). Is anyone currently working on that? It might be interesting to see.

      • By pardon_me 2026-03-12 18:14

        First we would run into the spam-filter problem no different to email. Then we have to choose: do we concede to viewing the world through a lens of WhatEverAI, or train it locally on our own thoughts/views on the world, and hope that AI model is never compromised.

      • By ljm 2026-03-12 18:10

        I don't believe that delegating reading comprehension to an LLM is really any better than delegating writing ability. In fact I'd argue it's worse to have an automation advising on what's worth reading or not.

        There are a lot of people who have no time for something like Infinite Jest and even getting through the first few chapters is an effort. But at least they tried. An LLM excluding the possibility of reading this book because it is 1000 pages of postmodern absurdity effectively optimises away the fringes of human creativity and leaves only the average stuff behind.

        AI slop detectors already exist and are no better than snake oil, because a person can have an LLM-smelling writing style without actually using AI. After all, LLMs were originally trained on human input.

    • By eslaught 2026-03-13 06:11

      It's not just about the increase in volume, it's about the delta between the prompt and the generation.

      If the generation merely restates the prompt (possibly in prettier, cleaner language), then usually it's the case that the prompt is shorter and more direct, though possibly less "correct" from a formal language perspective. I've seen friends send me LLM-generated stuff and when I asked to see the prompt, the prompts were honestly better. So why bother with the LLM?

      But if you're using the LLM to generate information that goes beyond the prompt, then it's likely that you don't know what you're talking about. Because if you really did, you'd probably be comfortable with a brief note and instructions to go look the rest up on one's own. The desire to generate more comes from either laziness or else a desire to inflate one's own appearance. In either case, the LLM generation isn't terribly useful since anyone could get the same result from the prompt (again).

      So I think LLMs contribute not just to a drowning out of human conversation but to semantic drift, because they encourage those of us who are less self-assured to lean into things without really understanding them. A danger in any time but certainly one that is more acute at the moment.

    • By mlhpdx 2026-03-12 16:32 · 3 replies

      Where does the line fall? I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought. Is that disrespectful? It doesn't feel so.

      • By Aurornis 2026-03-12 16:55 · 1 reply

        > I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought.

        Better to post your stream of thought.

        Using LLMs to turn streams of thought into prose mostly just adds fluff, expanding the text to make it look like thoughtful prose. What you get looks nice to the creator because they agree with what it's saying, but it wastes other readers' time as they have to dissect the extra LLM prose to get back to the author's stream of thought.

        Just post what you're thinking, even if it's not elegant prose. Don't have an LLM wrap it in structures and cliches that disguise it as something else.

        • By mlhpdx 2026-03-12 17:00 · 6 replies

          I strive to be understood, and my streams of thought are often weird and generally intractable. Nobody really wants to read that; nobody wants the deep threads required to explain it.

          I value reading novel and interesting thoughts and ideas. I don't feel "tricked" when I read something of substance or thought provoking, even if LLM generated and decorated with the platitudes and common forms for dull readers.

          • By dgacmu 2026-03-12 18:43

            Something I try very hard to impress on my PhD students is that the process of writing is part of the process of thinking. We often have cool things in our head that don't sound right when we write them down, and that's usually because the thing in our head was more amorphous than we realized. The time you put in getting the written expression of it to work is actually helping you crystallize what you're thinking in the first place.

          • By petetnt 2026-03-12 18:01 · 1 reply

            I guarantee you that I would endlessly rather read your streams of thought about amateur boat building than read another AI-generated Hacker News comment ever again. Don't sell yourself short.

            • By mlhpdx 2026-03-12 18:21

              Thank you for that.

          • By phatskat 2026-03-13 00:17

            I get that feeling, and I’ll echo my sibling comment: I’d much rather read your stream of thought and get on that brain train with you than see some fluffed up and sterilized version.

            I also think that having that authentic voice, while it does open us up to criticism and maybe being misunderstood, also gives us a way to receive actionable feedback to improve.

            I think we all want to be understood, and for me part of that understanding is seeing the person. How you write is a part of who you are, and I hope you don’t feel like you need to suppress that.

          • By jart 2026-03-12 17:57 · 1 reply

            Feel bad for the people who used to do that for you. Many people have difficulty expressing what they're thinking in words. Those people always feel happy when they see someone else say what they're thinking. If AI can do that now then you don't need them. No point in coming onto Hacker News and using AI to participate in playing that role when you can just talk to the AI. If too many people do this then Hacker News won't even be able to play a vestigial role.

            • By mlhpdx 2026-03-12 18:28 · 1 reply

              Is it really that dire?

              Is it more awful to expect every reader to decipher my rambling, disjoint thoughts? Yes, it is. And, it undervalues the substance of what I'm trying to say because the willing audience dwindles to triviality.

              • By jart 2026-03-13 07:59

                You're being self-deprecating. You might believe the way you think and formulate ideas isn't good enough but it's at least you. The more you filter your thoughts through AI the more that signal is lost. If I'm not talking to you then I might as well talk to the robot myself, and honestly, that's what I spend most of my time doing these days. So when I come to Hacker News hoping for human connection the last thing I want is to talk to the robot even more. You should also show more respect for your peers whose writing talents you envy. People who are good at writing prose are usually good at deciphering it too.

          • By trinsic2 2026-03-13 02:01

            I sucked at writing myself. It's been my experience that over time practicing to becoming a better writer helped me structure my thoughts into something cohesive on the page. And I got better over time.

          • By lenkite 2026-03-13 08:45

            Sorry, but I prefer original human streams of thought. I now have a pretty darn good filter for ignoring AI gen text just like a filter for skipping over page ads.

      • By davorak 2026-03-12 16:36 · 1 reply

        > Where does the line fall?

        For now I would argue when ai edits for you instead of helping you edit. Take a look at the examples that Dang posted if you have not yet: https://news.ycombinator.com/item?id=47342616

        The first 5 I looked at were pretty egregious and not subtle.

        • By mlhpdx 2026-03-12 16:44 · 1 reply

          Yes, I have also done the search and found that the beta on "LLM!" objections is very high; they often seem as likely to be wrong as right.

          • By davorak 2026-03-12 21:09 · 1 reply

            As of this comment, which ones are you finding wrong? 5 of the first 7 are confessed AI users, and the other 2 look like AI to me too.

            • By mlhpdx 2026-03-12 23:55 · 1 reply

              When I said "I have also done the search" I meant this simple one: https://hn.algolia.com/?dateRange=all&page=1&prefix=false&qu...

              • By davorak 2026-03-13 02:35

                Dang's search is much more clear-cut, and I think it is going to be a better guide to what the enforcement will look like.

                Looking at your search, though, I think we have to exclude today, or at least this thread, to get a fair look at how "LLM generated" is thrown around or not: https://hn.algolia.com/?dateEnd=1773187200&dateRange=custom&...

                Most of the comments I saw on the first page are not accusations, but there are some there. 2 of the 3 I looked at looked pretty clear-cut, while the 3rd was poorly written hype that looks like LLM output, though I have seen similar from humans before. In either case it was flagged appropriately.

      • By ghurtado 2026-03-12 17:25

        > Is that disrespectful

        It is, by way of being extremely dishonest in at least two ways:

        - there's no way you would do this if you were required to disclose that you used an LLM to write your comment.

        - therefore, if your primary goal isn't communication, then you must be doing it to look smart and "win" the conversation

        Same reason people desperately post links to scientific papers they don't understand in a frantic attempt to stay on top of some imaginary debate.

    • By ericmcer 2026-03-12 17:28

      Well just have an AI read it for you then!

      That reminds me of the Gmail LLM usage, where AI can write your emails for you and also summarize incoming ones. Maybe we lost the thread somewhere...

    • By kindkang2024 2026-03-13 04:32

      [dead]

    • By strangattractor 2026-03-12 16:32 · 3 replies

      This reads as an AI comment to me. Anybody else?

      • By Freebytes 2026-03-12 16:48 · 1 reply

        AI has not been used to write any comment that I have ever posted on Hacker News. You can observe my previous comments over the years, even prior to the adoption of modern LLMs, which demonstrate how I communicate.

        (While the patterns may be similar, I have a tendency to be more loquacious due to my larger token limit! %)

        • By strangattractor 2026-03-12 23:40

          Just goes to show I'm a poor judge of what is written by AI.

      • By ghurtado 2026-03-12 17:28

        On 4chan, a long time ago, comments like these would invariably get the reply "not ur personal army"

        Think about that for a minute. 4chan would make fun of the comment you just made.

      • By dredmorbius 2026-03-12 17:35

        <https://news.ycombinator.com/item?id=46832601>

        Email mods instead: hn@ycombinator.com

  • We've all heard the phrase "the sum of all human knowledge".

    I've been feeling more and more that generative AI represents the average of all human knowledge. Which has its place. But a future in which all thought and creativity is averaged away is a bleak one. It's the heat death of thought.

    • By dang 2026-03-12 04:32 · 2 replies

      Thought and creativity won't be averaged away because human beings have a drive for these things. This just raises the bar for it. And why not? We get complacent when not pushed.

      Dostoevsky said that if all human knowledge could ever be reduced to 2 + 2 = 4, man would stick out his tongue and insist that 2 + 2 = 5. That was a 19th century formulation—he was a contemporary of Boole. I wonder what the equivalent would be for the LLM era.

      • By frm88 2026-03-12 06:18

        > Thought and creativity won't be averaged away because human beings have a drive for these things.

        That may or may not be true, but the expression of thought and creativity matters for transferring meaning. If you average that out, it loses momentum. Example: https://news.ycombinator.com/item?id=47346935. Compare the poster's first and second (LLM-assisted) paragraphs. The second one is just bleak. If I had to read several pages like that, my eyes would glaze over. It cannot hold attention.

      • By palmotea 2026-03-12 13:35

        > Thought and creativity won't be averaged away because human beings have a drive for these things. This just raises the bar for it. And why not? We get complacent when not pushed.

        The why not is: human beings are valuable in and of themselves, not just because of what they can do. If you raise the bar too high, you kick people out. And our society just isn't setup for that, and is unlikely to ever be in our lifetimes.

        And I'm talking about a radical shift in the concept of ownership, where shareholding is radically democratized. Basically every random Joe needs the option to live comfortably on passive income generated by things he owns.

    • By kruffalon 2026-03-12 05:30 · 2 replies

      But it's a weird kind of average... Not the 3 from 1, 2, 3, 4 & 5, but rather like the bland TV dinner that tastes non-upsetting to most people.

    • By altairprime 2026-03-12 05:14

      Perhaps closer to “the mean vector point such that all outbound vectors to different training tests are in sum the smallest”? I assume that’s a property of neural networks anyways, though I’m out of date on current math for them.

    • By ModernMech 2026-03-12 03:45 · 1 reply

      The soft gaussian blur of all human knowledge.

    • By ludicrousdispla 2026-03-12 07:39

      If you want a more accurate measure then you should subtract "the sum of all human ignorance" before taking the average.

    • By red_hare 2026-03-12 05:42 · 1 reply

      I feel the same about Claude Code. It's a fast but average developer at just about everything, and there are some things that average developers are consistently bad at, so Claude is consistently bad at them too.

      • By Cthulhu_ 2026-03-12 13:00

        I'm not sure; I think you overestimate the average developer. But then, the average code doesn't end up in public repositories; it spends decades rotting in enterprise codebases.

        At this point I'd rather review LLM generated code than a poor developer's.

    • By baxtr 2026-03-12 06:56

      Yes, it’s the "sum" from which you extract an average.

    • By ninjagoo 2026-03-12 04:04 · 1 reply

      > I've been feeling more and more that generative AI represents the average of all human knowledge.

      Have you tried the paid versions of frontier models? They certainly do not feel like they spew the average of all human knowledge. It's not uncommon for them to find and interpret the cutting edge of papers in any of the domains that I've asked them questions about.

      • By fuzzer371 2026-03-12 04:20 · 3 replies

        Yup. And they all sound like slop. Read the papers, comprehend the papers, don't make someone else's computer do it for you.

        • By Otterly99 2026-03-12 14:15

          Every scientist I ever met (myself included) has a backlog of papers to read that never seems to shrink. It really is not trivial to stay up to date on research, even in niche fields, considering the huge volume of research being produced.

          It is not uncommon for me to read a recently published review and find 2-3 interesting papers in the lot. Plus the daily Google scholar alerts. It can definitely be beneficial to have a LLM summarize a paper. Of course, at this point, one should definitely decide "is this worth reading more carefully?" and actually read at least some parts if needed.

        • By codemog 2026-03-125:304 reply

          Anti-tech contrarian sentiment happens with every new technology. Someone older than you probably said the same thing about the internet.

          • By BuddyPickett 2026-03-126:282 reply

            Yep. Even Windows, the most widely used OS on the planet, has a fringe group of contrarians still today. Amazing.

            • By Xfx7028 2026-03-1214:29

              I grew up using Windows and was a fan of it, but now I am a contrarian because of how shitty it has become. The fact that it is widely used is not an argument that it is good. It is widely used because of existing market share and people's reluctance to change.

            • By xigoi 2026-03-1310:30

              Even tobacco, the second most widely used drug, has a group of contrarians still today. Amazing.

          • By selcuka 2026-03-126:131 reply

            True, and they were right about it when they said that. They wouldn't be right anymore, because the Internet has evolved. The same might happen to LLMs, but currently one would be right to call LLM output "slop".

            • By darkwater 2026-03-127:34

              Depending on the criticism, they were probably wrong at the time and are correct now. There were always trolls and bad people, but at least there were no mega-corps playing with people's minds.

          • By jibal 2026-03-1212:331 reply

            What's sad is that there's so much of that at this site. This page in particular is a disaster, and what we're actually seeing a lot of at HN is claims that real humans are bots. And the people who make these accusations are certain of their validity.

            • By toraway 2026-03-1215:581 reply

              Have you considered that this suspicion is because the number of obvious bots has exploded in the last half year or so, particularly after OpenClaw became the latest fad?

              Start going to the profiles of every comment from a green account you see for a week and you’ll see how bad it is.

              There will be friendly fire but unfortunately that’s to be expected when you click the top comment in a thread and realize an account has been posting 100% slop for months.

              • By jibal 2026-03-132:42

                What I see is massive intellectual dishonesty, like this comment that doesn't engage with my actual points and instead attacks strawmen.

                I won't comment further.

          • By streetfighter64 2026-03-128:12

            And they were right, the internet does make us dumber and less human.

        • By ninjagoo 2026-03-1210:152 reply

          > Read the papers, comprehend the papers, don't make someone else's computer do it for you

          Why not?

          Personally, I don't have the specialized knowledge, nor the time needed, to read and understand papers outside my own 2-3 domains. LLMs do. And I appreciate what they can do for me. They do it better, faster, and more accurately than most 'popular science', provide better coverage and also provide the ability to interact with the material to any degree or depth that I care to, better than any article.

          It would be silly to pass up this capability to make my life better simply because random folks on the Internet disparage the quality of the output (contrary to my own experience) and make hand-wavy points about 'someone else's computer' while offering no credible or useful alternative :)

          • By framapotari 2026-03-1214:341 reply

            How do you evaluate the quality of a summary of a paper you do not have the knowledge to read and understand?

            • By ninjagoo 2026-03-1215:14

              > How do you evaluate the quality of a summary of a paper you do not have the knowledge to read and understand?

              Tough question. I think the straightforward answer is that you can't.

              That said, there is some confidence gained in an LLM's abilities based on its performance on papers in domains that I do understand. Yes, it's not going to be the same across all domains, but the frontier labs do publish capability scores across different domains, and that helps in scrutinizing the answers it provides and deciding how much salt to take them with.

          • By kruffalon 2026-03-1212:211 reply

            I wonder if you have asked the same LLMs to explain or summarize a paper in one of your fields and see if it still makes sense.

            It could be that the LLMs are good at stringing words together in a way that seems reasonable when you are not an expert yourself, much like people from other fields seem very knowledgeable until you compare many of them or hear/see them talk with each other.

            • By ninjagoo 2026-03-1215:061 reply

              > I wonder if you have asked the same LLMs to explain or summarize a paper in one of your fields and see if it still makes sense.

              I have, and it does, hence my confidence in its ability to do the same in other domains. Depending on what you're using it for, it is advisable to maintain some level of quality control (spot checks, sampling, deep dives, more rigorous continuous review) as in any process control.

              • By kruffalon 2026-03-1223:01

                Nice, that's good to hear, and, judging by the Zeitgeist I pick up, kind of new, if I understand it correctly.

    • By larodi 2026-03-128:55

      Pooling, as it is called, is, well, the same as averaging. It has nothing to do with swimming, really. It happens all the time in latent space. It is a tool, not a side effect.
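      A minimal sketch of what mean pooling in latent space looks like (the embedding values below are invented for illustration):

```python
import numpy as np

# Hypothetical token embeddings for a 4-token sentence (embedding dim 3).
token_embeddings = np.array([
    [0.2, 0.8, -0.1],
    [0.5, 0.1,  0.4],
    [-0.3, 0.6, 0.0],
    [0.1, 0.2,  0.9],
])

# Mean pooling: average over the sequence axis, collapsing many token
# vectors into one fixed-size sentence vector -- literally an average.
sentence_vector = token_embeddings.mean(axis=0)
print(sentence_vector.shape)  # (3,)
```

      The point stands either way: the pooled vector is a plain arithmetic mean of its inputs.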

    • By pessimizer 2026-03-1214:16

      > I've been feeling more and more that generative AI represents the average of all human knowledge.

      No, it's far worse. It's the mode of all human knowledge. The amount of effort you have to put into an LLM to get it to choose an option that isn't the most salient example of anything that could fit as a response is monumental. They skip exact matches for most common matches; it's basically a continuation of when search engines stopped listening to your queries and just decided what query they wanted to respond to, and it suddenly became nearly impossible to search for people who had the same first name as anyone who was famous or in the news.
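      The "mode, not mean" point can be sketched with greedy decoding, which always returns the single most probable token, i.e. the statistical mode of the distribution (token names and probabilities below are made up for illustration):

```python
# Toy next-token distribution. Greedy decoding takes the argmax, so the
# most common continuation always wins and the long tail never surfaces.
next_token_probs = {"the": 0.40, "a": 0.30, "rare_word": 0.05, "common_word": 0.25}

def greedy_pick(distribution):
    # argmax over probabilities == the mode of the distribution
    return max(distribution, key=distribution.get)

print(greedy_pick(next_token_probs))  # prints "the"
```

      Sampling with temperature softens this, but the bias toward the most salient option remains.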

      I've tried a dozen times to get LLMs to find authors for me, or papers, where I describe what I remember about them fairly exactly. They deliver me a bunch of bestsellers and popular things, over and over again, that don't even match large numbers of the criteria I've laid out.

      It's why they're dumb and can't accomplish anything original. It's structural. They're inherently biased to deliver lowest common denominator work. If you're trying to deliver something original or unusual, what bubbles up is samplings of the slop that surrounds us every day. They're fed everything, meaning everything in proportion to its presence in the world. The vast majority of things are shit, or better said, repetitions of the same shit that isn't productive. The things that are most readily available are already tapped out. The things that are productive are obscure.

      You can't even get LLMs to say some words by asking them to "say word X." They just will always find a word that will fill that slot "better." As I said, this is just google saying "did you mean Y?" But it's not asking anymore, it's telling.

      edit: It's also why asking it to solve obscure math problems is a dumb test. If the math problem is obscure enough, and there's only one way to possibly solve it, and somebody did it once, somewhere, or referred to the possibility of solving it that way, once, somewhere, you're going to have a single salient example. It's not a greenfield, it's not a white sheet of paper: it's a green field with one yellow flower on it, or a piece of white paper with one black sentence on it, and you're asking it to find the flower or explain the sentence.

      edit: https://news.ycombinator.com/item?id=47346901 - I'm late and long-winded.

    • By permo-w 2026-03-125:24

      You're falsely conflating knowledge with intelligence

    • By oblio 2026-03-123:49

      > I've been feeling more and more that generative AI represents the average of all human knowledge.

      That's literally what it is. Fairly sure that, mathematically, it's a fancier form of regression/prediction, so it's a kind of average.
