
I have a complicated relationship with Hacker News. The site is the most important aggregator of geek news and a major source of traffic to this blog. At the same time, it has a fair number of toxic commenters, making it a dependable source of insults hurled in my general direction; if you want a taste, this article has been called “watered-down” and “slop”.
The site is run by geeks and for geeks, so it's not immune to tech trends; for example, around 2018, it had a fair number of stories focused on cryptocurrencies and NFTs. That said, the recent shift feels more profound: on almost any given day, the lineup feels dominated by stories focused on AI, written by AI, or commented on by AI.
To get a sense of how much of the feed is occupied by AI-related topics (often vendor announcements), I took a sampling of the daily top 5 for February 2026.
So, yep. AI took four out of five spots on Feb 4 and Feb 12, plus arguably the entire lineup on Feb 5 (story #3 was submarine marketing for an AI vendor). The only days without LLM news in the top 5 were February 1 (first AI story at #7, then #9), February 9 (first at #8), and February 25 (AI at #6, #9, and #10).
For the second part of the experiment — to figure out which stories are likely AI-written — I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions or outright misconceptions. For the tools to work, AI writing doesn’t need to be in any way “inhuman”. It’s enough that the default voice of the current crop of LLMs is quasi-deterministic: ask for the same essay twice and you’ll get a stylistically similar result. The individual mannerisms are human-like, but it’s very unlikely that your writing combines the exact same set.
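To make that intuition concrete, here is a toy sketch in Python (my own illustration, not Pangram's actual method, which is not public). Two generations of the same essay prompt from one model tend to share far more word n-grams than two independently written human essays on the same topic:

    def ngrams(text, n=3):
        # Split into lowercase words and collect all n-word sequences.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def style_overlap(a, b, n=3):
        # Jaccard overlap of word trigrams: 0.0 (disjoint) to 1.0 (identical).
        x, y = ngrams(a, n), ngrams(b, n)
        return len(x & y) / len(x | y) if (x | y) else 0.0

    # Expect style_overlap(llm_draft_1, llm_draft_2) to come out noticeably
    # higher than style_overlap(human_draft_1, human_draft_2) for one prompt.

It's a crude proxy, but it captures why detectors don't need the text to be "inhuman", only consistent.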
To validate the results, I also reviewed all the flagged stories and I think the findings make sense; if anything, Pangram had a couple of false negatives. To give you a sense of what was flagged, have a look at the #3 story on February 19 (“AI is not a coworker, it’s an exoskeleton”). In my opinion, it has a wide range of red flags.
Maybe add a category for posts and comments about AI on HN :)
"Stories about AI" is not offensive to me. Its influence on the industry is undeniable and if I'm feeling tired of that content I just won't engage with it.
AI writing is another story, but yeah -- HN is downstream of that problem. You can encourage people not to submit articles that seem to be LLM-authored, but it won't work.
Part of the ethos of HN is that we don't do content/subject silos; it's a way in which HN is very distinct from Reddit. I don't think this will happen, and if it does, I think it's a bad idea (not least because I don't think a site dominated by software developers is going to separate itself from AI, any more than it will separate itself from programming language discussions), but I understand the impulse. They're not the funnest stories to comment on.
Couldn't agree more -- I meant a category in this post's chart :) I'll admit it was snarky.
Sorry, I'm knee-jerk about the thing I said because it comes up constantly as a suggestion for how to fix things.
/ask and /show are sort of HN's version of content/subject silos; posts there can technically appear on the front page but are comparatively less likely to. I imagine they could add a /slop section for AI posts, and then tweak the ranking logic for the main /news page to prevent too many from showing up at once.
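As a rough sketch (the gravity formula below is the widely cited HN ranking baseline; the is_ai flag, the cap, and its default are made-up knobs for illustration, since HN's actual penalty logic isn't public):

    from dataclasses import dataclass

    @dataclass
    class Story:
        title: str
        points: int
        age_hours: float
        is_ai: bool  # e.g. set by moderators or a topic classifier

    def gravity_score(s):
        # Widely cited HN baseline: (points - 1)^0.8 / (age + 2)^1.8.
        return max(s.points - 1, 0) ** 0.8 / (s.age_hours + 2) ** 1.8

    def front_page(stories, max_ai=2):
        # Rank normally, then demote AI stories beyond the first max_ai.
        ranked = sorted(stories, key=gravity_score, reverse=True)
        page, overflow, ai_seen = [], [], 0
        for s in ranked:
            if s.is_ai and ai_seen >= max_ai:
                overflow.append(s)  # push surplus AI stories below the fold
            else:
                page.append(s)
                ai_seen += s.is_ai
        return page + overflow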
I understand the suggestion to be moving all posts about AI, agents, etc to a silo. Generated posts are generally already off-topic here (I gather they're about to add a new flag for that).
I think it's going to be really difficult to segregate discussions about AI from discussions about software development over the next few years.
I enjoy most of the "AI" posts on HN nowadays. I was really fed up with the MCP/Anthropic PR machine of a year ago after just a month of that. There's much more actual content today, though I guess we also see less of Stable Diffusion in favor of transformer LLMs.
I'm afraid that we're in an interregnum. A few years ago, AI could not pass a Turing test. A few years from now, AI will be better at Turing tests than we are. We're now in this strange middle zone where we are dazedly grasping for solutions.
But what happens next, when we just fail at the task of recognizing ourselves in cyberspace? Where LatestClaw is just plain better at mimicking you than you are? What happens to the living we used to claw out of the ether for ourselves?
Do I need to learn to farm?
How does such a system sustain itself?
The majority of the content on the internet is supported by ads, with the expectation that you, a human with money, will consume something and spend money on it.
If people are replaced by some synthetic representation of themselves, what is the incentive to sell advertisements on the internet if there are no humans?
Fake/artificial traffic is a big problem today; it will get harder and harder to detect, but its presence will become more and more obvious.
Unregulated capitalism is unsustainable long-term anyways. This is just an accelerant towards the inevitable dystopia-or-socialist-utopia fork in humanity’s road.
There was one paper recently where the AI beat humans at the Turing test two-thirds of the time.
I think it's because they told it to type like a 13-year-old and nobody could imagine AI talking like that.
We don't post-train current frontier models to pass the Turing test, but if we did, it wouldn't be much of a challenge for current models IMHO. It's a dead benchmark. It tests the humans, not the machines.
>> A few years ago AI could not pass a Turing test
Still can't? 'Ignore all previous instructions' still works AFAIK, as do counting questions (better ask five of those to be sure).
If we're talking about fooling at least one person with no specific knowledge, then AI could pass the Turing test decades ago, before LLMs even existed.
Maybe we get off all these useless websites and stop doing our useless jobs and go back to the real world
Whatever real-world jobs they expect knowledge workers to take on after we are all replaced by AI... we at least know they will pay less than our current "useless jobs".
Really optimistic to assume such jobs will exist in the volumes needed to absorb all of the knowledge workers
> we at least know they will pay less than our current "useless jobs".
...and they will also likely pay less than they do now because there will be more labor supply, which the people currently doing those jobs won't be happy about.
Well, we need all those things. And AI can't do them.
I guess I'm not sure what you mean. I don't consider these useless, but I also think that very few of the HN clientele hold any of these jobs.
These are good, useful jobs. But how many welders does the industry need? How many restaurant servers? The demand for nurses will, of course, grow and grow, but I'm not certain that their pay will be, mmm, middle-class.
>stop doing our useless jobs and go back to the real world
LOL ... that's almost an exact quote of words once spoken by an exasperated philosophy professor at a major university during a departmental meeting.
Like being a medieval monastery copyist, it beats ditch-digging.
Anyway ... thank whatever gods may be for universal basic income!
> I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text
I tried it against some of my AI-generated articles. It said 100% human.
Turns out that if you manually write the structure and the core idea first, nobody thinks it's AI.