
AI posts are becoming indistinguishable from human posts, and we can see it here on HN. The conventional response by website operators is to introduce progressively tighter verification systems to distinguish bots from humans, but that eventually leads to the end of anonymity.
This is not an anti-AI rant. If a future AI agent truly has high quality posts and wants to use the site normally, that's fine. I'm talking about spam campaigns with hundreds of new accounts. We need new solutions to this problem.
I'll start by proposing a solution that could work for HN and similar forums. Feel free to iterate on it or propose your different ideas in the comments. Here goes:
For logged-in users, instead of ranking posts and comments on the server-side, the server only delivers a chronological feed + the current logged-in user's voting history.
Using the chronological feed as the base, each of your past votes changes the ranking of your feed by a tiny bit, and that's calculated client-side. You're more likely to see posts and comments from users you've upvoted in the past at the top.
In short, this means a new account will see a completely chronological feed, while an established account will see a feed modified by only their own past votes.
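The re-ranking described above can be sketched in a few lines. This is a minimal illustration, not a spec: the `Post` shape, the per-vote boost weight, and the use of author-level vote counts are all my assumptions about how a client might implement it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    author: str
    age_hours: float  # hours since posting

def rank_feed(posts, upvoted_authors):
    """Re-rank a chronological feed client-side.

    upvoted_authors maps author -> number of past upvotes by this user.
    Each past upvote nudges that author's posts up a bit; the base
    order is still recency. The weight (0.5) is purely illustrative.
    """
    def score(post):
        boost = 0.5 * upvoted_authors.get(post.author, 0)
        return -post.age_hours + boost  # newer + more-trusted ranks higher
    return sorted(posts, key=score, reverse=True)

feed = [Post(1, "alice", 1.0), Post(2, "bob", 2.0), Post(3, "carol", 3.0)]
print([p.post_id for p in rank_feed(feed, {})])            # → [1, 2, 3] (chronological)
print([p.post_id for p in rank_feed(feed, {"carol": 5})])  # → [3, 1, 2] (carol boosted)
```

The second call shows the key property: an account with no voting history gets pure chronology, while an established account's feed tilts toward authors it has upvoted before.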
The public feed for non-logged-in users would still be ranked by the server. No changes there.
So each user gets a fully personalized bubble when logged in, except it's not really a bubble because it's shaped only by your own votes (n=1). And it's really easy to break out of it by logging out.
Spam bots can post and vote all they want, but they won't change the core userbase's experience much, because they'll only have access to a chronological feed. A fresh bot account has no taste, since taste is accumulated over time through voting, so it can't target votes and replies at real conversations nearly as effectively.
We don't solve it. What happens is the people who don't like it will eventually leave, and everyone else will normalize it because this is a tech forum and within tech AI has already won. There's no solution that allows humans to post which doesn't allow humans to post through an LLM, and as LLMs mature, all of the "tells" people think they have that can distinguish them from normal human speech will vanish.
I predict that within a year most comments on HN will be run through LLMs, and that someone will create a service specifically for doing so (not just on HN but on multiple platforms). Entirely vibe-coded, of course. The mods won't like it; they've been very clear that LLM-generated comments aren't welcome. Unfortunately I don't think they can stop it any more than King Canute could stop the tide.
The biggest signal I have noticed over time is consistency, not just one good post. Accounts that participate normally for weeks build a kind of trust naturally. Maybe weighting activity history more than identity verification could help without hurting anonymity.
> The biggest signal I have noticed over time is consistency
I am a real human and I don't write consistently; I reply occasionally, only to the topics I'm interested in.
Agents, on the other hand, are very consistent. Just set up a schedule: between 9:00 and 21:00, in every 4-hour interval pick a random time and reply to 10 topics.
My mind goes to simple solutions like established communities charging a $1 entry fee. For privacy you could pay with a privacy-focused cryptocurrency, but with the current UX that's a decent amount of friction for average folks.
Another interesting idea that comes to mind: every post/comment requires the user to physically use the fingerprint scanner on their device, which plenty of devices already have. As long as it can't be spoofed it works, but I'm not sure about the details of reliably securing that.
It would be some friction but I feel like it would be fine?