Digg is gone again

2026-03-13 18:52 · 411 points · 456 comments · digg.com

Thanks for being part of our beta. Stay tuned for what comes next.

We're not giving up. Digg isn't going away. A small but determined team is stepping up to rebuild with a completely reimagined angle of attack. Positioning Digg as simply an alternative to incumbents wasn't imaginative enough. That's a race we were never going to win. What comes next needs to be genuinely different.

We're also announcing something we're excited about: Kevin Rose, Digg's founder who started the company back in 2004, is returning to join the team full-time. Starting the first week of April, Kevin will be putting his focus back on the company he built twenty-plus years ago. He'll continue as an advisor to True Ventures, but Digg will be his primary focus. We couldn't think of a better person to help figure out what Digg needs to become.

Lastly, Diggnation, our official Digg podcast, will continue recording monthly while we work on the re-reboot.


Read the original article

Comments

  • By ThalesX 2026-03-14 8:43 · 10 replies

    I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and he took all real estate that is already available on Reddit that is related to said country. So in a way, he was probably the first account on there and became god-king for eternity for the subreddits related to the country. I had no idea who he was, what he stood for, what his plans were for his newfound digital real estate etc.

    I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.

    • By vintermann 2026-03-14 9:30 · 1 reply

      Your country wouldn't be Norway by any chance? I remember that on Reddit there was one powermod who was dead-set on owning every Norwegian-language forum, and every name that could potentially be a base for people trying to escape him.

      • By Razengan 2026-03-14 13:39 · 1 reply

        wow, is there more on this?

        Also, honestly, with AI/LLMs now, do we even need human moderators anywhere anymore?

        • By mrweasel 2026-03-14 14:07 · 1 reply

          You need both. LLMs can, I think, do the bulk of removing posts that break community guidelines, but you need moderators to define and adjust the guidelines. Most would also like to have a human to escalate a dispute to.

          Google is famous for having almost solely automated support, and it absolutely sucks at doing almost anything. AI-only moderation would go the same way.

          • By 9rx 2026-03-14 19:03

            > but you need moderators to define and adjust the guidelines

            The comments above you are suggesting that global guidelines are unnecessary. Instead, they suggest you don't need moderation at all when LLMs now give us the technology to filter out the stuff individual users don't want to see based on their own personal policies. I am sure you can come up with reasons to dispute that, but "you need moderators to do the thing you say is no longer necessary" doesn't add to the discussion.

    • By p2detar 2026-03-14 9:07 · 3 replies

      The absolutely broken moderator system of Reddit made me leave it forever after being a regular user for more than a decade. The “god-king” thing simply doesn’t work.

      • By asmor 2026-03-14 11:52 · 1 reply

        And even if it does, the mods don't have real control to moderate communities either, so you get the worst of both worlds. I don't go to most queer reddit communities anymore because a lot of them have bots that downvote trans-positive posts, even if the community is specifically meant to be inclusive. There's nothing to couple active participation to voting weight or anything of that kind and voting is not considered "brigading" by reddit if the coordination happens off-site (at least not in a way that'd lead to any enforcement action).

        It makes a great propaganda machine though, given humans have a tendency to measure their own opinions against social cues.

        • By gzread 2026-03-14 12:23 · 1 reply

          I still haven't been able to figure out how to make an account without it being immediately shadowbanned or normalbanned. Tried again the other day, it was something in between where logged-out users could see it was banned but I couldn't.

          • By mschuster91 2026-03-14 12:59 · 5 replies

            You need to ditch and replace all your devices and acquire a new phone number. I'm serious. Virtually all large websites these days employ a lot of fingerprinting and persistence technologies.

            And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful of never using any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
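A rough sketch of the kind of sockpuppet linking described above: grouping accounts that ever shared an identifier such as an IP address, device ID, or browser fingerprint. The data shapes are illustrative and not any site's actual system:

```python
from collections import defaultdict

def link_accounts(sessions):
    """Cluster accounts that ever shared a network/device identifier.

    sessions: iterable of (account, identifier) pairs, where an
    identifier could be an IP address, device ID, or fingerprint hash.
    Linkage is transitive: if A shares an IP with B, and B shares a
    device with C, all three end up in one cluster.
    """
    by_identifier = defaultdict(set)
    for account, ident in sessions:
        by_identifier[ident].add(account)

    # Union-find over accounts that co-occur on any identifier.
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for accounts in by_identifier.values():
        first, *rest = accounts
        for other in rest:
            union(first, other)

    clusters = defaultdict(set)
    for account in parent:
        clusters[find(account)].add(account)
    return [c for c in clusters.values() if len(c) > 1]
```

With sessions like ("alice","ip1"), ("bob","ip1"), ("bob","ip2"), ("dave","ip2"), all three accounts land in one cluster even though alice and dave never shared anything directly, which is why the advice above is to never let old and new devices appear on the same network.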

            • By happyopossum 2026-03-14 14:24 · 2 replies

              > and acquire a new phone number

              > Wikipedia of all places already employed IP address matching to link sockpuppet accounts

              That’s… well, that’s just not how TCP/IP works. Your phone number has nothing to do with your device IP…

              • By beachy 2026-03-14 17:25

                It does when your phone number is used for 2FA in a session running on TCP/IP.

              • By mschuster91 2026-03-15 0:20

                Phone numbers are available to many apps if they target older SDK versions and serve as an additional unique identifier.

            • By allajfjwbwkwja 2026-03-14 16:15 · 1 reply

              It happens to all new accounts. It's known that new accounts are shadowbanned almost everywhere until they are 30 days old and have farmed some karma on a very small set of subreddits that don't shadowban new accounts. It's shocking they ever get any new users, really; as far as a non-technical new user knows, nobody ever reads their comments for some reason.

              • By gzread 2026-03-15 16:50

                It's full of bot slop pushing political propaganda; it's possible those bot farms have monetary agreements with Reddit to allow them to create accounts.

            • By gzread 2026-03-14 14:58 · 1 reply

              How contagious is it? Can I get other people banned from Reddit by logging into my instantly banned account on their wifi network?

              • By ssl-3 2026-03-14 19:07 · 1 reply

                Not that contagious, I'm afraid.

                My boss uses Reddit some. I'm banned. At the shop, we use the same IP address (and we do not use ipv6 there).

                I tried to log in with a ~10-year-old account that I'd never commented with. A perfect Beetlejuicing moment had arrived and I just wanted to play the game with a short, snarky comment.

                It logged in fine, and then: Insta-ban, just like that. (Maybe I should have used a new browser on a new network that I've never used before, but whatever -- nothing of value was lost here.)

                Meanwhile, the boss man's access continued unimpeded; this suggests that it is a rather targeted contagion.

                And it seems to follow the systems, not the networks.

                (If anyone wants banned, just let me know. I seem to have a well-poisoned system to play with.)

                • By gzread 2026-03-15 16:51

                  If you have some free time you could try making a GDPR request for explanation of automated processing.

            • By kevin_thibedeau 2026-03-14 17:55 · 1 reply

              Just don't use apps. Then the only association is a discardable cookie and IP.

              • By code_duck 2026-03-14 19:47

                There’s also browser fingerprinting

            • By boca_honey 2026-03-14 15:08

              [flagged]

      • By leptons 2026-03-14 17:24

        Same here. The power-tripping of mods ruins reddit. Most don't care about the community as much as they care about exercising their absolute power over users.

      • By alex1138 2026-03-14 15:21 · 3 replies

        [flagged]

        • By papyrus9244 2026-03-14 19:55

          >They banned the_donald (which, yes, was spammy, but it seemed to be organic

          I used to frequent /r/t_d when it was created, before the Republican primaries for the 2016 election. I visited every day because I was absolutely astonished at the gigantic marketing effort behind it. I had never seen anything like that before, and haven't since. It probably had a team of dozens or hundreds of Russians behind it, creating memes and shitposting on a payroll. And it obviously was 100% inorganic.

        • By throwaway27448 2026-03-14 16:11

          I'm actually OK with Reddit banning it and taking sides in political conflict. I just wish they didn't pretend to be unbiased, which has made it a useless site for discussing current reality.

          Edit: to be clear, I'm more concerned about how Russia was basically banned from the site while worldnews itself seems like the primary fountain of Western astroturfing on the internet. No matter your opinion of Putin, that is extremely unhealthy for productive discourse. I don't care about American domestic politics.

        • By koolala 2026-03-14 15:29 · 1 reply

          I don't think the problem was spam content; it was hate content. Hate can be organic.

          • By nichos 2026-03-14 15:42 · 5 replies

            Who decides what "hate" is though? Does it switch with every administration? Free speech, including "hate speech" should be allowed, as long as it doesn't violate the law (calls to violence, etc)

            • By tokyobreakfast 2026-03-14 17:48

              "Everything I disagree with is hate speech"

              Reddit is filled with calls to violence, I would say it's gotten quite worse since. What's changed is that it all comes from one side now.

              When you curate the echo chamber, the calls start coming from inside the house.

            • By pixl97 2026-03-14 17:35 · 2 replies

              The particular problem is that said speech quite often leads to calls for violence. And when a few people get banned for that, you get dog whistles: sentences that are encoded calls for violence. Eventually the new slang is recognized as being violent, and then it looks like the site has allowed calls for violence for months.

              A short version of this is, if you let a nazi come to your bar, you have a nazi bar.

              • By pessimizer 2026-03-14 18:47 · 1 reply

                Calls for violence are free speech. Calls for "imminent" violence that serve to coordinate it have been decided not to be.

                When you claim that calls for violence are not freedom of speech, it's a slippery slope that leads you to absurdities like speech that could "lead" to calls of violence are not freedom of speech, or that secret codes that could be interpreted as speech that would lead to calls to violence are not freedom of speech, or that violent-sounding slang that is eventually recognized as being encoded speech that would lead to calls of violence isn't freedom of speech, or that people who own bars who host people who use violent-sounding slang that is related to secret codes for speech that could lead to calls for violence are nazis.

                And since nazis deserve to be violently suppressed...

                • By pixl97 2026-03-14 19:23

                  So which right do you believe in more, private property or freedom of speech?

            • By malfist 2026-03-14 18:58 · 1 reply

              It's really not that hard to identify hate. We don't have to engage in epistemology

              • By nichos 2026-03-15 4:32 · 1 reply

                It is hard to identify though, someone is always offended by something. Free speech is about protecting speech you hate, not speech you agree with.

                • By malfist 2026-03-15 12:38

                  Nobody said anything about offensive words.

            • By pwndByDeath 2026-03-14 16:01 · 1 reply

              I agree that free speech is free speech, but the private org that runs the platform has a veto; the assumption that these platforms are the equivalent of stepping into the street to stand on a box is not realistic.

              Even HN is only quasi-free speech, there are rules that will get one censored.

              If you love freedom, there are mailing lists and other platforms, but they aren't as high on dopamine and the audience gets a little bit more sketchy.

              • By throwaway27448 2026-03-14 16:11 · 1 reply

                Even the US never had free speech—there was always stuff you could/can say to get you gagged by the courts or thrown in prison. Your freedoms always stop at impacting other people.

                Somehow we just gave business owners more freedoms than we gave everyone else....

                • By pwndByDeath 2026-03-15 21:29

                  I guess I don't have a problem with a social media site blocking speech, we don't have to use them, if they are too draconian, nobody will.

                  But IRL it gets harder if ISPs get involved. I'm more interested in democratized platforms with privacy baked in, if you want free speech you might have to at least give the orgs you depend on for access plausible deniability

            • By tensor 2026-03-14 17:09 · 2 replies

              "Free speech" means you have freedom from retribution from the government. It doesn't mean your fellow citizens need to stand there and listen to your shit, nor does it mean you are entitled to any sort of platform or megaphone. It means you can scream on the side of the road into the ether and you won't be arrested for it.

              • By nichos 2026-03-15 4:36

                I agree on all counts. But the_donald was banned for mostly on-topic posts. Reddit is a private business and they can do what they want, but there are consequences to their actions too. Reddit has become an echo chamber now.

              • By pessimizer 2026-03-14 18:34

                > "Free speech" means you have freedom from retribution from the government.

                No, it doesn't. The concept of "free speech" isn't limited to prior restraint; you're mistaking it for the dominant precedent in judicial interpretations of the 1st Amendment of the US constitution.

                > It doesn't mean your fellow citizens need to stand there and listen to your shit,

                Nobody asked you, or claimed this.

                > nor does it mean you are entitled to any sort of platform or megaphone.

                You should look up common carrier provisions. If we had to depend on your interpretation of law or morality, they'd be able to shut off your electricity for speech violations.

                > It means you can scream on the side of the road into the ether and you won't be arrested for it.

                If that's all it meant, it would be dumb and useless. What's more, it doesn't mean that, you can be arrested for screaming on the side of the road.

                You're wrong in every way you could be wrong.

    • By randerson 2026-03-14 15:22 · 2 replies

      A well moderated forum (like HN) is great. I don't have time for the signal-to-noise ratio of X.

      IMHO Reddit would be better if it had AI moderators that strictly follow a sub's policies. Users could read the policies upfront before deciding whether to join. New subs could start with some neutral default policy, and users could then propose changes to the policy and democratically vote on those changes.
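A minimal sketch of what rule-based AI moderation could look like, assuming the judgment step wraps an LLM prompt containing the published rule text. The rule names and the keyword stand-in classifier below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str
    text: str  # human-readable policy line shown to users up front

def moderate(post: str, rules: list[Rule],
             violates: Callable[[str, Rule], bool]) -> list[str]:
    """Return the IDs of every published rule the post violates.

    `violates` is the pluggable judgment step; in a real system it
    would wrap an LLM prompt built from the rule text. The key
    property is that only the public rules are enforced, so every
    removal can cite a specific policy line.
    """
    return [r.rule_id for r in rules if violates(post, r)]

# Placeholder classifier standing in for an LLM call (assumption:
# a real deployment would prompt a model with rule.text instead).
def keyword_check(post: str, rule: Rule) -> bool:
    banned = {"no-memes": ["meme"], "no-spam": ["buy now"]}
    return any(w in post.lower() for w in banned.get(rule.rule_id, []))
```

Because the classifier only ever sees the explicit rule list, proposed rule changes that pass a vote take effect simply by editing the `rules` list, with no opaque mod discretion in between.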

      • By taikahessu 2026-03-14 15:50 · 1 reply

        > users could then propose changes to the policy and democratically vote on those changes.

        Which, in fact, would open up the same rat race with determining which accounts are real and so forth.

        Not disagreeing with you, just circling around this same problem. Feels like the world still isn't ready yet.

        • By Rebelgecko 2026-03-14 17:45 · 2 replies

          If the policies are public, there's a lot more transparency. eg my city of millions of people has a subreddit. The head mod bans people for criticizing a certain dog breed. This "policy" is pretty opaque, but if the AI enforced subreddit rules say "thou shalt not mention the dog's breed when commenting on articles about someone being mauled to death", more people would be familiar with the rule (and perhaps there would be more organized discussion).

          I was on a subreddit for a while that voted on rules and had a rotating dictator to facilitate them. It worked decently well, although it never got to the point where the sub was brigaded. This was also pre-LLM so moderation was still a big time sink and the sub eventually fizzled out

          • By 7bit 2026-03-15 10:30

            Bad mods will ignore the rules, regardless how transparent they are.

            We're talking about a multi-layer problem where every solution opens another multi-layered problem.

          • By TheCleric 2026-03-14 19:07 · 1 reply

            That’s because certain dog breeds aren’t more likely to maul and saying otherwise is ignorant fear mongering.

            • By Rebelgecko 2026-03-16 3:15

              Then the subreddit can codify that by having a policy that ignorance is a bannable offence!

      • By tokyobreakfast 2026-03-14 17:52

        Try criticizing Apple or China or other sacred cows and see how quickly your post gets flagged.

    • By pndy 2026-03-14 19:36 · 1 reply

      It's either some personal unquenched thirst for power, or he thought that the new Digg would be as popular as it was ~20 years ago and that he'd be able to control submitted content and get paid for "promoting" it.

      I've seen something similar over the last ~17 years: a bunch of same terminally online accounts uploading content from our local media outlets on country-related subs and local digg-like sites - both active and long defunct for 10 years now. Some of those users even appeared on mastodon and bsky.

      The social link aggregators were created for people to share their favorite links and places from the Internet so others could see them, have fun, expand their knowledge, and so on. For me it was the cherry on top of the web 2.0 period, when everything was fresh, beta, and innocent. That lasted for a while, up until other people and entities figured out that such sites could be used to promote their content and insert ads. The next stage was, and remains to this day, opinion control by "curating" the content and/or the reactions in discussions; still done by humans, but with a more prevalent presence of convincing bots.

      Reddit itself lost its impartial and independent status a while ago. Big subs related to media franchises or big corporations are heavily controlled, to the point that it's impossible to submit critical content. It's all a happy world seen through rose-tinted glasses, or, as some say, toxic positivity. There are still niche places where moderation is limited, but as I said last time, from my own experience: such subs were targeted by bad actors who, by submitting forbidden content, tried to trigger lockouts so they could later take control of them.

      HN isn't free of some of these issues either. While discussions still remain at a good level (though degradation to Reddit levels already happens), there's no control over content: there are accounts that do nothing but upload links every few minutes or hours.

      I'm not sure if it's possible to have link aggregators or multi-thematic forums that could be free of such... issues. A similar problem with establishing "real estates" happened on Lemmy when part of the userbase decided to abandon Reddit due to controversial changes.

      • By bigbadfeline 2026-03-14 21:14

        An outstanding summary of the most important trends on the web: yes, it's being turned into a one-way propaganda-pushing machine, much like the mass media before it. AI and bot farms made that transformation cheap and ubiquitous; the profit motive, aka bribery, takes care of the rest.

        I don't think it's an unsolvable problem although new legislation is continuously being considered in order to make the solution harder. Still, not impossible.

    • By jayd16 2026-03-14 18:21 · 3 replies

      Has any popular site tried an approach where you dynamically select your mods as more of a content filter than global moderation?

      Most places can hide posts and block users at the user level, so why not select which mods can do that for you?
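One way the idea could work, sketched under the assumption that each mod publishes a removal list and users opt in to the mods they trust. All names here are hypothetical:

```python
def personal_feed(posts, removals_by_mod, chosen_mods):
    """Apply moderation as a per-user filter rather than a global delete.

    posts: post IDs in the shared, unmoderated feed.
    removals_by_mod: mod name -> set of post IDs that mod would hide.
    chosen_mods: the mods this particular user has opted into.
    """
    hidden = set()
    for mod in chosen_mods:
        hidden |= removals_by_mod.get(mod, set())
    return [p for p in posts if p not in hidden]
```

A user trusting only "modA" sees a different feed from one trusting both mods, and unsubscribing from a heavy-handed mod instantly restores everything that mod hid; nothing is ever deleted for everyone.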

      • By dredmorbius 2026-03-16 1:40

        Kind of?

        On Google+, it was possible to individually block specific profiles.

        This meant that the blocker wouldn't see the blockee's posts and the blockee wouldn't see the blocker's, which is pretty much expected behaviour.

        But on third-party threads, if a blocker/blockee were both commenting, others could see their comments but they'd be mutually invisible. As the platform matured and the number of such blocks increased, this reached a point where that platform behaviour became common enough that it was frequently commented on. If the thread host isn't sufficiently diligent in their own moderation (effectively each post author is moderator of that thread), it's also possible for such discussions to devolve quickly.

        I guess Usenet would be another case where individual killfiles were often applied.

        This isn't quite the same as your proposal, but it does raise the challenge that if there are multiple moderation regimes in play, there is no canonical view of a discussion, leading to both potential confusion over what has or hasn't been said, and potential derailment (or similar behaviours) if a sufficiently disruptive participant is not universally blocked. The canonical flamefest, after all, is often just two profiles / participants responding endlessly.

        Diaspora* is similar to G+, except that on third-party threads the blocks don't work, so that if A blocks B but C does not block B, then A and B will see one another's comments on C's posts / threads. This ... can be frustrating.

        Oh, and the post-author-as-moderator model also somewhat resembles what you'd suggested, in that you could choose to participate on a particular profile's posts given that profile's moderation practices. I found that there were several people who did an excellent job of this, and who were, in effect, quite effective salon hosts, which was how I came to see the G+ moderation model over time. This differs from what you suggest in that every participant on those threads had the same moderation experience, but it was possible to choose moderation practices based on which profiles' threads you chose to participate on. And I'd definitely avoid poorly-moderated hosts.

      • By DougN7 2026-03-14 22:14 · 1 reply

        How do you make sure each human gets only one vote?

        • By jayd16 2026-03-14 22:43

          It's not a vote. You select your own set of filters / mod lists.

    • By guerrilla 2026-03-14 9:10 · 5 replies

      Yes. Subforums should elect mods democratically.

      • By dijit 2026-03-14 10:28 · 2 replies

        Sadly, a nice idea that is painfully naive given how computers are used in reality.

        One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.

        That which would make a vote valid; can (and will) be gamed.

        • By 9rx 2026-03-14 18:15

          > it’s inherent to the internet.

          Who said the election needs to take place on the internet?

          A paper ballot-style election, while not perfect either, works well enough in practice.

        • By leptons 2026-03-14 17:30 · 1 reply

          It could work depending on how it is set up. Maybe only accounts that are n years old get a single vote, and don't let any random 2-day-old account vote at all.

      • By johannes1234321 2026-03-14 9:29 · 2 replies

        As long as sub forums can be created easily, users may pick their sub forum and thus, indirectly, their moderator.

        In this setup, having users elect the moderator leads to cases where small groups create their special-interest group and then some trolls challenge the moderator.

        There may be some oversight of the large sub forums, but not all.

        • By gzread 2026-03-14 12:24 · 1 reply

          Necessary for this is that subforums can't have unique names. If a bad mod can squat on all the words like "computers", "programming", and "coding", newcomers aren't going to know the best subforum is called "RealProgNoBadMod".

          • By johannes1234321 2026-03-14 12:47 · 1 reply

            Yes, the "important" ones need some special attention. "Democracy" where anybody can create an arbitrary number of accounts is questionable, however.

            The vast majority of sub forums are more targeted and smaller to begin with.

            • By gzread 2026-03-14 13:42 · 1 reply

              Squatting is bad no matter how niche the topic

              • By pixl97 2026-03-14 17:38

                Squatting also invites corruption and selling rights to control what is posted to a sub.

        • By leoedin 2026-03-14 12:59

          You see this in city-focused subreddits. But the reality is that the name is power. New users type in their city and join the original one. The hostile mods suppress mention of the new one. It never manages to get critical mass.

      • By twic 2026-03-14 11:45 · 2 replies

        Stack Overflow does this and it works far better than arbitrary tyrant style moderation.

        • By Shog9 2026-03-14 15:45

          Crucially, SO's election system needs to be bootstrapped: users aren't eligible to vote until they have a history of participation. The level of participation is fairly trivial, but it provides enough signal to allow a reasonable detection (and elimination) of bot / sock puppet networks without resorting to crude measures like blacklists or "bot tests".

          For new sites, this meant that the bulk of moderation was done by employees, followed by employee-appointed temporary moderators. This dramatically reduced abuse, but also reduced the explosion of new sub-communities that sites like Reddit thrived on.
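The bootstrapping gate described here, where voting rights require a modest participation history, might be sketched like so. The thresholds are illustrative, not Stack Overflow's actual numbers:

```python
def eligible_voters(users, min_age_days=30, min_actions=10):
    """Participation gate for moderator elections.

    Requiring both account age and a history of activity is a cheap
    filter against freshly created sock-puppet networks, without
    resorting to blacklists or bot tests.

    users: iterable of (name, account_age_days, action_count) tuples.
    """
    return {name for name, age, actions in users
            if age >= min_age_days and actions >= min_actions}
```

The thresholds can be trivial and still work, because the cost of aging and exercising hundreds of puppet accounts grows linearly with the size of the network.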

        • By gzread 2026-03-14 12:25 · 1 reply

          Stack Overflow is dead now.

          • By PaulHoule 2026-03-14 13:44 · 2 replies

            I don’t think it was ever very good.

            • By freedomben 2026-03-14 16:51 · 1 reply

              It was pretty decent in the mid and late 00s. The community started turning toxic in the very early 10s and by about 2015 was quite poisonous. The saddest part is that the problem was known and spoken about frequently, but the response to that from staff and/or high-level mods was to just double down and dig in.

              • By 9rx 2026-03-14 18:47 · 1 reply

                I'm too old, but it seemed like it would be decent for a beginner in the mid-to-late 00s. But it never handled advanced, difficult topics very well.

                • By freedomben 2026-03-14 23:31

                  For sure, advanced difficult topics were never really their forte, although it was really common to get great book or blog recommendations via the comments. For me, the golden combination was a good book on the language/framework/topic I was studying, supplemented with specific Q&A from Stack Overflow. I have extremely fond memories of learning C++ and Qt that way (although that Qt book was a little rough, at least there was a Qt book; nowadays every book just seems too outdated to be helpful).

            • By gzread 2026-03-14 14:57

              Probably, but now it's actually dead by all the metrics. People ask LLMs instead because they won't close their questions.

      • By Gud 2026-03-14 9:12 · 1 reply

        Why? Genuinely curious.

        I am a big proponent of (direct) democracy in general.

        • By mavhc 2026-03-14 10:11

          The internet is way behind on democracy. In general everyone likes democracy until they're in charge; then they realise they're the best person to be in charge and that the idiots who vote don't have a clue, and should probably be banned if not beheaded for speaking out of turn.

          You'd have to weight votes by some kind of participation metric to solve the problem of there being very little authentication of the voters.

      • By 9rx 2026-03-14 18:33

        A democratic election requires that the elected be your employee, someone you work with on a regular basis to direct in the job. That works (ish) in government, where the people doing the hiring have heavily invested life interests in it succeeding.

        Does a subforum offer the same? Once the mod is elected, are you going to sit down with him each day to make sure he is doing the job to your wishes and expectations? I say (ish) in government because it often doesn't even work there, even where people have heavily invested life interests, with a lot (maybe even the vast majority!) of people never getting involved in democracy. A subforum? Who cares?

        If there were to be elections, it is unlikely they could be anything other than authoritarian, with the chosen one becoming the ultimate power.

    • By Contax 2026-03-14 18:36 · 2 replies

      I've always thought that on Reddit (or Digg, or Lemmy, or others) common words, brands, names... should be broad "topics" or categories that nobody can claim (no first come, first served). You should be able to add a sub/community under a topic, but just like everyone else, and then users interested in said topic could add and exclude different subs to taste.

      • By recursivecaveat 2026-03-15 0:15

        I always thought it would be interesting to separate the post-side and the read-side in such a manner. You'd post to #programming, and then the reader would subscribe to #programming/user_xyz to pick up the moderation feed with xyz as the god-mod. This solves the bootstrapping problem where new subs have nothing to read. Unfortunately it's hard to do persistent standards keeping that way. If xyz has a no-memes policy do you ban all posts from everyone who ever posts one to the global tag, or do you individually inspect every post?
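A sketch of the write-side/read-side split described above: everyone posts to a bare tag, while each reader subscribes to the tag as seen through one moderator's removal feed. The #programming/user_xyz naming follows the comment; everything else is invented:

```python
def resolve_feed(tag_posts, mod_removals, subscription):
    """Resolve a read-side subscription like "programming/xyz".

    tag_posts: tag -> list of post IDs, the shared write-side feed.
    mod_removals: (tag, mod) -> set of post IDs that mod has removed.
    Subscribing to a bare tag (no mod) yields the raw, unmoderated feed.
    """
    tag, _, mod = subscription.partition("/")
    removed = mod_removals.get((tag, mod), set())
    return [p for p in tag_posts.get(tag, []) if p not in removed]
```

This sidesteps the bootstrapping problem, since a brand-new mod's feed already contains every post under the tag, but it leaves open the standards-keeping question the comment raises: the mod can only subtract, post by post.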

      • By creamyhorror 2026-03-15 4:04

        100%. I'm building a discussion system with this approach, so that no one forum/community can claim a topic exclusively.

    • By napolux 2026-03-14 18:49

      Same for Italian forums. I don't believe bots and spam are fully to blame.

      It was just a copy of Reddit. How useful is that?

    • By reactordev 2026-03-14 14:43

      This is why moderation choice should be based on metrics, not first come, first served.

    • By basisword 2026-03-14 11:57 · 1 reply

      >> I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and he took all real estate that is already available on Reddit that is related to said country.

      Are you sure? My understanding is that accounts were only allowed to create two communities.

      • By gzread 2026-03-14 12:26

        On Reddit? It's horribly opaque, but there seems to be a special class of people to whom the normal rules don't apply.

        That limit wouldn't stop you from creating more communities with more accounts anyway.

  • By mikeocool 2026-03-13 21:35 · 17 replies

    Kinda seems like we’re rapidly headed for the complete collapse of the internet as we know it.

    Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for the sake of promoting something or farming karma.

    And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.

    Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?

    • By deanc 2026-03-147:375 reply

      The bot problem cannot be solved. Even if you strongly authenticate, people are letting bots act on their behalf (Moltbook is a great example of this), and what's to stop people from doing that in the future? You can let a bot build your identity and reputation autonomously, with all the benefits that come with that.

      This happens now on OnlyFans too. Content creators hire agencies which, in the best case, outsource chatting with "customers" to armies of cheap labour in Asia, and in the worst case use bots.

      The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy of not allowing AI posts and posters, but do you honestly think that's going to work? I would bet that within the next year a top HN poster is outed as using AI to post on their behalf.

      [1] https://en.wikipedia.org/wiki/Dead_Internet_theory

      • By tlonny 2026-03-1410:374 reply

        Indeed - the future is RL meet-ups and small, intimate online communities.

        Perhaps not the worst thing in the world?

        • By heavyset_go 2026-03-1416:061 reply

          Counterpoint: https://reddit.com/r/MyBoyfriendIsAI/

          People will prefer the bots that give them head pats and tell them they're so smart and that they love them

          • By 0dayz 2026-03-1416:26

            I don't necessarily think that's a barrier to people socializing more offline or being socially productive online.

            Especially considering that the bigger barriers seem to be things we already have:

            In Asia (especially Japan), it's host(ess) clubs.

            Globally, for friendship, it's influencers exploiting loneliness.

            Those are the things I think have to go before people embrace offline socialization or use their online time better.

        • By Waterluvian 2026-03-1415:56

          This is the optimistic take I’ve held.

          Bots get so good that they become indistinguishable from humans. If that’s true then it doesn’t actually matter if your community is all bots. But it does matter because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in-person.

          Human simulacra will one day cause a repeat of this issue. Then we’ll have a whole Blade Runner 2049 debate about what exactly authenticity is.

        • By jihadjihad 2026-03-1414:28

          > Perhaps not the worst thing in the world?

          Definitely not. “Terminally online” is as deleterious as it sounds.

        • By deanc 2026-03-1419:40

          Yeah, you're completely right. Maybe this will be the impetus a lot of people need to detach from online.

        • By margalabargala 2026-03-1418:27

          It's the same freelance advertisers who optimistically refer to themselves as "influencers".

        • By echelon 2026-03-1413:042 reply

          The word "content" is gross.

          "Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...

          The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.

          "Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.

          • By gzread 2026-03-1413:421 reply

            Since it's OnlyFans, I'd think something like "porn stars" or "online girlfriends"

            If it were YouTube, "YouTuber" is a start, but you could also be a "YouTube science communicator" or something

            • By echelon 2026-03-1416:391 reply

              Creator is a fine word to use in place of YouTuber. And vice versa.

              But what do you call their output?

              What do you call an illustrator's output? A photographer? What about when all of that shows up on your feed collectively?

              Content is a gross word.

              • By gzread 2026-03-1516:48

                You can call it video essays, Let's Plays, news reports, slop videos, and so on.

                To repeat fgiesen/ryg: to a content plumber, it's all content, just like the mail system delivers packages and doesn't really care what's in them. The video engineers at YouTube don't care whether it's a news report or a slop video, as long as the frames get on your screen. However the sender and receiver had better care what's in the package or something's gone horribly wrong.

          • By majicDave 2026-03-1417:54

            Creations?

      • By JimDabell 2026-03-149:207 reply

        > people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.

        Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
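        A minimal sketch of how such an identifier could work (everything here is hypothetical; real verifiable-credential systems use blind signatures or zero-knowledge proofs rather than a bare HMAC at the identity provider):

        ```python
        import hashlib
        import hmac

        # Hypothetical secret held only by the identity provider (in practice, an HSM).
        IDP_SECRET = b"example-only-secret-do-not-use"

        def pairwise_pseudonym(verified_identity: str, site: str) -> str:
            """Derive a stable per-site pseudonym from a verified real-world identity.

            The same person always gets the same pseudonym on a given site, so a
            ban sticks; different sites get unlinkable pseudonyms, so sites can't
            collude to track the person, and neither learns who they are.
            """
            message = f"{verified_identity}|{site}".encode()
            return hmac.new(IDP_SECRET, message, hashlib.sha256).hexdigest()
        ```

        The site only ever sees the pseudonym; banning it is permanent for that person, because registering again yields the same string.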

        • By dom96 2026-03-149:59

          This is exactly right. The problem is the friction that this kind of system adds.

          Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...

        • By Balgair 2026-03-1414:30

          There is the other side of this too: Real people - fake posts.

          So, you have other folks on here already saying that the code their bots write is better than their own, right?

          How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less stakes. I imagine that non native speakers will take their posts and go to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.

          So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.

        • By gzread 2026-03-1412:28

          The ability to make a new account is an important defense against abusive bans. You don't want it to be possible for Google to unperson you.

        • By deanc 2026-03-1419:43

          You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account and you can then use that to authenticate. It's used for all public sector services and a few others with strict accreditation.

        The issue is that it solves nothing if you can't distinguish between text that is written by AI and text that isn't, regardless of strong authentication.

        • By flomo 2026-03-1410:281 reply

          IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.

          • By hrimfaxi 2026-03-1414:101 reply

            > Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.

            How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.

            • By flomo 2026-03-150:53

              Because they can. There's no real commercial motivation for these HN bots either.

        • By chasebank 2026-03-1414:271 reply

          I've talked about this on here before, but we think the solution is an auth layer built on top of credit score, through an intermediary like Credit Karma. The score itself doesn't really matter, but it does solve big problems.

          Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews/comments from users with credit scores over 650; they have less incentive to be astroturfing.

          But yes, I think your conclusion is correct. This is the only way.

          • By taikahessu 2026-03-1416:24

            How is that credit karma accumulated? By other "users"? Does the intermediary guarantee that the account is a valid person, now and always, and that it hasn't been sold or stolen? I mean, we will always need some middlemen, I guess?

        • By Terr_ 2026-03-149:295 reply

          I'd rather have a system where there's a small investment cost to making an account, but you could always make another.

          Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."

          Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
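          A sketch of that vending machine (names hypothetical), including the destructive-check property:

          ```python
          import secrets

          class TokenDesk:
              """Toy model of the City Hall vending machine: anonymous one-shot tokens."""

              def __init__(self):
                  self._unspent = set()

              def sell(self) -> str:
                  # Buyer pays $X to a charity out-of-band; the machine prints a
                  # random token with no link back to the buyer.
                  token = secrets.token_urlsafe(16)
                  self._unspent.add(token)
                  return token

              def redeem(self, token: str) -> bool:
                  """Destructive check: asking whether a token is valid consumes it,
                  so nobody can probe a token's status without spending it."""
                  if token in self._unspent:
                      self._unspent.remove(token)
                      return True
                  return False
          ```

          A forum calls `redeem` once at signup; a stolen or guessed token fails, and a bot army has to buy one token per account.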

          • By gzread 2026-03-1412:291 reply

            Something Awful made you pay $10 for an account, directly to the forum. If you got banned, you could pay another $10 to try again. Somehow this didn't lead to bad incentives, even though you'd think it would.

            • By pardon_me 2026-03-1415:071 reply

              The ban reason and the moderator's name were public on Something Awful, which allowed the community to respond (actively or passively), and more senior moderators/admins to take public action against rogue moderators. The transparent audit trail somewhat countered the incentive to ban, but a lot of people also treated getting banned as a game.

              • By koolala 2026-03-1415:392 reply

                Did they ban for this rule often?

                "Am I making a post which is either funny, informative, or interesting on any level?"

                I hate how Reddit mods ban any post they don't like as 'low effort / shit / spam' when that standard is completely vague.

                • By AlexeyBelov 2026-03-157:23

                  It's because you can't reasonably put everything in the rules. They would be thousands of words and still have holes and special carve-outs, _and_ users will still argue about rules application if you say your rules cover everything.

                  It's more reasonable to have "a spirit of the law", so to speak.

                • By ranger_danger 2026-03-1419:172 reply

                  Lemmy is even worse on the moderation front, even with public logs: https://a.imagem.app/G3R9xb.png

                  • By gzread 2026-03-1516:451 reply

                    Lemmy is software, like phpBB. You wouldn't say phpBB has bad moderators.

                    • By ranger_danger 2026-03-1517:07

                      In my experience, the people across different lemmy instances are not as diverse or unique as you might think. You might disagree but that's ok.

                  • By emaro 2026-03-1419:491 reply

                    Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.

                    • By ranger_danger 2026-03-1420:321 reply

                      For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.

                      If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.

                      Of course this also creates problems in the other direction, like servers that ignore deletion requests.

                      That, combined with the large number of blocked instances across the board, means you get into this "which direction would you like to piss into the wind" situation where you have no idea how many people/instances will actually see your message, if any.

                      • By gzread 2026-03-1520:37

                        Only on sublemmys owned by that server.

          • By tlonny 2026-03-1410:321 reply

            I’d love something like this implemented for email.

            Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).

            Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.

            You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.

            This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
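            A toy model of those economics (the `Inbox` class and the 50p figure are just the hypothetical above):

            ```python
            from dataclasses import dataclass, field

            TOLL_PENCE = 50  # hypothetical toll for a first unsolicited email

            @dataclass
            class Inbox:
                """Toll-gated delivery for a single recipient."""
                authorised: set = field(default_factory=set)  # senders who mail for free

                def deliver(self, sender: str, paid_pence: int = 0) -> bool:
                    if sender in self.authorised:
                        return True
                    if paid_pence >= TOLL_PENCE:
                        # Paying the toll pre-authorises future mail until revoked.
                        self.authorised.add(sender)
                        return True
                    return False  # bounced: unsolicited and unpaid

                def revoke(self, sender: str) -> None:
                    """Recipient kicks a sender back to paying the toll."""
                    self.authorised.discard(sender)
            ```

            At bulk-spam volumes the tolls dominate the spammer's budget, while a legitimate stranger pays once and then mails for free until revoked.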

            • By jimmydorry 2026-03-1412:37

              >transactional emails from various services that you’ve signed up for

              These are one of the main culprits of unwanted emails... and a toll system would make them all the more valuable for the even worse actors to take advantage of.

          • By JimDabell 2026-03-149:421 reply

            Do you think there is a price point that locks out spammers without locking out poor people?

            • By bobthepanda 2026-03-1410:041 reply

              Probably not; the problem is that spammers/scammers are looking for whales, and if you are talking about draining the retirement accounts of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.

              • By PaulHoule 2026-03-1413:471 reply

                In the case of the 419 scams I used to ask “who would expect $20M to fall out of the sky?” The obvious answer is “someone who already had $20M fall out of the sky”

          • By Angostura 2026-03-1415:08

            When Digg restarted, you had to pay $5 to create an account.

          • By deanc 2026-03-1419:52

            What does it matter? If there is incentive enough people will just pay and let their bot act on their behalf.

      • By abetusk 2026-03-1420:411 reply

        The bot problem can be solved.

        Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.

        In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other 'human tests', allowed most spam to be effectively mitigated.

        Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.

        [0] https://github.com/TecharoHQ/anubis
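        The general hashcash-style scheme behind tools like this fits in a few lines (this is not Anubis's exact protocol; the difficulty is kept low for illustration):

        ```python
        import hashlib

        DIFFICULTY_BITS = 12  # illustrative; each extra bit doubles the client's work

        def leading_zero_bits(digest: bytes) -> int:
            value = int.from_bytes(digest, "big")
            return len(digest) * 8 - value.bit_length()

        def solve(challenge: str) -> int:
            """Client side: brute-force a nonce whose hash clears the difficulty bar."""
            nonce = 0
            while True:
                digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
                if leading_zero_bits(digest) >= DIFFICULTY_BITS:
                    return nonce
                nonce += 1

        def verify(challenge: str, nonce: int) -> bool:
            """Server side: a single hash checks the client's work."""
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            return leading_zero_bits(digest) >= DIFFICULTY_BITS
        ```

        The asymmetry is the point: the client burns on the order of 2^DIFFICULTY_BITS hashes per request while the server spends one, so a bot network pays in energy rather than in captcha-solving labour.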

        • By deanc 2026-03-1421:041 reply

          I don't see how Anubis solves anything. If a human lets the bot control a completely vanilla computer (which there is now a lot of tooling for), then how is it going to stop that?

          • By clouedoc 2026-03-1423:20

            At most, PoW makes it a bit annoying to scale: you need to add some form of RPC that delegates solving to a beefy+cheap Hetzner server. If you're really scaling and it's getting expensive, you can rent a GPU to do batch solves.

            PoW systems like Anubis are self-soothing.

      • By IAmGraydon 2026-03-1414:374 reply

        The bot problem can easily be solved. It’s just that no one likes the cure. Think about this for a minute: what would happen if you had a country where all its citizens could act anonymously with no consequences, no reputation, no repercussions, and no trace? Would you want to go there? Live there? No, because it would be a lawless wasteland dominated by the worst of the worst.

        Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.

        You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.

        No, it’s not the internet we all wanted, but humanity has ruined the one we have.

        • By dandaka 2026-03-1416:441 reply

          I can imagine an "anonymity" or "reputation" filter attached to every interaction on the internet. Enabled by default, but you can disable safe mode and watch the bots having fun.

          Also, for me the problem is not anonymity itself, but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.

          • By davedigerati 2026-03-1419:20

            I think this is a great way to frame the conversation and a possible solution: reputation. Things like accumulated karma or credits and IRL connections (big data will love this) all begin to feel dystopian, whereas reputation, I believe, is something everybody can get behind. It can absolutely remain anonymous while still benefiting from IRL meetups for big reputation bumps (just use your handle). We all hang out in lots of places online; let that rep build and be used everywhere. I'm pretty sure they were trying to do something like this in the fediverse, but I haven't touched base on it in a long time...

        • By pixl97 2026-03-1418:441 reply

          So you are missing something here. Up until recently, IRL was anonymous, because capturing all that data about what people are doing was expensive and difficult to process. Cameras weren't everywhere either.

          • By IAmGraydon 2026-03-1420:21

            If you lie to me in the real world, I know what you look like and won’t trust you again. You cannot change your face. If you punch me in the real world, I can punch you back. If you stab me in the real world, you’re likely going to jail once the police catch up to you. You don’t do any of those things because the lack of anonymity imparts consequence. There is no anonymity in the real world unless you run around in a full face mask, in which case no one will trust you anyways.

        • By Levitz 2026-03-1419:42

          >The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.

          Anonymity is not the problem though. We've gone with anonymity for a long while and it has worked fine. Would a removal of anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.

          The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.

          And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest, as long as bots increase engagement, does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.

        • By deanc 2026-03-1419:45

          I suppose reshaping the fundamental social contract with the internet and the computers we use to access it would solve the problem.

    • By kdheiwns 2026-03-146:455 reply

      With AI running rampant, it seems security through obscurity is basically the best thing we have. Everyone knows reddit, facebook, xitter, etc so any clown can and does have bots running loose. HN is "obscure" in that most normies don't know about this place, and so it's relatively safe from the floods of spam. But I think it's just a matter of time until non-tech people start looking for those few bastions of human comments online, come across this place, and a great flood begins and it'll never be undone. After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.

      • By tlonny 2026-03-1410:241 reply

        HN may not be “mainstream” but it is certainly _very_ vulnerable to bot spam given the topics discussed and the make-up of the audience.

        You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…

        • By dangus 2026-03-1414:483 reply

          I would say that HN has a lot of features that would be seen as draconian in how much they limit your interaction by other platforms.

          You can barely comment before you are rate limited.

          You can’t upvote until you’ve been around a pretty long time.

          New accounts are given a green badge of dishonor that makes users scrutinize their comments more.

          I’m not saying these are bad things but they’re probably too restrictive for a social media network that’s just meant to be a good fun time.

          • By gzread 2026-03-1415:021 reply

            If you are rate limited, a moderator has manually applied a rate limit to your account. Accounts are not rate limited by default. You can appeal the decision by emailing hn@ycombinator.com.

            • By Cpoll 2026-03-1417:42

              I think there's a short-term rate limit applied to everyone, e.g. you get a message if you try to post three replies in the same minute. I've seen it once, and I don't think I'm active enough to have earned a manual flag.

          • By HughParry 2026-03-1420:11

            I’ve never seen people on the likes of BlackHatWorld selling Hacker News accounts or services. The glass-half-full take on this is that HN is surprisingly robust in its ability to deal with vote manipulation.

          • By qingcharles 2026-03-1415:151 reply

            The karma points you get on HN are worthless, which I think is a bonus. They don't buy you anything. On Reddit, for instance, many parts of the site are walled off until you have "farmed" enough karma to participate.

            • By 0dayz 2026-03-1416:301 reply

              Not exactly true.

              You get the right to downvote, and if I promote my totally-not-a-scam product on HN, people will check my user account and see: oh wow, over 9000 karma? Gotta be trustworthy. When in truth it's just karma farming.

              HN does limit some of it, but it's not a panacea.

              • By himata4113 2026-03-1417:281 reply

                I don't know, never found much value in karma. I recreate an account at least once a year for no particular reason and it roughly takes me a week to get enough karma to do what is important (flagging posts).

                • By wafflemaker 2026-03-1418:451 reply

                  My account is literally 4 years old and I'm not even halfway there.

                  How do you do it?

                  And I'm trying to limit myself from saying unwanted things like criticizing ** or saying something nice about **. (Self censoring to avoid downvotes).

                  Maybe I should be more active.

                  • By himata4113 2026-03-1510:38

                    I don't know. Just have something niche to share, be interesting. Don't be afraid of downvotes.

      • By r721 2026-03-148:101 reply

        Dang told me in 2019 that HN gets 150M page views a month, so it's not that obscure actually:

        https://news.ycombinator.com/item?id=21201120

        • By ahofmann 2026-03-148:301 reply

          150M page views a month is peanuts, and very far from the "social" networks' numbers. I don't have their numbers, but I know how many page views we had in 2011 while running a German browser-game community.

          • By armchairhacker 2026-03-1410:331 reply

            The internet seems to have grown massively within the past couple years (unfortunately, almost certainly because of bots). I bet the number today is orders of magnitude higher.

            • By troosevelt 2026-03-1414:591 reply

              I would bet money that HN's traffic is not orders of magnitude higher than 2020. HN is not as popular as HNers think it is.

              • By armchairhacker 2026-03-1419:31

                We don't disagree. The extra traffic is almost if not entirely bots (especially scrapers)

      • By Mountain_Skies 2026-03-147:382 reply

        I've asked ChatGPT a question about something I read in a thread here and it responded with a comment from that thread, even though the thread was less than an hour old. HN is well known in the tech community and there are certain subjects, especially anything involving Israel or India, that nearly instantly result in a flood of comments from bad actors. HN isn't Reddit but it's also a shadow of what it once was, which is driving away more of the productive participation in favor of agenda-based posting.

        • By gzread 2026-03-1412:311 reply

          Search engines seem to index HN in near real time. They must have custom scraping code to follow the incrementing post IDs.

          • By enoint 2026-03-1415:09

            Claw plugins for HN APIs arrived pretty early.

        • By WesolyKubeczek 2026-03-149:221 reply

          Note that these topics often involve comments which you can predict very easily. Internet users are like that, agenda or no. Wasn’t it in the heyday of forums that you could recognize the most prolific/annoying members by their style and vocabulary? A model should have no problem pulling such things off.

          • By WarmWash 2026-03-1415:28

            It's pretty regular that for a major post, you can find the same few highly upvoted comments on all the platforms carrying the story.

      • By firecall 2026-03-149:091 reply

        > After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.

        Which would be totally fine with me TBH.

        Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!

        • By eikenberry 2026-03-1419:34

          I was thinking the same thing, that this wouldn't necessarily be a bad thing. I'm curious how far it will go.. if we'll get invite-only mesh networks with self-contained mini-internets and the like.

      • By ultratalk 2026-03-147:351 reply

        Eternal AI September.

    • By october8140 2026-03-148:317 reply

      The future is human curated content. Provide the same experience people get today but without the noise. Give them just the good stuff and don't let just anyone make a post. A book has an author, a movie has a director, maybe websites can have webmasters again who filter through the garbage for you.

      • By spicymaki 2026-03-1412:302 reply

        The future is meeting in person and watching performers actually perform live.

        • By SoftTalker 2026-03-1416:43

          You've nailed it. Social media is no longer and will never again be a substitute for real human interactions. It sort of worked when it was mostly real humans, but that era is ending and not coming back. Algorithms are now controlling what you see, and bots and agents are increasingly creating and posting most of the content.

        • By pixl97 2026-03-1418:49

          Currently the biggest venues with live performances are swamped, and tickets get scalped at huge markups.

      • By Gud 2026-03-149:141 reply

        It’s what I’m trying to accomplish with my website (link is in my profile). Just trying to crank up the signal-to-noise ratio.

        • By rambambram 2026-03-1410:341 reply

          Nice. I like how clicking a tag also makes the word 'tag' light up.

          • By Gud 2026-03-1412:222 reply

            Thanks for the kind words!

            I got encouraged by another HN poster a few days ago, let me know if you have any suggestions.

            I’m always open to criticism.

            • By rambambram 2026-03-1418:58

              Everything clicks nice, so to speak. A nice UI you have there.

              I would suggest you explain what it's about in one sentence, just like you explain in your HN profile. The About-page says not so much. You can add some explanation there, or even just one sentence at the top of the homepage (or other pages).

            • By Torwald 2026-03-1413:402 reply

              I got:

              > Failed sending verification e-mail to XXX@XXmail.XXX, please contact administrator on stonky@stonkys.com

      • By nicbou 2026-03-149:56

        AI is sucking up that content and denying traffic to its creators. This model is becoming obsolete.

      • By NetMageSCW 2026-03-1418:46

        How did Yahoo work out compared to Google?

      • By kaizenb 2026-03-149:58

        A curator with great taste and judgement is king.

      • By b112 2026-03-148:512 reply

        Yes, precisely.

        This means that only sites which verify identity will have any value in the future. And by verified, I mean checked against government ID and confirmed as real.

        No amount of sign up fee works as an alternative.

        Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.

        You still get a handle and publicly facing nick if you want it.

        The company which handles this correctly will have a big B after it. Digg actually has a chance at this.

        It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.

        But Digg could relaunch this way. And as exhibited, this is now the only way.

        The age of the anonymous internet is over, it's done. People not realizing this are living in the past.

        Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases and hacked user PII are technical and legislative problems, and not relevant to whether or not this happens.

        Because it will happen, and is happening.

        It should be noted that falsifying ID is a crime. Fake IDs coupled with computer-fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where ecommerce and discourse are online... and the general public does.

        And the public has exhibited a complete lack of care about privacy regardless.

        • By lotsoweiners 2026-03-1418:48

          I think people who want to stay anonymous just won't participate anymore. I've enjoyed using this site, Reddit, etc., but I couldn't care less about dropping them if I need ID verification to access them. Someone will probably create a new communication method to replace this.

        • By pixl97 2026-03-1418:541 reply

          >No amount of sign up fee works as an alternative.

          Simply put, money is worth too much; at some point someone will want access to this human audience and offer too much to be resisted.

          >It should be noted that falsifying ID is a crime

          Lol, no one gives a shit on the internet. People will use stolen IDs to get accounts. If the network is lucrative enough, governments will provide fake IDs to spread propaganda.

          • By b112 2026-03-1420:02

            > Lol, no one gives a shit on the internet.

            This is now, not the future. Times change.

      • By kolinko 2026-03-149:561 reply

        human curated -> human moderated. I, for one, don't care if it's ai, or human-written. I care if it's interesting/useful.

        • By kaizenb 2026-03-1410:001 reply

          results are important, not the tools or process. (on this matter)

          • By jacamera 2026-03-1411:53

            Results over time are important. Or at least they should be.

    • By dbmikus 2026-03-1414:402 reply

      Every website needs to add the "friend or foe" system[0] so that I can mark bots to avoid their content and mark good posters so I can filter just to theirs.

      [0]: https://hackersmacker.org/
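        The idea works as a purely client-side filter: the reader keeps personal friend/foe lists and the feed is filtered locally, so no cooperation from the site is needed. A minimal sketch of that idea (the function and data shapes below are hypothetical, not the linked tool's actual implementation):

        ```python
        # Hypothetical "friend or foe" feed filter: drop posts from authors the
        # reader has marked as foes/bots; optionally restrict the feed to
        # marked friends only. All names here are illustrative.

        def filter_feed(posts, foes=frozenset(), friends_only=False, friends=frozenset()):
            """Return the posts the reader wants to see."""
            visible = [p for p in posts if p["author"] not in foes]
            if friends_only:
                visible = [p for p in visible if p["author"] in friends]
            return visible

        posts = [
            {"author": "human_1", "text": "hello"},
            {"author": "spam_bot", "text": "buy now"},
            {"author": "human_2", "text": "nice post"},
        ]

        # Keeps human_1 and human_2; drops spam_bot.
        print(filter_feed(posts, foes={"spam_bot"}))
        ```

        Because the lists live with the reader rather than the platform, everyone curates their own view instead of depending on a moderator's.
        
        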

      • By troosevelt 2026-03-1414:581 reply

        This should be separate from marking bots, because what this will really do is lock people into hearing only what they want, making discussion worse.

        • By well_ackshually 2026-03-1415:221 reply

          no, I truly do not want to read IHeartHitler88's opinion on jews, or donttreadonme09's bright opinions about how the economy would be better if we listened to Ayn Rand. I'll be very happy when they're out of my sight. If I want to have a miserable day, sure, I'll turn it off.

          Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.

          • By NetMageSCW 2026-03-1418:471 reply

            >Most of what you will say online is at best useless.

            If that is true, you are saying far too much.

            • By well_ackshually 2026-03-1419:43

              I know, it hurts. Most of what I say in this website doesn't matter. Even if it did, it's about the same thing as screaming into the void. And it applies to you too.

              The vast majority of what we post is vapid, useless bullshit.

      • By kevin_thibedeau 2026-03-1418:13

        On /. I would only mark obnoxious people as friends so I could see the friend-of-a-friend indicator and be cautious of anyone aligned with them.

    • By diacritical 2026-03-140:513 reply

      > And there’s really not much point in publishing good content anymore, since AI is just going slurp it up and regurgitate it without driving you any traffic.

      You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?

      It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?

      The people who cared about traffic and money from their posts rarely made good content anyway. Listicles, affiliate-marketing BS, SEO optimization, stretching a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed before AI. With AI I actually get less of this crap - I either skip it or condense it.

      • By wibbily 2026-03-143:581 reply

        It's two different problems. People who run review sites and blogs and such care about traffic, and not getting attribution will kill their desire to participate. People who post here and on Reddit etc. care about talking with other human beings, and feeling ignored in a sea of botspam will kill *their* desire to participate.

        • By NitpickLawyer 2026-03-146:481 reply

          > feeling ignored in a sea of botspam will kill their desire to participate.

          The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.

          I don't get why negative posts get so upvoted, become so popular on the front page, and why people still rehash outdated arguments in them. People come in to fight imagined demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha not perfect means doa" type messages. That makes me want to participate less...

          • By Defletter 2026-03-147:501 reply

            Oh that's just human nature: there's a reason why trashy tabloids continue to exist despite how public sentiment seems to universally agree that they're awful spreaders of rumour and insecurity. More people are Skankhunt42 than we'd like to admit.

            • By solus_factor 2026-03-156:542 reply

              Shouldn't we try to do something about it and not just give up by saying "oh, that's just human nature, nothing you can do here"?

              • By Defletter 2026-03-157:52

                Sure, just be aware of what you're up against: if religion teaches us anything it's that even concerted, systematic efforts over millennia to conquer human nature (eg: libido) still fail. But if you want to give it a go, by all means: one can only imagine Sisyphus happy.

              • By chickensong 2026-03-1510:18

                What do you suggest?

      • By Terr_ 2026-03-149:32

        That's a little bit apples to oranges, because I'm not monetizing this content, or paying to host it, or trying to make a personal brand, etc.

      • By intended 2026-03-148:31

        Yes and no.

        In the most simple sense - Yes, it is the content that matters.

        In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.

        We have different behaviors, expectations and capacities for talking to machines and talking to humans.

        For example, if I am engaging with a human I can expect to potentially change their minds.

        For a machine? Why bother even responding. It’s of no utility to me to respond.

        Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.

        To move the conversation forward, addressing the emotional payload behind the words used, matters more than the words used themselves.

        There are myriad reasons why humans are practically poorer for these tools.

    • By shellfishgene 2026-03-148:431 reply

      This could be positive. So far things were gamed and manipulated to some extent, with some fake content, but it was never too obvious - more of a cat-and-mouse game with filters and whatnot. Now it's so easy to fake content that robust systems will have to evolve, or most social media sites will become worthless; advertisers will catch up eventually once they realize they are paying for bot-only sites. The downside, of course, is that these robust systems are hard to imagine without complete loss of user anonymity.

      • By armchairhacker 2026-03-1410:27

        Web of trust weakens anonymity, but doesn’t eliminate it.

        - You know who your online invitees are, but not your invitees-of-invitees-of-…

        - You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
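        That accountability rule can be sketched in a few lines (threshold, names, and API are made up for illustration): each account records its inviter, and an account whose invitees keep getting banned loses the ability to invite.

        ```python
        # Toy web-of-trust ledger: tracks who invited whom and suspends
        # inviting for accounts whose invitees keep getting banned.
        # Hypothetical sketch, not any real platform's implementation.

        class InviteTree:
            def __init__(self, ban_threshold=2):
                self.inviter = {}        # account -> account that invited it
                self.banned = set()
                self.ban_counts = {}     # inviter -> number of its invitees banned
                self.ban_threshold = ban_threshold

            def can_invite(self, account):
                return (account not in self.banned
                        and self.ban_counts.get(account, 0) < self.ban_threshold)

            def invite(self, inviter, new_account):
                if not self.can_invite(inviter):
                    raise PermissionError(f"{inviter} may not invite")
                self.inviter[new_account] = inviter

            def ban(self, account):
                self.banned.add(account)
                parent = self.inviter.get(account)
                if parent is not None:
                    self.ban_counts[parent] = self.ban_counts.get(parent, 0) + 1

        tree = InviteTree(ban_threshold=2)
        tree.invite("alice", "bot1")
        tree.invite("alice", "bot2")
        tree.ban("bot1")
        tree.ban("bot2")                 # alice hits the threshold
        print(tree.can_invite("alice"))  # False: her invitees kept getting banned
        ```

        The point is that alts stay possible, but they remain chained to the account that vouched for them, so ban evasion costs the whole branch.
        
        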

    • By phtrivier 2026-03-1413:031 reply

      Asking people for money in order to read stuff, and promoting the things people are actually ready to part with real money to read, is an interesting first step. (See: Substack, Patreon, etc...)

      I know this is going to sound horrible, but: how about charging money to contribute, period? Maybe have a free tier of a couple of comments, etc... But if you want to build a troll factory, sure... show us the cash?

      • By rglullis 2026-03-1413:391 reply

        I do believe that charging for it is one way to create some friction, but it's not enough.

        Twitter is full of blue checks that are just bots and automated reply guys.

        I'm now treating all these bots as a stressor on our defense systems: we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need some good zero-knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.

        • By solus_factor 2026-03-157:051 reply

          Instead of building a Web of Trust, a better solution might be to find efficient ways of clustering people.

          Look how civil and insightful this discussion is. Why? Because people have different "quality", even if it's not politically correct to talk about it, we all know it intuitively.

          Imagine a forum, where all the HN-level people are gradually clustered together, all the rednecks with conspiracy theories, all the leftists dreaming of communism, all form their own independent echo-chambers.

          Yes, echo-chambers are bad, but let's face it, you won't be able to change most people's opinions anyway. Don't agree with me? Go on reddit, try saying something "controversial", like "men can't get pregnant", see how many people you would be able to convince :)

          • By rglullis 2026-03-157:40

            > Imagine a forum, where all (...) form their own independent echo-chambers.

            That would be a horrible nightmare. You are falling for the same identity politics trap of the ones you are implicitly criticizing. You being on HN makes you no better than the "others".

            > you won't be able to change most people's opinions anyway.

            Who cares about "changing most people's opinions"?

    • By nicbou 2026-03-149:54

      Every website that was driven by traffic is also dying. I have put nearly a decade of work into mine, and AI overviews and ChatGPT have reduced traffic by over 60%. At some point I will need to give up and find a job, and that corner of the internet will get no new original information, just rehashed slop.

    • By visarga 2026-03-1411:04

      > Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle.

      Creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillion tokens per day into our heads, while we put our own tokens into their logs. An experience flywheel or extended cognition wheel of planetary size. LLMs can reflect and detect which of their responses compound better in downstream activities and derive RLHF-RLVR signalling from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement speciality forum.

    • By nipponese 2026-03-1416:08

      Collapse of the Internet or collapse of the visual world wide web? tbh, I am a little curious to see what comes after clicking a button on a web page.

    • By elphinstone 2026-03-1414:29

      Perhaps they migrate into Discord and Instagram once they acquire better visual and voice capabilities.

    • By dana321 2026-03-1322:431 reply

      That and most of the news being behind a paywall, which they can scrape anyway.

      The internet archive is my safe haven these days, i can go back and remember the old internet.

      • By mikeocool 2026-03-1322:46

        Ha yeah, I quite like the 2003 vintage.

    • By adam12 2026-03-1416:16

      Yeah, we need human verification more than age verification.

    • By nativeit 2026-03-1410:49

      As someone who came of age before “the internet as you know it”, I am looking forward to all of the cancerous Web 2.0 OG slop and narcissism factories succumbing to their own fates. Let me tell you, the internet as we know it sucks, and the internet it ate 25 years ago is a marked improvement. We should be so lucky. Now go write a personal blog in plain text, and rejoice.

    • By AlexandrB 2026-03-1413:43

      > Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?

      They will try and OpenAI will sell favorable placement to manufacturers.

    • By dartharva 2026-03-1411:481 reply

      You mean a complete collapse of social media, not the whole internet. The internet is a telecom ecosystem and has a lot more to it than just forums and link aggregators.

      I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.

      • By gzread 2026-03-1415:05

        What would you say are the major applications of the internet? It's used for business and academia in ways that aren't going away, yes. M2M communication will stay. Social media is the largest user-facing segment and it might not. I don't have a sense of how big these sectors are relative to each other. If the largest sectors of the internet disappear, the internet shrinks a lot.

    • By bobsmooth 2026-03-140:272 reply

      Unless you're allowed to say slurs without being banned, your forum will be overrun with bots. The sanitation of the internet is the perfect breeding ground for brand-safe AI promotion bots.

      • By georgeburdell 2026-03-143:25

        4chan has bots too.

      • By seattle_spring 2026-03-146:571 reply

        Curious how you came to that conclusion. Anecdotally, places where you can slur to your heart's content like /r/conservative seem far more inundated with bots than other areas of Reddit. I feel like that's really saying something too, because Reddit has a really bad bot problem overall.

  • By MildlySerious 2026-03-1319:276 reply

    I am kind of peeved. I started a community there and diligently posted links to topical news, and it kind of became a point of reference for me. Like many others, I've put in some amount of effort.

    Now it's gone, again. Without a heads-up or, it seems, a way to get a backup out of it. Can't say I am a fan of that.

    • By calmbonsai 2026-03-1320:092 reply

      Cutting staff in no way mandates an unannounced and abrupt "hard reset".

      They could at least put it in read-only mode for a short time and allow downloading of extant community content prior to a scheduled "reset day".

      This smacks of flailing leadership and zero respect for their target user demographic.

      • By amarant 2026-03-1321:231 reply

        They say trust is their product; well, I guess they're sold out

        • By calmbonsai 2026-03-140:331 reply

          In Digg's non-defense, Kevin Rose has been a serial-rug-puller for his entire career. See also Pownce, Milk, and Moonbirds.

          The only sustained business I'm aware of is Hodinkee.

          • By al_borland 2026-03-140:48

            Kevin Rose didn't start Hodinkee, he started Watchville years after Hodinkee was already well established. Watchville merged with Hodinkee, at which point he became the CEO for 2 years.

            From what I can tell Watchville was abandoned a few years ago.

      • By idatum 2026-03-1322:411 reply

        > Digg's founder who started the company back in 2004

          Their plan is to make the internet what it was 22 years ago.

        • By dillona 2026-03-142:16

          I wonder how much it's possible to recreate some of the old magic.

          I'm sure it's impossible, but what if it's not?

    • By Jordan-117 2026-03-1416:05

      That's exactly what they did to the old Digg back in 2010 -- massive redesign that effectively deleted all old posts, comments, and favorites without warning or opportunity to back up. I feel pretty vindicated in choosing not to trust them again, though it's wild they didn't even make an effort to do better here when they claim to want to keep going.

    • By RobotToaster 2026-03-1320:562 reply

      If you're looking for a new platform lemmy is probably your best bet, at least if a server goes down everything is still saved on federated servers.

      • By MildlySerious 2026-03-1321:27

        I do have a lemmy account, but have not really returned to it in a while. Maybe I haven't found the right communities yet, but it had nothing about it that felt engaging. People upvoted, but nobody talked. No interaction. Digg felt more alive from day one. I replied to a post in a niche community with ~100 members and only afterwards realized it was @justin.

      • By ranger_danger 2026-03-1321:385 reply

        My experience with lemmy has not been nice. A majority of people there are just downright awful, and the mods are often power-hungry and overzealous in their actions. Many times entire servers are defederated from many others due to how a large percentage of their users behave.

        Example: https://0x0.st/8RmU.png

        • By pmdr 2026-03-149:131 reply

          Despite its flaws, X seems to have a better balance between what's allowed and what's not than other non-niche social networks.

          • By LightBug1 2026-03-1416:05

            Fuck X. Various people can shove that 'better balance' completely up their jaXie.

        • By joeross 2026-03-141:59

          Lemmy has the same energy as ICE: a bunch of rejects from other mod communities showing up to render their version of justice upon federated folks

        • By RobotToaster 2026-03-1322:142 reply

          Yeah, the primary instance (lemmy.ml) isn't the best.

          I use mander.xyz, it's science focused, but they also have a policy of only de-federating instances that host CSAM.

          • By Jordan-117 2026-03-1416:01

            Isn't the biggest instance Lemmy.world? I thought .ml was the oddball fringe dominated by tankies.

          • By ranger_danger 2026-03-143:061 reply

            Where is that policy located? I could not find it.

            Their /instances page also only shows a single blocked instance, whereas something like programming.dev shows lots of questionable instances blocked.

        • By huhkerrf 2026-03-148:171 reply

          > A majority of people there are just downright awful, and the mods are often power-hungry and overzealous in their actions.

          If you're telling me it's _worse_ than reddit in this regard, I can only imagine how terrible it is.

          • By gzread 2026-03-1412:35

            Lemmy is server software; that's like saying you don't use phpBB because it has bad mods

        • By AlexeyBelov 2026-03-157:31

          Hm. It's unclear if the second comment is in good faith. If it's not, it's OK for it to be deleted. In my opinion there is a huge problem with Reddit mods: they are perceived as draconian, yet they actually remove only 1% of bad comments. It's pretty much impossible to moderate successfully on a per-comment basis in any sizeable community. So, the mods only moderate submissions.

    • By ekjhgkejhgk 2026-03-1320:361 reply

      You chose to put your effort into building something that someone else owns.

      Next time try doing it in a way that you control it.

      • By MildlySerious 2026-03-1321:15

        You're right, and that is one of the lessons to be reminded of here.

        My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".

    • By snapetom 2026-03-1320:241 reply

      Argh. Also quite irritated. I had 50/50 transitioned over to it despite the lower traffic because it was a calm oasis. The thing about bots is believable, though, because you could already see it happening. Dead Internet has been real for a while, and I'd love to see Kevin and Alex do a followup on this.

      • By MildlySerious 2026-03-1321:19

        Yeah. Sadly the default communities were flooded with blog spam, and that's just the part I noticed. A couple days ago a bunch of smaller communities also got a noticeable bump in members. That didn't change anything in my own community, but others apparently weren't so lucky.

        I can see why the team got overwhelmed. I wouldn't want to have to deal with that.

    • By the_gipsy 2026-03-1323:151 reply

      Will we never learn to stop. Building. On. Platforms.

      • By solus_factor 2026-03-157:18

        Not. Until. Federated. Services. Don't. Suck.

        :)

HackerNews