Online age-verification tools for child safety are surveilling adults

2026-03-10 12:55 | www.cnbc.com

New age-verification laws and tools are designed for child safety on social media and the internet, but adults are in the crosshairs, say privacy experts.

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

"There's a big spectrum," said Joe Kaufmann, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. "The regulations are moving in many different directions at once," he said.

Communications platform Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so that facial analysis occurs on the user's device and submitted data is deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

"Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings," Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  
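
To make that spectrum concrete, the routing logic behind such a tiered check can be sketched in a few lines. This is a hypothetical illustration: the function name, thresholds, and confidence values are invented for the example, not any vendor's actual API.

```python
def check_access(estimated_age: float, confidence: float,
                 required_age: int = 18) -> str:
    """Hypothetical tiered age check: a quick estimate admits clearly-adult
    users, rejects clearly-underage ones, and escalates only borderline
    cases to a full document scan. Thresholds are illustrative."""
    if confidence >= 0.9 and estimated_age >= required_age + 5:
        return "allow"            # comfortably above the line: no ID needed
    if confidence >= 0.9 and estimated_age < required_age - 5:
        return "deny"             # comfortably below the line
    return "escalate_to_id_scan"  # borderline: require document + liveness check

assert check_access(35.0, 0.95) == "allow"
assert check_access(12.0, 0.95) == "deny"
assert check_access(19.0, 0.95) == "escalate_to_id_scan"
```

The design goal such a tier structure serves is exactly the trade-off vendors describe: most adults pass with a low-friction estimate, and detailed identity records are collected only for the ambiguous minority.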

Vendors say a challenge is balancing safety with how much friction users will tolerate. "We're in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible," said Rivka Gewirtz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 
 
Still, many users perceive mandatory identity checks as invasive. "Having another way to be forced to provide that information is intrusive to people," said Heidi Howard Tandy, a partner at Berger Singerman who specializes in intellectual property and internet law. Some users may attempt workarounds — including prepaid cards or alternative credentials — or turn to unauthorized distribution channels. "It's going to cause a piracy situation," she added. 

Where adult data goes 

In many implementations, verification vendors — not the websites themselves — process and retain the identity information, returning only a pass-fail signal to the platform. 
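
In code terms, that minimal-signal pattern might look like the following sketch. The payload fields, function name, and vendor callback shape are hypothetical, for illustration only; the point is that the platform's own record keeps the pass/fail result and discards everything else.

```python
from datetime import datetime, timezone

def handle_vendor_result(payload: dict) -> dict:
    """Reduce a hypothetical vendor callback to the minimal record the
    platform retains: a pass/fail signal, never the identity data itself."""
    return {
        "user_id": payload["user_id"],
        "age_check_passed": bool(payload["passed"]),
        "method": payload.get("method", "unknown"),  # e.g. "id_scan", "face_estimate"
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

# Anything else the vendor sends (estimated age, selfie reference, name)
# is simply never copied into the platform's own records.
record = handle_vendor_result(
    {"user_id": "u123", "passed": True, "method": "face_estimate",
     "estimated_age": 34, "selfie_ref": "blob://example"}
)
```

Under this split, the sensitive material lives with the vendor, which is why the retention and breach questions below center on the vendors rather than the platforms.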

Gewirtz Little said Socure does not sell verification data and that in lightweight age-estimation scenarios, where platforms use quick facial analysis or other signals rather than government documentation, the company may store little or no information. But in fuller identity-verification contexts, such as gaming and fraud prevention that require ID scans, certain adult verification records may be retained to document compliance. She said Socure can keep some adult verification data for up to three years while following applicable privacy and purging rules.  

Civil liberties advocates warn that concentrating large volumes of identity data among a small number of verification vendors can create attractive targets for hackers and government demands. Earlier this year, Discord disclosed a data breach that exposed ID images belonging to approximately 70,000 users through a compromised third-party service, highlighting the security risks of storing sensitive identity information.

In addition, they warn that expanding age-verification systems represent not only a usability challenge but a structural shift in how identity becomes tied to online behavior. Age verification risks tying users' "most sensitive and immutable data" — names, faces, birthdays, home addresses — to their online activity, according to Molly Buckley, a legislative analyst at the Electronic Frontier Foundation.  "Age verification strikes at the foundation of the free and open internet," she said.

Even when vendors promise to safeguard personal information, users ultimately rely on contractual terms they rarely read or fully understand. "There's language in their terms-of-use policies that says if the information is requested by law enforcement, they'll hand it over. They can't confirm that they will always forever be the only entity who has all of this information. Everyone needs to understand that their baseline information is not something under their control," Tandy said. 

As more platforms route age checks through third-party vendors, that concentration of identity data is also creating new legal exposure for the companies that rely on them. "A company is going to have some of that information passing through their own servers," Tandy said. "And you can't offload that kind of liability to a third party." 

Companies can distribute risk through contracts and insurance, she said, but they remain responsible for how identity systems interact with their infrastructure. "What you can do is have really good insurance and require really good insurance from the entities that you're contracting with," she said. 

Tandy also cautioned that retention promises can be more complex than they appear. "If they say they're holding it for three years, that's the minimum amount of time they're holding it for," she said. "I wouldn't feel comfortable trusting a company that says, 'We delete everything one day after three years.' That is not going to happen," she added. 

Legal battles are not over

Federal and state regulators argue that age-verification laws are primarily a response to documented harms to minors and insist the rules must operate under strict privacy and security safeguards. 

An FTC spokesperson told CNBC that companies must limit how collected information is used. While age-verification technologies can help parents protect children online, the agency said firms are still bound by existing consumer protection rules governing data minimization, retention, and security. The agency pointed to existing rules requiring firms to retain personal information only as long as reasonably necessary and to safeguard its confidentiality and integrity. 


According to Rae Pickett, a spokesperson from the Virginia attorney general's office — one of the states that has been actively enforcing age-verification laws — officials view strong verification and data-handling standards as inseparable parts of protecting young users and ensuring age-appropriate online experiences. She pointed to litigation against Meta and TikTok as evidence that inadequate safeguards can expose young users to harmful content and experiences. Under the Virginia law, companies collecting verification data cannot use it for purposes beyond age determination and must maintain security practices appropriate to the sensitivity of the information under the state's Consumer Data Protection Act. 

However, Virginia's effort suffered a legal setback when a federal court at least temporarily blocked enforcement of its law last week, siding with a First Amendment challenge brought by a trade group representing major social media companies. Virginia Attorney General Jay Jones said in a statement to CNBC after the court decision that the AG's office "will use every tool available to us to ensure that Virginia's children are protected from the proven harms of unlimited access to these addictive feeds. We look forward to being able to fully enforce the law to keep families safe."

Buckley says legislators do not need to sacrifice their constituents' First Amendment rights and privacy to make a safer internet and address many of the harms these proposals seek to mitigate. In fact, according to the EFF analyst, many lawmakers have already recognized such approaches, including data minimization, in existing age-verification proposals. But if legislators want to meaningfully improve online safety instead of building new systems of surveillance, censorship, and exclusion, she said, they should pass a strong, comprehensive federal privacy law that protects all internet users and empowers them to control how their data is collected.

'A permanent feature of online life'

Age-verification laws in some countries already require platforms to use methods like facial age estimation or ID checks, including in the UK and Australia, and soon in Brazil.

Major platforms based in the U.S. are staking out positions on how age verification should be implemented, though not without controversy, as the Discord example suggests, and coming after years of lawsuits alleging weak efforts to keep their sites safe for children. 

In explaining its delayed global rollout, Discord said that, other than in countries where national laws require certain verification methods, over 90% of users will never need to verify their age by any method other than its existing internal safety systems, which do not require user action. Still, its CTO noted in the recent blog post, "We know many of you believe the right answer is not to do this at all."

Discord said it is using the additional time this year to add more verification options, including credit cards, and to provide more transparency about its vendors and the technical details of how age verification will work. Once the system goes into effect, the company said, it will publish the percentage of users asked to verify their age in its existing transparency reports.

Snap, which operates Snapchat, said it supports alternative approaches that reduce the need for platforms to collect identity information directly. "We believe there are better, more privacy-conscious solutions such as mandating age verification at the primary point of entry — the device, operating system, or app store level," a Snap spokesperson told CNBC. 

Meta and Google did not respond to requests for comment. 

According to Tandy, as more states adopt age-verification mandates and companies race to comply, the infrastructure behind those systems is likely to become a permanent fixture of online life. Taken together, industry leaders say the rapid spread of age-verification laws may push platforms toward systems that verify age once and reuse that credential across services. 

"The way the trend is moving is definitely toward some kind of persistent verification of a user's age," Kaufmann said. In other words, a digital proof of age that travels with the user across platforms. 

Tandy said over time, once a system confirms someone's age, it may not need to ask again. She compared the model to ecosystems such as Disney accounts, where a user's age is established once and then recognized across its services rather than being rechecked every time they log in, even years later. 
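
A reusable "verify once" credential could, in principle, be as simple as a signed claim that an age check passed, carrying no name, birthdate, or document data. The sketch below uses an invented token format and toy key handling for illustration, not any real standard or production setup.

```python
import base64
import hashlib
import hmac
import json
import time

# Toy signing key for illustration; a real issuer would manage keys securely.
ISSUER_KEY = b"demo-key-not-for-production"

def issue_age_token(subject: str) -> str:
    """Hypothetical reusable 'over-18' credential: it attests only that an
    age check passed, with no identity details embedded."""
    claims = {"sub": subject, "over18": True, "iat": int(time.time())}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_age_token(token: str) -> bool:
    """Accept the token only if the signature matches and the claim holds."""
    body, sig = token.rsplit(".", 1)
    good = hmac.compare_digest(
        sig, hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest())
    return good and json.loads(base64.urlsafe_b64decode(body))["over18"]
```

Any service trusting the issuer's key could then accept the token without re-running its own check, which is both the portability Kaufmann describes and the concentration risk the privacy advocates warn about.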

For adults, that means an internet where identity verification is no longer occasional friction but a built-in layer of everyday access. 



Comments

  • By john_strinlai 2026-03-10 13:32, 6 replies

    >An FTC spokesperson told CNBC that companies must limit how collected information is used. [...] The agency pointed to existing rules requiring firms to retain personal information only as long as reasonably necessary and to safeguard its confidentiality and integrity.

    the very same rules that have allowed literally every single piece of my data to be leaked several separate times, and now i have free credit monitoring instead of privacy? and all of those companies still operate normally, as if nothing ever happened? very neat.

    >Discord said it is using the additional time this year to add more verification options, including credit cards, more transparency on vendors and technical detail of how age verification will work

    and why didn't we start with credit cards? instead of facial recognition with peter thiel? (this is a rhetorical question)

    • By tmaly 2026-03-10 14:26, 4 replies

      I have gotten several notices of medical data being leaked over the last two years. I thought HIPAA law had very harsh fines for this, but I guess they just look the other way.

      • By SoftTalker 2026-03-10 14:36, 1 reply

        Seems like if you just disclose and make assurances that "you take security seriously" then it's fine.

        • By abustamam 2026-03-11 0:21

          I once worked with both PCI and HIPAA at a consulting firm. Neither had a very high bar. PCI compliance was just a yes/no questionnaire that said something like "I do not store unencrypted CC numbers in my DB." No one validates the questionnaire. I just submitted it and got a shiny badge to put on my client's site.

          HIPAA compliance was just a half hour webinar.

          To be fair, I think HIPAA works in offline contexts (employers can't ask your doctor about your health) but as far as how easy it was for me to get access to customer CCs and medical information... Let's just say the barrier was basically nonexistent.

      • By jimz 2026-03-10 18:03

        HIPAA doesn't have a private cause of action so if a violation happens, it's a wealth transfer to the government, it doesn't mean anything to you or any individual.

        And most companies can simply price it in as cost of doing business at this point.

      • By john_strinlai 2026-03-10 14:37, 1 reply

        unfortunately, even if the fine seems harsh, if it is less than the profits generated the fine is an operating expense and not a deterrent.

      • By chimeracoder 2026-03-10 22:53

        > I thought HIPAA law had very harsh fines for this

        Not at all. The maximum fine a company has to pay is capped at $2 million per calendar year for a violation, and that's assuming it's even eligible for the highest tier of penalty.

    • By raxxorraxor 2026-03-11 12:08, 1 reply

      I prefer facial recognition. It's probably good that there isn't a popular face-spoofer app, since it would draw attention to how effective these verifications actually are.

      Credit card info is a bad solution. In many countries credit cards are still rare, and it would very likely exclude as many adults as children.

      The real answer is that while we should expect children to be online, it is the responsibility of the parents to moderate their usage. Add a standard header to every http request to support filter solutions. That is the only viable solution that is of a technical nature.

      The whole ZKP idea is futile, no model works if you also have connection information, which states tend to save too.

      Anonymous internet usage is pretty awesome. For adults and kids for that matter. And that doesn't change because there are also bad advertisers that practically can identify and track you. If you want to protect kids, have those be imprisoned first. Strong opinion, I know, but also the correct one.

      • By Forgeties79 2026-03-11 12:53

        I definitely agree that we need to pass strict laws around “advertising,“ which at this point is a misnomer for the assault we experience every day just by engaging with the Internet or simply walking into a store now. It’s data fracking, not “targeted advertising” anymore.

        The semi-anonymous Internet is so important especially for people in repressive households/societies.

    • By mghackerlady 2026-03-11 14:20

      There's even precedent for credit cards! I remember my dad needing to make a small transaction to get me a nintendo online account back in the 3DS and Wii U era

    • By PaulKeeble 2026-03-10 13:51, 2 replies

      Some of the accounts being blocked from certain access are themselves 18! You would think Reddit would consider that, but nope it doesn't.

      • By washadjeffmad 2026-03-10 16:23, 2 replies

        Age of account was sufficient for Google and third-party services for verification until recently. My gmail account is almost 22 years old, in continuous use. I have a credit card on file with Google Pay. Why would I need to submit a photo to engage with a private service, outside of volunteering to help train a surveillance apparatus?

        Is there any forum short of a senate subcommittee that the public can ask companies these questions? The silence is deafening.

        • By limagnolia 2026-03-11 0:24

          Contact your senator and ask them. Call your senator's offices and ask to meet with them or a representative in person; they can schedule an appointment, and most maintain offices in major population centers in their states.

        • By salawat 2026-03-10 16:51

          ...That would be a cost center, sir. If you don't like our product, you are free to not use us and make your own, while forgoing doing any business anywhere with either of the two major political parties.

          There is a reason why I don't accept private enterprise as something separate from Government. The nature of the incorporation legal fiction makes them proxies of Government power and influence, hence why I believe private enterprise should in some ways be as heavily restricted by Constitutional guardrails as the Government itself (allegedly) is.

      • By hunter2_ 2026-03-10 14:13, 4 replies

        Probably because the transfer of accounts (typically for reasons of better spamming, but in this case for adult access) is possible.

        However, that makes me wonder what mechanism might "unverify" an account holder's age upon transfer. I suppose it's simply a need to re-verify (take a new photo) upon every login, but then folks could transfer the session cookie to avoid needing the new owner to perform a login (unless a new device ID/fingerprint makes the old cookie useless).

        • By Jeremy1026 2026-03-10 14:26, 1 reply

          Since you don't have to verify every time you use the account, transfer of verified accounts will still be a "problem" though. It's just a CYA to be able to say "we verified this account owner."

          • By DrJokepu 2026-03-10 14:36, 2 replies

            But… You could transfer the account after age verification too. The only way to be sure is to ask for ID every time people use the website / application, then children will be truly finally safe from the horrors of the Internet.

            • By jvuygbbkuurx 2026-03-10 14:53

              The website will only function when the webcam is turned on with your passport next to your face. The session is immediately revoked on failure.

            • By Jeremy1026 2026-03-10 17:07, 1 reply

              > You could transfer the account after age verification too.

              Isn't that what I said?

              • By hunter2_ 2026-03-10 18:26, 1 reply

                Yes, but you also said it's a CYA, when indeed it's not sufficient CYA if only a former account owner, but not "this account owner," had been verified.

                • By Jeremy1026 2026-03-11 0:35

                  It's definitely CYA. Because not transferring accounts is almost definitely in the TOS. So "we didn't know it was someone else using the account, that's against our TOS" will be the response.

        • By Razengan 2026-03-10 15:10, 2 replies

          > … I suppose it's simply a need to re-verify (take a new photo) upon every login …

          Clearly the only foolproof solution is a 3rd-party camera pointed at your face at all times whenever you use a computer.

          • By subscribed 2026-03-10 16:31

            And a *plug to measure the heart rate at all times in the convenient and unobtrusive way, to ensure the face is of the mammal, and not the mannequin.

          • By throwway120385 2026-03-10 17:21

            A sort of "telescreen" if you will.

        • By gowld 2026-03-10 14:59

          SOTA is age inference: The platform studies your behavior to estimate your age.

    • By ErigmolCt 2026-03-10 18:38, 3 replies

      On the credit card point though, cards don't work perfectly as age verification either. Plenty of minors can access prepaid cards or family cards.

      • By john_strinlai 2026-03-10 19:20

        >cards don't work perfectly as age verification either.

        there are 0 "perfect" age verification systems.

        plenty of minors can have their brother/sister/parents supply their id, or do the verification video. the on-device verification discord rolled out was, within hours, broken. i remember news reports of kids submitting photos of their dogs and being verified as of-age.

        credit card solves most of the problem with much less downside than submitting my face (i am already okay putting my card info into most sites)

      • By eikenberry 2026-03-10 18:53, 1 reply

        Prepaid cards can't masquerade as credit cards as there are easy ways to differentiate them (the numbers have meaning) and a minor getting access to the family credit card is the parents giving them permission. I'm not convinced credit card for age verification is a good solution for all cases but for cases where you've already used a credit card to access the service it would be perfect.

        • By array_key_first 2026-03-10 21:58, 1 reply

          I agree, we shouldn't be optimizing for the case where a child steals a credit card. That's just not in the threat model. I mean, they could steal IDs too, and children can already steal credit cards and buy, like, vbucks or whatever. Which probably causes more tangible real-world harm than seeing a pair of boobies or whatever we're trying to protect against.

          However, I still think credit cards are overkill. They reveal way too much information, including addresses. I wouldn't trust most companies with my credit card either, at least not online. In person it's different, the scanners are secure especially if you use tap to pay. But online, you just have a pinky promise that your info isn't being stored.

          Frankly, I'm getting sick and tired of being put in the situation where I have no choice but to just blindly trust people to do the right thing. Obviously, it's not working, and we need real solutions.

          • By eikenberry 2026-03-10 23:53

            I agree that CCs are overkill for every case except those where you have already given them a CC. There is no risk of revealing too much information for age verification when you are already giving them all that information.

    • By clumsysmurf 2026-03-10 13:42, 2 replies

      > now i have free credit monitoring

      Might not even matter ...

      "TransUnion and Experian, two of the three major credit bureaus, have started dismissing a larger share of consumer complaints without help since the Trump administration began dismantling the CFPB."

      https://www.propublica.org/article/credit-report-mistakes-cf...

      • By SoftTalker 2026-03-10 14:38

        It's not like they were really doing a very good job anyway. My data has been leaking for two decades now.

      • By ArchieScrivener 2026-03-10 13:52, 3 replies

        How much money did the CFPB actually give back to wronged consumers?

        • By m4ck_ 2026-03-10 13:58, 2 replies

          Before Trump's attempts to eliminate the department, almost $20 billion.

          https://www.consumerfinance.gov/enforcement/enforcement-by-t...

          • By gibspaulding 2026-03-10 14:29

            And one would hope that the purpose of the CFPB would be to dissuade lenders from wronging consumers in the first place, meaning the net benefit to consumers was likely much higher.

          • By ArchieScrivener 2026-03-10 15:16

            Thanks for the numbers!

        • By coffeefirst 2026-03-10 15:02

          Their mere presence was effective. I know people who had trouble with banks refusing to fix their own screwups and demanding evidence that couldn’t exist.

          They changed their tune the second there was an open case on the matter.

        • By throwway120385 2026-03-10 17:23

          Also of note is they were responsible for medical debt cases, which are particularly difficult for people to resolve because of the shared responsibility between the patient and the insurer, which allows the insurer to deflect responsibility until the bill ends up in collections.

  • By abalone 2026-03-10 22:55, 3 replies

    Here's a good interview with the director of the Free Speech Coalition on the consequences of these "protect the kids" moral panic laws, which include widespread surveillance, banning VPNs and raising the cost of running an independent website to unsustainable levels.

    Remember it's not just about pornography. It's anything deemed "harmful to minors" including platforms like Reddit, Bluesky or stuff conservative lawmakers think is harmful like discussion forums for LGBTQ people, sexual health information or dissident political opinions.

    They also examine how these laws, which are often backed by the religious Right, are getting support more broadly from people who see it as a way to rein in Big Tech who are creating "social media addiction" and so forth.

    And even within our industry there is a lot of money to be made by creating and selling compliance products, so even on forums like this you will find people advocating for them.

    "Another Internet Law That Punishes Everyone" - Power User Podcast 1/9/26: https://www.youtube.com/watch?v=8bnp3nmpK9g

    • By stephen_g 2026-03-11 2:59, 1 reply

      This is so much bigger than the “religious right” though. The UK and Australia have far less of that, and parties from both sides of politics here and in the UK seem to be competing to out-do each other with surveillance, censorship and control of adults online under the guise of ‘child safety’.

      And all being pushed so, so much harder in just the last couple of years, all at the same time. I don’t know what the source is…

      • By SturgeonsLaw 2026-03-11 4:01

        Governments around the world have sought to control the internet and strip away anonymity for years, they've now found their foot-in-the-door moment so they're all going for it in their own way.

        Some of it is governments watching and copying each other, some of it is dialogue happening at international events, being driven by groups like the Global Coalition for Digital Safety:

        https://initiatives.weforum.org/global-coalition-for-digital...

        It's probably not being driven by one single group; there are a number of private and government orgs whose interests in controlling information converge.

    • By alansaber 2026-03-11 0:10

      It's thinly veiled authoritarianism, just the state pushing for more control

    • By b00ty4breakfast 2026-03-11 3:40

      the religious right may be one faction in this push for digital surveillance, but I don't think they're the ones behind the EU push for chat control and device lockdown, or the insane 3D printer proposal in California.

      This is a trans-ideological movement.

  • By bilekas 2026-03-10 13:34, 4 replies

    The fact that these tools are 'active'-centric, i.e. you must perform an action to validate you're NOT a child, means they will never protect children. A predator simply needs to not verify anything and appear benign, ironically ending up more anonymous than law-abiding people.

    I'm not saying the inverse is the answer either, just that if anyone without an agenda of surveillance looked at this for a second, the penny would have dropped. So I can only assume that this was the purpose the whole time.

    • By kristopolous 2026-03-10 13:48, 5 replies

      Sob stories about children are always weaponized for oppression.

      It was used to bash interracial marriage, gay rights, suppress dissent, attack the first amendment, and now this.

      Whenever you hear some dramatic story involving kids about how you have to live a little less free, know the tactic.

      • By banannaise 2026-03-10 21:28

        The best way to protect children is to educate them to protect themselves, but that argument generally falls on deaf ears, doubly so when there's an opportunity to use "but the children" as a political cudgel.

      • By tt24 2026-03-10 15:13, 4 replies

        Don’t forget the second amendment.

        • By tremon 2026-03-10 21:31, 1 reply

          I don't understand your point. Can you quote the part of the second amendment that specifically addresses children, or schools for that matter?

          • By tt24 2026-03-11 0:39, 1 reply

            > Sob stories about children are always weaponized for oppression.

            This is common for opponents of the second amendment as well. "Think of the children!" etc etc.

            • By xethos 2026-03-11 1:40

              While I think you and I would agree if I argued it was more about culture than firearms per-capita, it's pretty hard to say children aren't suffering some pretty real harms from said firearms (to say nothing of adult suicide statistics when firearms are kept in the house)

              Reducing the number of guns doesn't solve the root issue (which I think we'd also agree on), but it should minimize the harms while being dramatically easier than changing the American ethos. Hell, America could likely get 80% of the results (no school shootings) with 20% of the effort (additional restrictions on firearms, more akin to Canada)

              I further think the second amendment is causing Americans more harm than it's worth, though that's a separate discussion; some examples include suicide statistics, accidental discharge, a lack of protection even when carried legally (such as in Alex Pretti's murder) and the fact that, when firearms could be anywhere, police must treat every interaction as potentially fatal - with all the force that requires

        • By seanw444 2026-03-10 16:32

          You're on HN. Expect downvotes.

        • By thesquandered 2026-03-10 17:51

          [dead]

      • By cultofmetatron 2026-03-10 13:56, 2 replies

        what's incredible to me is how many useful idiots out there STILL fall for it.

        ___ said Hamas beheaded 40 babies and that turned out to be a complete fabrication. That fake info was used in part to justify killing thousands of kids in ____

        meanwhile the recent strike on Iran resulted in 80 little girls getting killed (with plenty of evidence) and it's swept under the rug while we get blasted about the 7 soldiers that died.

        • By dennis_jeeves2 2026-03-10 23:12, 1 reply

          >whats incredible to me are how many useful idiots out there STILL fall for it.

          That's about 99% of the population.

          • By Y_Y 2026-03-11 14:11, 1 reply

            But none of us here, right?

            • By dennis_jeeves2 2026-03-11 14:20

              We are special of course. Edit: Actually just me, I'm special

        • By hobs 2026-03-10 14:00, 2 replies

          More useful idiots are born every day; most of them are never educated and don't see their past blunders as anything wrong, they are completely blind to the real implications of their actions.

          • By a456463 2026-03-10 18:14

            I know some idiots that read newspapers and technical papers and yet would rather have a company like Discord providing safety for their newborn daughter, but would vote for small-govt Republicans (or Democrats, I don't care, it's just a label that is applicable now; they are mostly all the same) and do nothing about calling out the actual child predators and taking proper action against them. It is bonkers

          • By cultofmetatron 2026-03-10 15:05

            [flagged]

    • By ej31 2026-03-10 13:36, 3 replies

      Age verification doesn't stop determined bad actors, it just builds a database of everyone who cooperated.

      • By BLKNSLVR 2026-03-10 13:39

        Know thy sheep, the better to keep them penned.

      • By bilekas 2026-03-10 13:41

        Exactly my point. And all the industry experts whom they must have consulted to write the laws are coincidentally invested in personal data harvesting. Who could have foreseen this happening.

      • By SV_BubbleTime 2026-03-10 15:49

        Now do Covid… tracking the non-compliant is surely the smaller task.

    • By hedora 2026-03-10 17:42

      A much better approach would be to hold platforms responsible if they allow a stranger that does not have explicit parental consent to communicate with or get information about a minor.

      This would block the most common classes of abuse on platforms like Roblox, Fortnite, Lego Fortnite (kids), YouTube Kids, Minecraft, and "educational" social networks / games.

      Note that it doesn't require any centralized surveillance at all. Parents just need to control the kids' ability to create random accounts, by (for example) turning on parental controls as they already exist on most tablets/phones, and blocking app installation / email applications (or other 2FA vectors).

      When the parent allows an account to be created, they just tick the "kid mode" box. This even works with shared devices that don't support multiple accounts (so, iPads and iPhones).

    • By nomel 2026-03-10 22:54, 1 reply

      > A predator simply needs not to verify anything and appear benign and ironically more anonymous than law abiding people.

      Roblox takes the sane approach with this: communication features are completely disabled for "unverified" users.

      • By voxic11 2026-03-11 16:45

        Minecraft is going this direction as well, at least for UK based users.

HackerNews