Privacy-preserving age and identity verification via anonymous credentials

2026-03-03, blog.cryptographyengineering.com


This post has been on my back burner for well over a year. This has bothered me, because every month that goes by I become more convinced that anonymous authentication is the most important topic we could be talking about as cryptographers. This is because I’m very worried that we’re headed into a bit of a privacy dystopia, driven largely by bad legislation and the proliferation of AI.

But this is too much for a beginning. Let’s start from the basics.

One of the most important problems in computer security is user authentication. Often when you visit a website, log into a server, or access a resource, you (and generally, your computer) need to convince the provider that you’re authorized to access the resource. This authorization process can take many forms. Some sites require explicit user logins, which users complete using traditional username-and-password credentials, or (increasingly) advanced alternatives like MFA and passkeys. Some sites don’t require explicit user credentials, or allow you to register a pseudonymous account; however, even these sites often ask user agents to prove something. Typically this is some kind of basic “anti-bot” check, which can be done with a combination of long-lived cookies, CAPTCHAs, or whatever the heck Cloudflare does:

I’m pretty sure they’re just mining Monero.

The Internet I grew up with was always pretty casual about authentication: as long as you were willing to take some basic steps to prevent abuse (make an account with a pseudonym, or just refrain from spamming), many sites seemed happy to allow somewhat-anonymous usage. Over the past couple of years this pattern has changed. In part this is because sites like to collect data, and knowing your identity makes you more lucrative as an advertising target. However, a more recent driver of this change is the push for legal age verification. Newly minted laws in 25 U.S. states and at least a dozen countries demand that site operators verify the age of their users before displaying “inappropriate” content. Most of these laws were designed to tackle pornography, but (as many civil liberties folks warned) adult and adult-adjacent content shows up on almost any user-driven site. This means that age-verification checks are now popping up on social media websites like Facebook, BlueSky, X and Discord. Even encyclopedias aren’t safe: for example, Wikipedia is slowly losing its fight against the U.K.’s Online Safety Bill.

Whatever you think about age verification as a requirement, it’s apparent that routine ID checks will create a huge new privacy concern across the Internet. Increasingly, users of most sites will need to identify themselves, not by pseudonym but by actual government ID, just to use any basic site that might have user-generated content. If this is done poorly, it produces a transcript of everything you do, all neatly tied to a verifiable real-world ID. While a few nations’ age-verification laws allow privacy-conscious sites to voluntarily discard the information once they’ve processed it, this has been far from uniform. Even where data minimization is allowed, advertising-supported sites will have an enormous financial incentive to retain real-world identity information, since the value of precise human identity is huge, and will only increase as non-monetizable AI bots eat a larger share of these platforms.

The problem for today is: how do we live in a world with routine age-verification and human identification, without completely abandoning our privacy?

Back in the 1980s, a cryptographer named David Chaum caught a glimpse of our soon-to-be future, and he didn’t much like it. Long before the web or smartphones existed, Chaum recognized that users would need to routinely present (electronic) credentials to live their daily lives. He also saw that this would have enormous negative privacy implications. To address life in that world, he proposed a new idea: the anonymous credential.

The man could pick a paper title.

Let’s imagine a world where Alice needs to access some website or “Resource”. In a standard non-anonymous authentication flow, Alice needs to be granted authorization (a “credential”, such as a cookie) to do this. This grant can come either from the Resource itself (e.g., the website), or in other cases, from a third party (for example, Google’s SSO service.) For the moment we should assume that the preconditions for obtaining a credential are not private: that is, Alice will presumably need to reveal something about her identity to the person who issues the credential. For example, she might use her credit card to pay for a subscription (e.g., for a news website), or she might hand over her driver’s license to prove that she’s an adult.

From a privacy perspective, the problem is that Alice will need to present her credential every time she wants to access that Resource. For example, each time she visits Wikipedia, she’ll need to hand over a credential that is tied to her real-world identity. A curious website (or an advertising network) can use this to precisely link her browsing history on the site to an actual human in the world. To a certain extent, this is the world we already live in today: advertising companies probably know a lot about who we are and what we’re browsing. What’s about to change in our future is that these online identities will increasingly be bound to our real-world government identity, so no more “Anonymous-User-38.”

Chaum’s idea was to break the linkage between the issuance and usage of a credential. This means that when Alice shows her credential to the website, all the site learns is that Alice has been given a valid credential. The site should not learn which issuance flow produced her credential, which means it should not learn her exact ID; and this should hold even if the website colludes with (or literally is) the issuer of the credentials. The result is that, to the website at least, Alice’s browsing can be unlinked from her identity. In other words, she can “hide” within the anonymity set of all users who obtained credentials.

Illustration of a simple anonymous credential system. The “issuance” procedure reveals your identity to the issuer. A later “show” process lets you use the credential, without revealing who you are. The goal is that the resource and issuer together can’t link the credential shown to the specific user who it was issued to. (Icons: Larea, Desin.)

One analogy I’ve seen for simple anonymous credentials is to think of them like a digital version of a “wristband”, the kind you might receive at the door of a club. In that situation, you show your ID to the person at the door, who then gives you an unlabeled wristband that indicates “this person is old enough to buy alcohol” or something along these lines. Although the doorperson sees your full ID, the bartender knows you only as the owner of a wristband. In principle your bar order (and your love of spam-based drinks) is untied somewhat from your name and address.

You can buy a roll of these for $7, just saying.

Before we get into the weeds of building anonymous credentials, it’s worth considering the obvious solution. What we want is simple: every user’s credential should be indistinguishable when “shown” to the resource. The obvious question is: why doesn’t the issuer give a copy of the exact same credential to each user? In principle this solves all of the privacy problems, since every user’s “show” will literally be identical. (In fact, this is more or less the digital analog of the physical wristband approach.)

The problem here is that digital items are fundamentally different from physical ones. Real-world items like physical credentials (even cheap wristbands) are at least somewhat difficult to copy. A digital credential, on the other hand, can be duplicated effortlessly. Imagine a hacker breaks into your computer and steals a single credential: they can now make an unlimited number of copies and use them to power a basically infinite army of bot accounts, or sell them to minors, all of whom will appear to have valid credentials.

It’s worth pointing out that this exact same thing can happen with non-anonymous credentials (like usernames/passwords or session cookies) as well. However, there’s a difference. In the non-anonymous setting, credential cloning and other similar abuse can be detected, at least in principle. Websites routinely monitor for patterns that indicate the use of stolen credentials: for example, many will flag when they see a single “user” showing up too frequently, or from different and unlikely parts of the world, a procedure that’s sometimes called continuous authentication. Unfortunately, the anonymity properties of anonymous credentials render such checks mostly useless, since every credential “show” is totally anonymous, and we have no idea which user is actually presenting.

Many sites keep track of where individual account logins come from, and even let the owner check if they’ve seen logins from weird places. This won’t work easily in anonymous-credential land.

To address these threats, any real-world useful anonymous credential system has to have some mechanism to limit credential duplication. The most basic approach is to provide users with credentials that are limited in some fashion. There are a few different approaches to this:

  1. Single-use (or limited-usage) credentials. The most common approach is to issue credentials that allow the user to log in (“show” the credential) exactly one time. If a user wants to access the website fifty times, then she needs to obtain fifty separate credentials from the Issuer. A hacker can still steal these credentials, but they’ll be limited to only a bounded number of website accesses. This approach is used by credentials like PrivacyPass, which is used by sites like Cloudflare.
  2. Revocable credentials. Another approach is to build credentials that can be revoked in the event of bad behavior. This requires a procedure such that when a particular anonymous user does something bad (posts spam, runs a DOS attack against a website) you can revoke that specific user’s credential — blocking future usage of it, without otherwise learning who they are.
  3. Hardware-tied credentials. Some real-world proposals like Google’s approach instead “bind” credentials to a piece of hardware, such as the trusted platform module in your phone. This makes credential theft harder — a hacker will need to “crack” the hardware to clone the credentials. But a successful theft still has big consequences that can undermine the security of the whole system.

The anonymous credential literature is filled with variants of the above approaches, sometimes combinations of the three. In every case, the goal is to put some barriers in the way of credential cloning.

With these warnings in mind, we’re now ready to talk about how anonymous credentials are actually constructed. We’re going to discuss two different paradigms, which sometimes mix together to produce more interesting combinations.

Chaum’s original constructions produce single-use credentials, based on a primitive known as a blind signature scheme. Blind signatures are a variant of digital signatures, augmented with an interactive “blind signing” protocol. Here a User has a message they want to have signed, and the Server holds the signing half of a public/secret keypair. The two parties run an interactive protocol, at the end of which the user obtains a signature on their message. Most critically, the server learns nothing about the message that it signed.

The “magic” part isn’t really magic, but we don’t need to get into the details right now.

We won’t worry too much about how blind signatures are actually constructed, at least not for this post. Let’s just imagine we’ve been handed a working blind signature scheme. Using this as an ingredient, it’s quite simple to build a one-time use anonymous credential, as follows:

  1. First, the Issuer generates a signing keypair (PK, SK) and gives out the key PK to everyone who might wish to verify its signatures.
  2. Whenever the User wishes to obtain a credential, she randomly selects a new serial number SN. This value should be long enough that it’s highly unlikely to repeat (across all other users.)
  3. The User and Issuer now run the blind signing protocol described above — here the User sets its message to SN and the Issuer employs its signing key SK. At the end of this process the user will hold a valid signature by the issuer on the message SN. The pair (SN, signature) now forms the credential.

To “show” the credential to some Resource, the user simply needs to hand over the pair (SN, signature). Assuming the Resource knows the public key (PK) of the issuer, it can simply verify that (1) the signature is valid on SN, and (2) nobody has ever used that value SN in some previous credential “show”.

This serial number check can be done using a simple local database at the Resource (website). Things get a bit more complicated if there are many Resources (say different websites), and you want to prevent credential re-use across all of them. The typical solution outsources serial number checks to some centralized service (or bulletin board) so that a user can’t use the same credential across many different sites.
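The Resource-side bookkeeping can be sketched in a few lines of Python. This is only a toy under loud assumptions: the `Issuer.sign`/`verify` pair below uses an HMAC as a stand-in for a real blind signature (so there is no blindness at all here, and verification requires the Issuer's secret key, much like the MAC-based PrivacyPass variant discussed in the footnotes). The point is the serial-number double-spend check, not the signing.

```python
import hmac
import hashlib
import secrets

class Issuer:
    """Stand-in for a blind-signature issuer. A real deployment would run a
    blind signing protocol (e.g., blind RSA) so the Issuer never sees SN."""
    def __init__(self):
        self.key = secrets.token_bytes(32)

    def sign(self, sn: bytes) -> bytes:
        # HMAC plays the role of the signature in this sketch only.
        return hmac.new(self.key, sn, hashlib.sha256).digest()

    def verify(self, sn: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(sn), tag)

class Resource:
    def __init__(self, issuer: Issuer):
        self.issuer = issuer
        self.seen = set()          # serial numbers from previous "shows"

    def show(self, sn: bytes, tag: bytes) -> bool:
        if sn in self.seen:        # check (2): reject any repeated SN
            return False
        if not self.issuer.verify(sn, tag):   # check (1): valid signature
            return False
        self.seen.add(sn)
        return True

issuer = Issuer()
site = Resource(issuer)

sn = secrets.token_bytes(16)       # User picks a long random serial number
cred = (sn, issuer.sign(sn))       # ...and obtains a signature on it

assert site.show(*cred) is True    # first show succeeds
assert site.show(*cred) is False   # replaying the same credential fails
```

Note that the 16-byte random SN makes accidental collisions across honest users vanishingly unlikely, which is exactly why the repeated-SN check only catches replays.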

Here’s the whole protocol in helpful pictograms:

Simple one-time use credentials from a blind signature scheme. Note that this provides privacy because the Issuer never learns SN, and can’t link a Credential Show to the one it issued to a specific user.

Chaumian credentials are about forty years old and still work well, provided your Issuer is willing to bear the cost of running the blind signature protocol for each credential it issues — and that the Resource doesn’t mind verifying a signature for each “show”. Protocols like PrivacyPass implement this using blind RSA signatures, so presumably the cost of these operations isn’t prohibitive for real-world applications. However, PrivacyPass also includes some speed optimizations for cases where the Issuer and Resource are the same entity, and these make a big difference.1

Single-use credentials work great, but have some drawbacks. The big ones are (1) efficiency, and (2) lack of expressiveness.

The efficiency problem becomes obvious when you consider a user who accesses a website many times. For example, imagine using an anonymous credential to replace Google’s session cookies. For most users, this would require obtaining and delivering thousands of single-use credentials every single day. You might mitigate this problem by using credentials only for the first registration to a website, after which you can trade your credential for a pseudonym (such as a random username or a normal session cookie) for later accesses. But the downside of this is that all of your subsequent site accesses would be linkable, which is a bit of a privacy tradeoff.

The expressiveness objection is a bit more complicated. Let’s talk about that next.

Simple Chaumian credentials have a more fundamental limitation: they don’t carry much information.

Consider our bartender in a hypothetical wristband-issuing club. When I show up at the door, I provide my ID and get a wristband that shows I’m over 21. The wristband “credential” carries “one bit” of information: namely, the fact that I’m older than some arbitrary age constant.

Sometimes we want to do prove more interesting things with a digital credential. For example, imagine that I want to join a cryptocurrency exchange that needs more complicated assurances about my identity. For example: it might require that I’m a US resident, but not a resident of New York State (which has its own regulations.) The site might also demand that I’m over the age of 25. (I am literally making these requirements up as I go.) I could satisfy the website on all these fronts using the digitally-signed driver’s license issued by my state’s DMV. This is a real thing! It consists of a signed and structured document full of all sorts of useful information: my home address, state of issue, eye color, birthplace, height, weight, hair color and gender. In this world, the non-anonymous solution is easy: I just hand over my digitally-signed license and the website verifies the properties it needs in the various fields.

This is a real digital driver’s license that I installed on my iPhone. I can’t really do anything with it, but you have to wonder why Apple and Google are making this available if not to support age verification laws.

The downside to handing over my driver’s license is that doing so means I also leak much more information than the site requires. For example, this creepy website will also learn my home address, which it might use to send me junk mail! I’d really prefer it didn’t. A much better solution would allow me to assure the website only about the specific facts it cares about. I could remain anonymous otherwise.

For example, all I really want to prove can be summarized in the following four bullet points:

  1. BIRTHDATE <= (TODAY – 25 years)
  2. ISSUE_STATE != NY
  3. ISSUE_COUNTRY = US
  4. SIGNATURE = (some valid signature that verifies under a known state DMV public key).
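The first three clauses are ordinary predicates over license fields, and it may help to see them in the clear before we hide them behind a proof. The sketch below evaluates them directly in Python; the field names are illustrative (not a real mDL schema), and the signature clause is omitted since it concerns the DMV’s key rather than the record’s contents. A ZK credential would prove these same predicates over signed fields without revealing the fields themselves.

```python
from datetime import date

# Hypothetical license record; field names are made up for illustration.
record = {
    "birthdate": date(1990, 5, 1),
    "issue_state": "MD",
    "issue_country": "US",
}

def constraints_hold(lic: dict, today: date = date(2026, 3, 3)) -> bool:
    """Evaluate clauses 1-3 in the clear. In the real system these
    comparisons become clauses of a zero-knowledge proof."""
    over_25 = lic["birthdate"] <= today.replace(year=today.year - 25)
    return (over_25
            and lic["issue_state"] != "NY"
            and lic["issue_country"] == "US")

assert constraints_hold(record) is True
```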

I could outsource these checks to some Issuer, and have them issue me a single-use credential that claims to verify all these facts. But this is annoying, especially if I already have the signed license.

A different way to accomplish this is to use zero-knowledge (ZK) proofs. A ZK proof allows me to prove that I know some secret value that satisfies various constraints. For example, I could use a ZK proof to “prove” to some Resource that I have a signed, structured driver’s license credential. I could further use the proof to demonstrate that the value in each field referenced above satisfies the constraints listed above. The neat thing about using a ZK proof to make this claim is that my “proof” should be entirely convincing to the website, yet will reveal nothing at all beyond the fact that these claims are true.

A variant of the ZK proof, called the non-interactive zero-knowledge proof (NIZK), lets me do this in a single message from User to Resource. Using this tool, I can build a credential system as follows:

A rough picture of a zero-knowledge-based credential system. Here the driver’s license is a structured document that the Issuer signs and sends over. The “Show” involves creating a non-interactive ZK proof (NIZK) that the User can send to the Resource. Generally this will be structured so that it’s bound to the specific Resource and sometimes a nonce, to prevent it from being replayed. (License icon: Joshua Goupil.)

(These techniques are very powerful. Not only can I change the constraints I’m proving on demand, but I can also perform proofs that reference multiple different credentials at the same time. For example, I might prove that I have a driver’s license, and also that my digitally-signed credit report indicates that I have a credit rating over 700.)

The ZK-proof approach also addresses the efficiency limitation of the basic single-use credential: here the same credential can be re-used to power many “show” protocols, without making each one linkable to the others. This property stems from the fact that ZK proofs are normally randomized, and each “proof” should be unlinkable to others produced by the same user.2

Of course, there are downsides to this re-usability as well, as we’ll discuss in the next section.

We’ve argued that the zero-knowledge paradigm has two advantages over simple Chaumian credentials. First, it’s potentially much more expressive. Second, it allows a User to re-use a single credential many times without needing to constantly retrieve new single-use credentials from the Issuer. While that’s very convenient, it raises a concern we already discussed: what happens if a hacker steals one of these re-usable credentials?

This is catastrophic for anonymous credential systems, since a single stolen, re-usable credential can be cloned endlessly, undermining the guarantees of the entire system.

As mentioned earlier, one approach to solving this problem is to simply make credential theft very, very hard. This is the optimistic approach proposed in Google’s new anonymous credential scheme. Here, credentials will be tied to a key stored within the “secure element” in your phone, which theoretically makes them harder to steal. The problem here is that there are hundreds of millions of phones, and the Secure Element technology in them runs the gamut from “very good” (for high-end, flagship phones) to “modestly garbage” (for the cheap burner Android phone you can buy at Target.) A failure in any of those phones potentially compromises the whole system.

An alternative approach is to limit the power of any given credential. Once you have ZK proofs in place, there are many ways to do this.

One clever approach is to place an upper bound on the number of times that a ZK credential can be used. For example, we might wish to ensure that a credential can be “shown” at most N times before it expires. This is analogous to extracting many different single-use credentials, without the hassle of having to make the Issuer and User do quite as much work.

We can modify our ZK credential to support a limit of N shows as follows. First, let’s have the User select a random key K for a pseudorandom function (PRF), which takes a key and an arbitrary input and produces a random-looking output. We’ll embed this key K into the signed credential. (It’s important that the Issuer does not learn K, so this often requires that the credential be signed using a blind, or partially-blind, signing protocol.3) We’ll now use this key and PRF to generate unique serial numbers each time we “show” the credential.

Concretely, the ith time we “Show” the credential, we’ll generate the following “serial number”:

SN = PRF(K, i)

Once the User has computed SN for a particular show, it will send this serial number to the Resource along with the zero-knowledge proof. The ZK proof will, in turn, be modified to include two additional clauses:

  1. A proof that SN = PRF(K, i), for some value i and the key K that’s stored within the signed credential.
  2. A proof that 0 <= i < N.

Notice that these “serial numbers” are very similar to the ones we embedded in the single-use credentials above. Each Resource (website) can keep a list of each SN value that it sees, and sites can reject any “show” that repeats a serial number. As long as the User never repeats a counter (and the PRF output is long enough), serial numbers should be unlikely to repeat. However, repetition becomes inevitable if the User ever “cheats” and tries to show the same credential N+1 times.

Brief sketch of an “N-time use” digital credential, based on zero-knowledge proofs.
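The bookkeeping above can be sketched in Python under some loud assumptions: HMAC-SHA256 stands in for the PRF (real schemes use an algebraic PRF that is efficient to prove in zero knowledge), and the Resource checks the counter bound directly, whereas in the real protocol that check is the ZK range-proof clause and the Resource never sees i or K at all.

```python
import hmac
import hashlib
import secrets

N = 3                               # show limit baked into the credential
K = secrets.token_bytes(32)         # User's secret PRF key, embedded in the credential

def prf(key: bytes, i: int) -> bytes:
    # HMAC-SHA256 as a stand-in PRF for this sketch.
    return hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()

seen = set()                        # Resource's list of spent serial numbers

def show(i: int) -> bool:
    """One credential 'show'. In the real protocol the User sends SN plus a
    ZK proof that SN = PRF(K, i) for some 0 <= i < N; here we evaluate both
    checks in the clear for illustration."""
    if not (0 <= i < N):
        return False                # stands in for ZK clause: 0 <= i < N
    sn = prf(K, i)
    if sn in seen:
        return False                # repeated counter => repeated SN => reject
    seen.add(sn)
    return True

assert all(show(i) for i in range(N))   # N distinct shows succeed
assert show(0) is False                 # re-using a counter is caught
assert show(N) is False                 # an (N+1)-th counter is out of range
```

The key observation is that the Resource needs no per-user state: it just accumulates serial numbers, and cheating users collide with their own earlier shows.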

This approach can be constructed in many variants. For example, with some simple tweaks, we can build credentials that only permit the User to employ the credential a limited number of times in any given time period: for example, at most 100 times per day.4 This requires us to simply change the inputs to the PRF function, so that they include a time period (for example, the date) as well as a counter. These techniques are described in a great paper whose title I’ve stolen for this section.

The power of the ZK approach gives us many other tools to limit the power of credentials. For example, it’s relatively easy to add expiration dates to credentials, which will implicitly limit their useful lifespan — and hopefully reduce the probability that one gets stolen. To do this, we simply add a new field (e.g., Expiration_Time) that specifies a timestamp at which the credential should expire.

When a user “shows” the credential, they can first check their clock for the current time T, and they can add the following clause to their ZK proof:

T < Expiration_Time

Revoking credentials is a bit more complicated.

One of the most important countermeasures against credential abuse is the ability to ban users who behave badly. This sort of revocation happens all the time on real sites: for example, when a user posts spam on a website, or abuses the site’s terms of service. Yet implementing revocation with anonymous credentials seems inherently difficult. In a non-anonymous credential system we simply identify the user and add them to a banlist. But anonymous credential users are anonymous! How do you ban a user who doesn’t have to identify themselves?

That doesn’t mean that revocation is impossible. In fact, there are several clever tricks for banning credentials in the zero-knowledge credential setting.

Imagine we’re using a basic signed credential like the one we’ve previously discussed. As in the constructions above, we’re going to ensure that the User picks a secret key K to embed within the signed credential.5 As before, the key K will power a pseudorandom function (PRF) that can make pseudorandom “serial numbers” based on some input.

For the moment, let’s assume that the site’s “banlist” is empty. When a user goes to authenticate itself, the User and website interact as follows:

  1. First, the website will generate a unique/random “basename” bsn that it sends to the User. This is different for every credential show, meaning that no two interactions should ever repeat a basename.
  2. The user next computes SN = PRF(K, bsn) and sends SN to the Resource, along with a zero-knowledge proof that SN was computed correctly.

If the user does nothing harmful, the website delivers the requested service and nothing further happens. However, if the User abuses the site, the Resource will now ban the user by adding the pair (bsn, SN) to the banlist.

Now that the banlist is non-empty, we require an additional step to occur every time a user shows their credential: specifically, the User must prove to the website that they aren’t on the list. Doing this requires the User to enumerate every pair (bsn_i, SN_i) on the banlist, and prove that for each one, the following statement is true:

SN_i ≠ PRF(K, bsn_i), using the User’s key K.

Naturally this approach requires a bit more work on the User’s part: if there are M users on the banned list, then every User must do about M extra pieces of work when Showing their credential, which hopefully means that the number of banned users stays small.
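The basename mechanism can be sketched in Python, again with HMAC standing in for the PRF. The big cheat in this sketch is that the checks run directly over the user's key: in the real protocol the user would instead send ZK proofs that SN is well-formed and that SN_i ≠ PRF(K, bsn_i) for every banlist entry, so the Resource never learns K.

```python
import hmac
import hashlib
import secrets

def prf(key: bytes, bsn: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in PRF for this sketch.
    return hmac.new(key, bsn, hashlib.sha256).digest()

banlist: list[tuple[bytes, bytes]] = []   # Resource's list of (bsn, SN) pairs

def authenticate(user_key: bytes) -> tuple[bytes, bytes]:
    """One credential show. The Resource picks a fresh basename, the user
    answers with SN = PRF(K, bsn), and the banlist clauses are evaluated
    in the clear here (in reality: proven in zero knowledge)."""
    bsn = secrets.token_bytes(16)          # fresh basename, never repeated
    sn = prf(user_key, bsn)
    for bsn_i, sn_i in banlist:            # ~M extra checks for M banned users
        if prf(user_key, bsn_i) == sn_i:
            raise PermissionError("credential is banned")
    return bsn, sn

K = secrets.token_bytes(32)      # some user's credential key
bsn, sn = authenticate(K)        # a normal, successful login
banlist.append((bsn, sn))        # the site bans this user after abuse

try:
    authenticate(K)              # every later show now trips the banlist check
    raise AssertionError("banned user should have been rejected")
except PermissionError:
    pass

other = secrets.token_bytes(32)  # a different user's key is unaffected
authenticate(other)
```

Notice that banning requires only the (bsn, SN) transcript of the abusive show, never the user's identity, which is the whole point.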

So far we’ve just dipped our toes into the techniques that we can use for building anonymous credentials. This tour has been extremely shallow: we haven’t talked about how to build any of the pieces we need to make them work. We also haven’t addressed tough real-world questions like: where are these digital identity certificates coming from, and what do we actually use them for?

In the next part of the piece I’m going to try to make this all much more concrete, by looking at two real-world examples: PrivacyPass, and a brand-new proposal from Google to tie anonymous credentials to your driver’s license on Android phones.

(To be continued)

Headline image: Islington Education Library Service

Notes:

  1. PrivacyPass has two separate issuance protocols. One uses blind RSA signatures, which are more or less an exact mapping to the protocol we described above. The second one replaces the signature with a special kind of MAC scheme, which is built from an elliptic-curve OPRF scheme. MACs work very similarly to signatures, but require the secret key for verification. Hence, this version of PrivacyPass really only works in cases where the Resource and the Issuer are the same person, or where the Resource is willing to outsource verification of credentials to the Issuer.
  2. This is a normal property of zero-knowledge proofs, namely that any given “proof” should reveal nothing about the information being proven. In most settings this extends to preventing anyone from linking a proof to the specific piece of secret input you’re proving over, which is called a witness.
  3. A blind signature ensures that the server never learns which message it’s signing. A partially-blind signature protocol allows the server to see a part of the message, but hides another part. For example, a partially-blind signature protocol might allow the server to see the driver’s license data that it’s signing, but not learn the value K that’s being embedded within a specific part of the credential. A second way to accomplish this is for the User to simply commit to K (e.g., compute a hash of K), and store this value within the credential. The ZK statement would then be modified to prove: “I know some value K that opens the commitment stored in my credential.” This is pretty deep in the weeds.
  4. In more detail, imagine that the User and Resource both know that the date is “December 4, 2026”. Then we can compute the serial number as follows:

    SN = PRF(K, date || i)

    As long as we keep the restriction that 0 <= i < N (and we update the other ZK clauses appropriately, so they ensure the right date is included in this input), this approach allows us to use N different counter values (i) within each day. Once both parties increment the date value, we should get an entirely new set of N counter values. Days can be swapped for hours, or even shorter periods, provided that both parties have good clocks.

  5. In real systems we do need to be a bit careful to ensure that the key K is chosen honestly and at random, to avoid a user duplicating another user’s key or doing something tricky. Often real-world issuance protocols will have K chosen jointly by the Issuer and User, but this is a bit too technically deep for a blog post.


Comments

  • By lachiflippi 2026-03-03 11:46

    I've been really enjoying all these articles proposing solutions to anonymous age verification, mainly because most of them are written as if this has never been implemented in the real world. German IDs support age verification that just returns a yes/no response to the question "is this user above the age of 18," and not a single service in the entire country supports it.

    Anonymous age verification isn't a technical problem to be solved, as it's already been solved, it's a societal problem in that either the companies or the politicians pushing for age verification don't want to support it.

    • By 3RTB297 2026-03-03 13:50

      This is immensely counter-intuitive to many Americans. They wrongly assume that digital IDs are some Biblical apocalyptic level invasion of privacy, when every state ID database is already 1) linked to Federal ones, and 2) full of the same data on your driver's license anyway.

      I've tried to explain this to people, that a digital ID done well is better than the fraud-enabling 1960's hodgepodge in use that has served fraudsters better than citizens for 30 years. They set their teeth and refuse based on use of the word "digital" in the title alone.

      It will take generational change for the US to get something as banal as a digital ID already in use in dozens of countries, for no other reason than mindless panic over misunderstanding everything about digital ID systems, how IDs even work, and how governments work.

      • By tsimionescu 2026-03-03 14:25

        Oh, that's not the half of it. In my own country, digital ID adoption was a political hot topic for a long time after the Orthodox Church realized that the new chips contain 12-digit long IDs that might contain the sequence 666. This despite everyone in the country having a legal ID with a number code that can also happen to contain this same sequence - but somehow the mere possibility of this happening in the digital IDs sparked a huge outrage and made politicians avoid the topic for quite a while.

      • By derbOac 2026-03-03 14:46

        I agree that there's a lack of awareness of what happens in other countries with ID, but I think it is also a different situation in the US.

        States in the US are in a lot of ways more comparable to countries in the EU. It's not an exact parallel, but in many ways it holds. So a federal ID would be like requiring an EU ID on top of a national ID.

        I also don't think privacy per se is the real issue of concern, it's concern about consolidation of federalized power. Privacy is one criterion by which you judge the extent to which power has been consolidated or can be consolidated.

        The question isn't "can this be federalized safely in theory", it's "is it necessary to federalize this" or "what is the worst possible outcome of this if abused?"

        As we are seeing recently, whatever can be abused in terms of consolidated power will be eventually, given enough time.

        I guess discussions of whether or not you can have cryptographic verification with anonymity kind of miss the point at some level. It's good to be mindful of in case we go down the dystopian surveillance route, but it ignores the bigger-picture issues about freedom of speech, government control over access (cryptographic guarantees of credential verification don't guarantee that the ID is issued appropriately, nor that the card will be issued with that cryptographic system implemented in good faith), and so forth.

    • By AnthonyMouse 2026-03-03 15:57

      > German IDs support age verification that just returns a yes/no response to the question "is this user above the age of 18,"

      If the only thing that came out of the ID was those letters then you wouldn't need the ID, you could just type "yes" or "no" when the site asks you if you're over 18. So it's presumably not doing that, instead it's providing some kind of signature.

      And then the privacy implied by "just returns a yes/no response" isn't actually there, because it's actually returning more than that. Does the response have a fixed signature which is unique to the ID, therefore able to be correlated across sites? Does the ID have a unique public keypair that it uses to sign, with the same problem? If someone extracts the key from one ID, or just hooks it up to a computer, can they now set up a service to anonymously sign for everyone in the world? If they can't anonymously sign for everyone, can't the same mechanism used to identify them also be used to identify anyone else?

      "Someone attempted to do this but no one uses it" is no proof that their attempt was any good or addressed the concerns people have about doing this.

      • By lachiflippi 2026-03-03 16:32

        My understanding is that the responses are signed, but in a way that prevents linking signatures across vendors, so the same card being used for verification on different sites could not be linked, while the same card being used multiple times for the same vendor could.

        As I'm not an expert on the crypto underlying the protocol, feel free to check the eIDAS standard for more info (the documents are in English, even if the link is not): https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisati...
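        The property described (linkable within one vendor, unlinkable across vendors) can be illustrated with a toy construction; this is only a sketch of the idea, not the actual eIDAS protocol, and the card secret and vendor names are hypothetical:

```python
import hashlib
import hmac

def vendor_pseudonym(card_secret: bytes, vendor_id: str) -> str:
    """Stable pseudonym for one vendor: repeat uses at the same vendor
    are linkable, but two vendors cannot correlate their values without
    the on-card secret."""
    return hmac.new(card_secret, vendor_id.encode(), hashlib.sha256).hexdigest()

secret = b"hypothetical-on-card-secret"
a1 = vendor_pseudonym(secret, "shop-a.example")
a2 = vendor_pseudonym(secret, "shop-a.example")
b1 = vendor_pseudonym(secret, "shop-b.example")
print(a1 == a2, a1 == b1)  # True False
```

        The real scheme has to achieve this without ever letting the secret leave the card, which is where the elliptic-curve machinery in the specification comes in.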

        • By AnthonyMouse 2026-03-03 16:56

          A cursory look implies they're using group signatures:

          https://en.wikipedia.org/wiki/Group_signature

          Which allow the group manager (presumably the government, or anyone who compromises them) to identify who signed something.

          If using the same card multiple times with the same site allows the site to correlate them then that obviously also allows the site to link two accounts you intended to be separate, or two sites to set themselves up as the same "vendor" and thereby correlate your accounts between them.

          • By chucklenorris 2026-03-03 22:13

            ZKPs are mentioned in the technical specs but there's no implementation yet. I'd put it down to lack of standardisation / lack of hardware support for these protocols, but who knows..

    • By Hizonner 2026-03-03 15:30

      The argument is that the mechanisms in use in the German IDs (and others like them) rely on trusted parties and/or trusted hardware, and therefore don't adequately assure anonymity. And this is in fact true; the trusted parties are among the ones you might want to hide the information from.

      Trust is bad in security. It's not complicated to understand this.

    • By nijave 2026-03-03 13:20

      I wish all governments would just run identity services and mandate usages that return anonymous attestations. Age being the most obvious attestation but something like residence status could also be useful.

      Something as simple as a JWT with claims (and random uuid id) would work
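      A minimal sketch of such a token, using only the standard library; this is a toy HMAC-signed JWT with an invented claim name, not a production SD-JWT:

```python
import base64
import hashlib
import hmac
import json
import uuid

def b64url(data: bytes) -> str:
    """Unpadded base64url, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_attestation(issuer_key: bytes) -> str:
    """Toy JWT carrying only an over-18 claim and a random id."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"over_18": True, "jti": str(uuid.uuid4())}).encode())
    sig = b64url(hmac.new(issuer_key, f"{header}.{claims}".encode(), hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify(issuer_key: bytes, token: str) -> dict:
    header, claims, sig = token.split(".")
    good = b64url(hmac.new(issuer_key, f"{header}.{claims}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, good):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(claims + "=" * (-len(claims) % 4)))

token = make_attestation(b"issuer-key")
print(verify(b"issuer-key", token)["over_18"])  # True
```

      Note the limitations: HMAC means anyone who can verify can also mint tokens, and the fixed jti makes every presentation of the same token linkable, which is exactly the sort of gap the replies below get into.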

      • By hirsin 2026-03-03 13:45

        It can't be quite that simple because you have a couple additional problems to solve - (effectively restating bits of the article poorly and partially)

        1. You don't want these to be replayable (give your JWT to someone else to use) so they need to be bounded in some ways (eg intended website, time, proof it came from you and not someone else).

        2. You don't want the government to know which website you're going to, nor allow the government and the website to collaborate to deanonymize you (or have the government force a website to turn over the list of tokens they got). So the government can't just hand you a uuid that the website could hand back to them to deanonymize.

        The SD JWT and related specs solve for these, which is how mDL and other digital IDs can preserve privacy in this situation.
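          The binding checks in point 1 can be sketched as follows; the claim names follow JWT conventions (`aud`, `exp`), but this is an illustration with fixed made-up timestamps, not the SD-JWT verification procedure:

```python
def check_binding(claims: dict, expected_aud: str, now: int) -> bool:
    """Reject a token replayed to a different site (audience binding)
    or presented after its expiry (time binding)."""
    return claims.get("aud") == expected_aud and claims.get("exp", 0) > now

claims = {"over_18": True, "aud": "https://example.site", "exp": 1_800_000_000}
ok = check_binding(claims, "https://example.site", now=1_750_000_000)
replayed = check_binding(claims, "https://other.site", now=1_750_000_000)
print(ok, replayed)  # True False
```

          Proof that the token came from you and not someone else (holder binding) needs an extra key pair on top of this, which is the part simple bearer tokens can't provide.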

        • By AnthonyMouse 2026-03-03 15:07

          > You don't want these to be replayable (give your JWT to someone else to use) so they need to be bounded in some ways (eg intended website, time, proof it came from you and not someone else).

          But these are the things that make it non-anonymous, because then instead of one token that says "is over 18" that you get once and keep forever, everyone constantly has to request zillions of tokens. Which opens up a timing attack, because then the issuer and site can collude to see that every time notbob69 signs into the website, Bob Smith requested a token, and over really quite a small number of logins to the site, that correlation becomes uniquely identifying.

          Meanwhile we don't need to solve it this way, because it's much better to have the site provide a header that says "this content is only for adults" than to have the user provide the site with anything; the user's device can then do what it will with that information, i.e. not show the content if the user is a minor.
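          The timing correlation is easy to simulate with made-up logs: intersect the set of people who requested a token shortly before each of notbob69's logins, and the candidate set collapses after a handful of observations.

```python
# Illustrative only: a colluding issuer and site line up their logs.
issuer_log = {                      # who requested a token near each time
    100: {"alice", "bob", "carol"},
    205: {"bob", "dave"},
    310: {"bob", "erin", "carol"},
}
site_logins = [100, 205, 310]       # times notbob69 logged in to the site

candidates = set.intersection(*(issuer_log[t] for t in site_logins))
print(candidates)  # {'bob'}
```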

          • By hirsin 2026-03-03 15:09

            Which is why you separate the credential issuance from the credential use, per the standard mentioned.

            • By AnthonyMouse 2026-03-03 15:16

              The cryptography provides nothing to establish that this separation is actually being maintained and there is plenty of evidence (e.g. Snowden) of governments doing exactly the opposite while publicly claiming the contrary.

              On top of that, it's a timing attack, so all you need is the logs from both of them. Government gets breached and the logs published, all the sites learn who you are. Government becomes corrupt/authoritarian, seizes logs from sites openly or in secret (and can use the ones from e.g. Cloudflare without the site itself even knowing about it), retroactively identifies people.

              • By hirsin 2026-03-03 17:32

                I'd review the setup here. You're missing the critical distinction that the cryptography supports - separating entirely (in time and space) the issuance of the cred to the user and the use of that cred with a website.

                Unless you're getting the device logs from the user's device (in which case... all of this is moot) there is no timing attack. Six months ago you got your mobile driver's license. And then today you used it to validate your age to a website anonymously. What's the timing attack there?

                • By AnthonyMouse 2026-03-03 18:16

                  If the driver's license can generate new anonymous tokens itself then anyone can hook up a driver's license to a computer and set up a service to sign for everybody. If it can't, whenever you want to prove your age to a service you need to get a new token from a third party, and then there is a timing correlation because you're asking for the token right before you use the service.

                  The article proposes a hypothetical solution where you get some finite number of tokens at once, but then the obvious problem is, what happens when you run out? First, it brings back the timing correlation when you ask for more just before you use one, and the number of times you have to correlate in order to be unique is so small it could still be a problem. Second, there are legitimate reasons to use an arbitrarily large number of tokens (e.g. building a search index of the web, content filters that want to scan the contents of links), but "finite number of tokens" was the thing preventing someone from setting up the service to provide tokens to anyone.

                  • By LorenPechtel 2026-03-04 0:46

                    Blocking said search indexes is probably a good thing.

                    I'm thinking perhaps a system where you feed it a credential, a small program runs and maintains a pool of tokens that has some reasonably finite lifespan. The server that issues the tokens restricts the number of uses of the credential. Timing attacks are impossible because your token requests are normally not associated with your uses of the tokens.

                    And when you use a token the site gives back a session key, further access just replays the session key (so long as it's HTTPS the key is encrypted, hard to do a replay attack) up to whatever time and rate limits the website permits.

                    • By AnthonyMouse 2026-03-04 7:44

                      > Blocking said search indexes is probably a good thing.

                      I feel like "we should ban all search engines" is going to be pretty unpopular.

                      > And when you use a token the site gives back a session key

                      And then you have a session key, until you don't, because you signed out of that account to sign into another one, or signed into it on a different browser or device etc.

                      > The server that issues the tokens restricts the number of uses of the credential.

                      Suppose I have a device on my home or corporate network that scans email links. It's only trying to filter malware and scams, but if a link goes to an adult content barrier then it needs tokens so it can scan the contents of the link to make sure there isn't malware behind the adult content barrier.

                      If I only have a finite number of tokens then the malware spammer can just send messages with more links than I have tokens until I run out, then start sending links to malware that bypass the scanner because it's out of tokens.

                      • By LorenPechtel 2026-03-04 22:20

                        Search engines should not be using website search capabilities. That's putting an undue load on the systems. A board I'm involved with recently had to block search for guests because we were getting bombarded with guest searches that looked like some bot was taking a web query and tossing it around to a bunch of sites. Many of them not even in English.

          • By AuthAuth 2026-03-03 19:04

            The government can already do this with the ISP. I don't think the government should be part of the average person's threat model.

            • By AnthonyMouse 2026-03-04 8:01

              > The government can already do this with the ISP.

              This is what VPNs or public libraries are for.

              > I dont think government should be part of the average person's threat model.

              Tell that to the people in places with governments that are a threat to the average person.

              "It can't happen here" is a dangerous hubris.

              On top of that, do notice that there is more than one government. What happens when Salt Typhoon comes for this stuff?

            • By LorenPechtel 2026-03-04 0:34

              If the government can access it, all too often bad actors can also access it. And all too often government and bad actors are one and the same.

        • By nijave 2026-03-06 15:25

          Imo these are nice to haves. The physical system of ID cards already has these problems but works well enough.

          People can loan their ID to someone else (ask college kids with an older sibling...)

          When you use your physical ID, the government can frequently deanonymize you, either through automated databases (especially when purchasing drugs) or by subpoenaing camera footage, visitor lists, etc.

        • By Izkata 2026-03-03 15:02

          But one overlooked advantage of manually copying JWTs is that the user doesn't have to blindly trust they're not hiding extra information. They can be decoded by the user to see there's only what should be there.
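      For example, decoding a payload needs nothing but the standard library (the token here is hypothetical, with a fake signature, since inspection doesn't require verification):

```python
import base64
import json

def decode_claims(token: str) -> dict:
    """Decode a JWT payload (without verifying it) so the holder can
    see exactly which claims the token discloses."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)      # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# hypothetical token carrying only an over-18 claim
payload = base64.urlsafe_b64encode(json.dumps({"over_18": True}).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJIUzI1NiJ9.{payload}.fake-signature"
print(decode_claims(token))  # {'over_18': True}
```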

    • By 2Gkashmiri 2026-03-03 12:17

      I remember reading in tech magazines about the "foss" achievement that went on to become Aadhaar. Remember, this was prior to 2007, I think.

      The idea was your ID would be an authenticator of sorts. When you need to verify yourself, the website asks Aadhaar if the person is genuine, and Aadhaar returns a binary yes/no. Same for questions like: is the gender male? Is the age above 18?

      It would not return any other data.

      In the end, it became just another "formality" and a tool for politicians to flex muscles.

      People ended up taking photocopies of your card "just in case" because "that's the norm", even when it was said that's a bad idea.

      People still do Aadhaar KYC, but it is now in the hands of politicians and the bureaucracy.

      • By matthewdgreen 2026-03-03 13:41

        The problem with these "yes/no" systems is that they also involve the websites you visit calling up a centralized party and asking if you're old enough. This is fine if the websites aren't interested (or if you really trust your government with your web browsing history), but gets unfortunate if you don't want to share that information.

    • By jeroenhd 2026-03-03 14:49

      The age verification system is being developed with an EU-wide standard. It's supposed to become part of the EU digital wallet initiative.

      The trick with age verification is to do it in a way that doesn't allow tracking by the service itself (i.e. returning the same token/signature every time) or from the government (shouldn't see what sites you use when). That has pretty much been solved now, though.

    • By dabber21 2026-03-03 13:59

      I just recently used my ID to register for a lottery website using the AusweisApp. The first setup was a bit annoying, but once you are registered it's actually easy to use, and apparently you don't even need a phone; you can use a card reader on your PC as well.

    • By ReptileMan 2026-03-03 14:12

      >Anonymous age verification

      Anonymity from whom? Does the German government not know that Gunter Schmidt has just verified his age to the site GreatBDSMPartiesInBerlinForDragQueens.com? Even if they obtain the logs from the site?

      • By dabber21 2026-03-03 14:37

        afaik it comes directly from your ID card's chip; there is an app in between that temporarily stores that data so it can be submitted to the service you are registering with

        • By ReptileMan 2026-03-03 15:10

          So the app could phone home if it so desires?

    • By PunchyHamster 2026-03-03 13:39

      It's also a gateway to pushing for more. Once the APIs are in place and the databases are full, what's another "check" or a bit of info to add?

      Surely the safety of children is worth it right ?

    • By chocmake 2026-03-03 13:51

      If it is the case that German IDs supporting selective disclosure aren't seeing adoption by services, then the friction needs to be looked at; it may even just be that the feature is optional. It doesn't necessarily have to be an ulterior motive, and it'd be easy to be called out as conspiratorial otherwise.

      Right now, with age assurance laws and online services, no country has required any single approach beyond falling back on government ID. Each country has just said 'here are the minimum criteria, choose what you want' and left it up to services to comply.

      So what have services chosen? The least friction and cheapest existing solution to be compliant. For most services that's been using readily available facial scanning services and government IDs as fallback. Not all of them of course but it's so scattered that it makes it difficult for a person to know what they'll need for one service vs another (and perhaps even avoid use of a service if their approach doesn't align with the person's values).

      Without mandating better minimum privacy criteria, governments can just point to the fact that they're not preventing such tech from being used and leave it at that. But solutions also need to be affordable for a wide range of sites/services to adopt, and to have good support (interfaces, etc.) around them to catch on, so it's not entirely a question of whether the tech exists per se.

  • By imglorp 2026-03-03 11:38

    We all know these laws are about suppressing dissent and not about age.

    If anyone implemented this privacy-preserving scheme, would all the laws flip to say "yeah, we really did mean it: govt ID tied to your post"?

    • By zug_zug 2026-03-03 12:29

      All the more reason for us to get out an actual implementation of age verification that IS anonymous first, so that when a law is pushed for or passed, companies can adopt the anonymous implementation.

      • By jaimex2 2026-03-03 12:40

        No, there's no compromise here. Anyone pushing for age verification or going along with it needs to get replaced by a service that is immune to government overreach.

        • By sanex 2026-03-03 14:10

          Some of us do see value in age and identity verification if the anonymity problem is solved, so I very much disagree.

          • By fwn 2026-03-03 14:57

            Might be vulnerable to classic salami tactics, though. Once we arrive at a general consensus on new norms that expect age verification online, legislating full identification of users becomes an easy step 2.

            Maybe wait for the next terror attack before pushing for it, but it's an easy fix in a culture that has already accepted a layer of control against the user. The end user will only perceive a small difference between providing full ID and providing just verified age information.

            I want to believe that some supporters of age verification are not cynical. However, whatever good can be achieved through age verification seems such a small win, compared to the dangerous precedent it sets for the internet in general. I cannot get my head around it.

          • By LorenPechtel 2026-03-04 0:54

            And some of us do not believe the identity bit can be truly solved.

            In the real world it's always people looking to suppress information or dissent that are pushing for such schemes. It always masquerades as protecting minors (protecting them from what? The one proper attempt to prove sexual materials are harmful found no evidence of said harm.) or as hunting for CSAM (and if you do implement an effective system it will get circumvented by putting relays in hostile countries.)

  • By screwt 2026-03-03 11:36

    This article is a great explainer of the basics underlying anonymous credentials. I look forward to the promised follow-up explaining real-world examples.

    The key issue however is trust. The underlying protocols may support zero-knowledge proofs, but as a user I'm unlikely to be able to inspect those underlying protocols. I need to be able to see exactly what information I'm allowing the Issuer to see; otherwise a "correct" anonymous scheme is indistinguishable from a "bad" scheme whereby the Issuer sees both my full ID and details of the Resource I wish to access. Assuming a small set of centralized Issuers, they are in a position of great power if they can see exactly who is trying to access exactly what at all times. That's the question of trust - trust in the Issuer and in the implementation, not the underlying math.

    • By lwkl 2026-03-03 12:31

    In Switzerland a digital identity like this will launch this summer, and the underlying infrastructure and app are open source. The issuer of the ID and the registry that holds and verifies credentials are separated. The protocol also isn't novel and is already used in other countries (Germany(?)).

    For more information, check out the technology behind it: https://www.eid.admin.ch/en/technology

    • By Normal_gaussian 2026-03-03 12:35

      This is exactly it. It is a huge issue if the authentication can trivially become non-privacy preserving in a way that is impenetrable to users.

    • By LorenPechtel 2026-03-04 0:55

      And a huge incentive for the black hats to undermine the issuers. They aren't going to remain secure.

HackerNews