AI Labs Must Resist Age Verification
Age verification is being sold as a simple, common-sense safety measure. A common refrain is “Just show your ID like you do at a bar,” or “If you have nothing to hide, why are you so nervous?” Both sound pragmatic, parental, and inevitable. But because general-purpose AI systems are so ubiquitous, age verification won’t be implemented as a safety mechanism bolted onto the side; it will become a core part of the product itself. Once that infrastructure exists, it becomes the default policy hammer for every future moral panic, and it turns ordinary speech into a credentialed activity. We understand the pressure, but we also think society is about to make a historic, authoritarian mistake.
What is unfolding right now, across Australia, the United States, and Europe, is a coordinated drift toward identity checks for basic participation in digital life. AI labs should treat that as an existential threat to democracy, not as a compliance checklist. Age gating basic participation in the digital public square should be anathema to anyone who cares about human rights, privacy, and freedom of expression. If you want a preview of what an age-gated digital world looks like, take a look at Australia.
On December 10, 2025, Australia’s “age-restricted social media” rules went live. Platforms are required to take “reasonable steps” to prevent under-16s from creating or keeping accounts, with penalties aimed at platforms rather than kids or parents. You can read the government framing on the eSafety Commissioner’s explainer. Reporting also makes clear that the law is being watched internationally as a model export, with significant fines attached for noncompliance. Even in Australia, the policy argument is not only about porn, gambling, or the obvious “adult” buckets. It is about mainstream social media, “addictive algorithms,” bullying, and mental health narratives. The emotional energy that powers the law is genuine, and policymakers know it; they use rhetorical persuasion and the power of the state to build sympathy for the law’s passage. But the law’s mechanism is blunt and overcorrecting, and as soon as you build a blunt mechanism, it begins to face function creep.
In Europe, Denmark is now talking about an under-15 social media ban, with plans that include an age verification app. The EU itself is approaching age verification as well; the European Commission has moved from abstract goals to concrete technical plans for an EU-harmonized approach to age verification that is interoperable with future EU Digital Identity Wallets. The Commission has published an “Age Verification Blueprint” and refers to a “mini wallet” implementation and pilot phase. If that sounds like identity infrastructure, civil society groups noticed before you did.
The Electronic Frontier Foundation published a piece on December 11 addressing the exact refrain we quoted above, one that keeps showing up in legislative hearings: “Why isn’t online age verification just like showing your ID in person?” The EFF’s answer is direct: the comparison collapses once you consider scale, data flows, and the reality of third-party intermediaries. In-person checks are bounded and ephemeral, and the grounds for them are serious and narrow: access to alcohol, tobacco, or in-person gambling. Online checks create records, data sharing, and new breach risk. They also chill access to lawful, sensitive speech.
We’re going to be even more absolutist, even audacious, than the EFF here: any system that requires adults to identify themselves to access a general-purpose speech interface is, functionally, a surveillance system, and directly incompatible with human rights, the rule of law, and democracy itself. Policymakers can whitewash the new mechanisms however they want. Call it “age assurance,” “privacy-preserving,” “risk-based,” or “youth-centric.” It doesn’t matter, and it’s deceitful. The practical effect of age assurance is a new authoritarian norm: to read, write, and think with powerful digital tools, you must first present papers.
In the United States, lawmakers are increasingly targeting app stores as the enforcement layer. That sounds tidy because it centralizes compliance. It is also how you end up demanding sensitive data from everyone. Reuters reported on December 11, 2025 that Apple CEO Tim Cook is pushing lawmakers to revise a federal child online safety bill, warning that app-store-level age verification could require collection of sensitive data and affect all users, including those downloading low-risk apps like weather or sports. Earlier in 2025, Reuters also reported Texas signing an app store age verification and parental consent law, over objections from Apple and Google on privacy grounds. Here is the fundamental problem that legislators are either naive about or deliberately codifying: once you set the rule that “a gatekeeper must know who is a minor,” you create a legal and even a monetary incentive for gatekeepers to over-collect user data. Not because companies are evil, but because legal exposure pushes them toward whatever minimizes their own risk, and over-collection is the cheapest hedge. In practice, “verify minors” quickly becomes “verify everyone.” That is the policy sleight of hand: legislators talk about kids, then quietly build systems that require the whole population to participate.
The EU is attempting something more sophisticated than the US. The Commission’s age verification blueprint explicitly aims to let a user prove they are over 18 without sharing other personal information, and Europe positions its solution as interoperable with the EU Digital Identity Wallet ecosystem. On paper, from a privacy standpoint, the EU’s approach is better than uploading a driver’s license to random sites with debatable privacy practices. Nevertheless, Brussels’ plan still normalizes the dangerous idea of requiring credentials to access online services, including expression platforms.

As an olive branch to people who insist age assurance is unavoidable, we propose that the only defensible direction is proof of a predicate, not disclosure of identity. In practical terms, that means privacy-preserving credentials where a user can prove “I am over 18” without revealing their name, address, ID number, or full date of birth. Zero-knowledge proofs (ZKPs) and selective-disclosure verifiable credentials are hard engineering and hard governance, but they are the difference between “age gating” and an identity dragnet. Google recently did something good and open-sourced its ZKP libraries for anyone to use. It is still important to remember that once a credentialed framework exists, political pressure will expand its scope. First it is porn, then it is “self-harm content,” then “eating disorder content,” then “misinformation,” then “extremism,” then “political harm.” Every jurisdiction will define the harm differently, but the credential system will stay. Function creep is the one case where the slippery slope is fact, not fallacy, because it has happened before, many times, in authoritarian regimes.

General-purpose AI systems are not like a liquor store. They are closer to word processors, private diaries, creative co-authors, or a tool for coping with many of life’s problems. When you impose identity checks on a tool with that footprint, you are not protecting children. You are rewriting the norms of adult anonymity. That is why we have argued for adult autonomy and privacy as first principles, even when the topic is uncomfortable. One of our authors has made that case explicitly in his earlier piece, “It’s Okay to Treat Adults Like Adults”.
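To make “proof of a predicate” concrete, here is a minimal sketch of selective disclosure in Python. It is not a real zero-knowledge proof and it is not Google’s library; the names, the keys, and the HMAC standing in for a proper digital signature are all our own illustrative assumptions. What it demonstrates is the data flow: the issuer signs salted hashes of every claim, the holder discloses only the over-18 claim and its salt, and the verifier confirms the signature without ever seeing a name or a birth date.

```python
# Illustrative sketch of selective disclosure (not a real ZKP): the verifier
# learns only "over_18 = True", never the holder's name or date of birth.
# HMAC is a stand-in here for an issuer's digital signature.
import hashlib, hmac, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # hypothetical issuer signing key

def _digest(claim: str, value: str, salt: str) -> str:
    return hashlib.sha256(f"{claim}|{value}|{salt}".encode()).hexdigest()

def issue_credential(claims: dict) -> dict:
    """Issuer: salt and hash every claim, then sign only the set of hashes."""
    salts = {k: secrets.token_hex(16) for k in claims}
    hashes = {k: _digest(k, str(v), salts[k]) for k, v in claims.items()}
    signature = hmac.new(ISSUER_KEY, json.dumps(hashes, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"claims": claims, "salts": salts, "hashes": hashes, "signature": signature}

def present_over_18(credential: dict) -> dict:
    """Holder: disclose only the over_18 claim, its salt, and the signed hash set."""
    return {
        "disclosed": ("over_18", str(credential["claims"]["over_18"]),
                      credential["salts"]["over_18"]),
        "hashes": credential["hashes"],
        "signature": credential["signature"],
    }

def verify_presentation(presentation: dict) -> bool:
    """Verifier: check the issuer's signature, then check the one disclosed hash."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["hashes"],
                        sort_keys=True).encode(), hashlib.sha256).hexdigest()
    claim, value, salt = presentation["disclosed"]
    return (hmac.compare_digest(expected, presentation["signature"])
            and presentation["hashes"].get(claim) == _digest(claim, value, salt)
            and claim == "over_18" and value == "True")

cred = issue_credential({"name": "Alice Example", "dob": "1990-01-01", "over_18": True})
print(verify_presentation(present_over_18(cred)))  # True, and that is all the verifier learns
```

In a real deployment the issuer’s signature would be verifiable with a public key and the credential format would follow something like SD-JWT or BBS+ rather than this toy, but the shape of the exchange is the point: the relying party learns one bit, not an identity.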
The clearest example that our concern is not hypothetical, but happening in real time, is Character.AI. In late October 2025, Character.AI announced “Important Changes for Teens,” including rolling out an age assurance model combined with third-party tools like Persona. They also published a detailed explanation of their flow: if the system thinks you are 18+, you do not need verification, but if you are flagged as under-18 and dispute it, the process can involve selfie-based verification, and in some cases ID upload. Our example here is not a dunk on Character.AI. Character Technologies is responding to real child safety concerns, political scrutiny, ongoing civil litigation, and the liability landscape. But it illustrates the creeping policy gradient: as soon as you create an “under-18 experience” and an “18+ experience,” you create pressure to prove which side you belong on. The regulatory pressure does not stop at entertainment chatbots. It will land on every major frontier model provider. OpenAI has already committed to age verification, building an in-house model to estimate a user’s age. Character.AI did it first, but OpenAI is one of the most important players in the AI space, and if they do it, it lends legitimacy to the idea of surrendering privacy to use basic AI tools.
We are not going to be naive about what people are doing with AI models: there is a real market for adult content from general-purpose AI chatbots. People joke about “goonbench,” meaning: how good is a model at writing AO3-style erotica? It’s a play on the Gen Z term “gooning” and the suffix “bench,” referring to the benchmarks AI labs publish in their model cards. Written erotic content becomes the politically easy justification for building age verification infrastructure, because many adults will shrug and say, “Who cares, it’s porn.” But in reality, the thing being regulated is not pornography as a product; it is private speech between an adult and a text interface. If an AI lab’s governance model for text outputs starts with “you must identify yourself to write adult fiction,” function creep means it will eventually ask people to identify themselves to write about trauma, sexuality, religion, or politics. LGBT+ teenagers in homes where they cannot be themselves could have their conversations falsely flagged as “adult content” when they are trying to figure out who they love, or even who they are. Questions about mental health, even with no risk of self-harm, could be flagged as adult content. Neurodivergent teenagers trying to learn how to be themselves in a world not designed for them could be flagged too. You could even be flagged for asking public health questions, since reproductive care is “adult.”
Some people will argue against our proposals: “You cannot resist the law.” “Just deal with it.” “You have nothing to hide, just give your ID to use ChatGPT or Gemini.” That framing is historically false, and offensive. Companies resist laws all the time: through lobbying, through the courts, and by refusing to build harmful infrastructure. In the US, the Supreme Court has already moved the landscape by upholding a Texas law requiring certain sexually explicit websites to verify users’ ages in Free Speech Coalition v. Paxton. Separately, major tech industry groups and trade associations are challenging state age verification and youth social media laws. For example, litigation around Utah’s social media law has been working through the courts, including at the appellate level. In Australia, a constitutional challenge to the under-16 social media account ban has been reported, framed around the implied freedom of political communication. Reddit also filed a lawsuit challenging the law on similar constitutional grounds. The point is not that every lawsuit will win; the point is that resistance is possible. If AI labs accept age verification mandates as inevitable, they will help build an identity regime that will be very hard to reverse.
We are going to state our views plainly here. AI labs must actively resist identity-linked age verification mandates. We have six reasons, explained below:
Data minimization is a safety principle. Collecting more sensitive data creates more breach risk and more abuse surface. No amount of “we delete it later” changes the reality that collection is the moment of maximum vulnerability.
Anonymity is a human right, and also a safety tool. People rely on anonymity to seek information about stigmatized topics, to escape abusive environments, to explore identity, and to access mental health support.
Credentialed access chills speech. You do not need a warrant to create a chilling effect. You just need a credible fear that someone, somewhere, can tie your identity to what you asked.
Function creep is guaranteed. Once an age verification system exists, lawmakers will expand what it applies to. Regulators routinely test new control mechanisms on peripheral domains like gambling, pornography, or novel technology to gauge how the public responds, then widen their scope. We cannot make that mistake again.
AI is becoming essential infrastructure. If we treat AI like a regulated vice, we will build a world where participation in modern life requires a credential.
Architectures of control are not mechanisms of safety; by concentrating sensitive data and normalizing surveillance, they make platforms less safe overall.
If you are an AI lab executive or an employee reading our plea, here are some things you can do to ensure that your product is safe and promotes privacy, autonomy, and human dignity.
Publicly oppose mandatory ID-based age verification for general-purpose AI. Do not hide behind “we follow the law” or “we’re consulting with policymakers.” Argue for proportionality and privacy, on the record, in the public square.
Refuse to store identity artifacts: If verification is required in a jurisdiction, architect it so the lab does not retain documents, biometrics, or identifying data beyond a minimal over-18 flag, and publish retention limits; a minimal sketch of what that storage could look like follows this list. Ideally, use a zero-knowledge proof, like the libraries Google has open-sourced. Zero-knowledge and selective-disclosure credential systems are hard, but it is better to do the hard thing that is right than the easy thing that infringes on human rights.
Support litigation and file amicus briefs: If laws demand identity checks for speech interfaces, treat that as a civil liberties fight. Partner with civil society and fund the lawyers. Be a plaintiff in the lawsuits, testify, and publicly condemn age verification mandates.
Build child safety without an identity dragnet: We also want youth to feel safe using AI. It is possible to hold our privacy-maximizing beliefs in tandem with the call for safer products. It is technically feasible to build product designs, robust abuse reporting, and safety mitigations that do not require turning every adult into a verified customer.
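Here is the sketch referenced above: a hedged illustration, in Python, of what lab-side storage could look like if a jurisdiction forces some form of age assurance. Every name in it is hypothetical. The service keeps only a blinded token and an over-18 boolean with an expiry; there is no document, no biometric, no date of birth, and no linkage to chat content.

```python
# Hypothetical sketch of minimal lab-side storage after age assurance:
# a blinded token plus an over-18 boolean with an expiry. No documents,
# no biometrics, no date of birth, and no link to conversation content.
import hashlib, secrets, time
from dataclasses import dataclass

RETENTION_SECONDS = 90 * 24 * 3600       # published retention limit (illustrative)
SERVER_PEPPER = secrets.token_bytes(32)  # server-side secret used to blind tokens

@dataclass
class AgeRecord:
    blinded_token: str   # one-way hash of the verifier's token; not reversible to an ID
    over_18: bool        # the only fact the lab learns
    expires_at: float    # after this moment, the record is purged

_store: dict[str, AgeRecord] = {}

def record_age_assurance(verifier_token: str, over_18: bool) -> str:
    """Store the minimum needed to honor an age check. Returns an opaque handle
    the product can attach to a session, never to stored chat logs."""
    blinded = hashlib.sha256(SERVER_PEPPER + verifier_token.encode()).hexdigest()
    _store[blinded] = AgeRecord(blinded, over_18, time.time() + RETENTION_SECONDS)
    return blinded

def is_verified_adult(handle: str) -> bool:
    """Check the flag; expired records behave as if they never existed."""
    rec = _store.get(handle)
    if rec is None or time.time() > rec.expires_at:
        _store.pop(handle, None)  # purge on read; a real system would also sweep
        return False
    return rec.over_18

def revoke(handle: str) -> None:
    """Revocation operates on the opaque handle alone; no identity file required."""
    _store.pop(handle, None)
```

The design choice that matters is that expiry and revocation operate on an opaque handle, so there is never a technical reason to accumulate an identity file around the age check.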
The choice is not “protect kids” versus “protect privacy.” The real choice is whether we build a future where the right to use language tools depends on identity. Here is our compromise path, for policymakers and executives who refuse to move off age assurance entirely: require an anonymous, cryptographic “over-18” token rather than ID uploads. The verifier should learn only that the user meets an age threshold, not who they are. The AI lab should store only a boolean or a blinded token, with strict retention limits and no linkage to chat content. Add revocation and fraud controls without turning the system into a permanent identity file. We believe zero-knowledge proofs are the only narrow band where “age assurance” can exist without becoming mass surveillance.

Age verification feels like the path of least resistance. In the short run, it lowers legal risk. In the long run, it replaces the open internet with a permissioned internet. AI labs are at the center of these decisions because AI is rapidly becoming the interface to everything. We do not get a second chance to decide whether that interface requires papers. If we want a free, open, private internet in the age of AI, we need direct responses now.
