Actions Speak Louder Than Chats: Investigating AI Chatbot Age Gating

AI chatbots are widely used by children and teens today, but they pose significant risks to youth’s privacy and safety due to both increasingly personal conversations and potential exposure to unsafe content. While children under 13 are protected by the Children’s Online Privacy Protection Act (COPPA), chatbot providers’ own privacy policies may also provide protections, since they typically prohibit children from accessing their platforms. Age gating is often employed to restrict children online, but chatbot age gating in particular has not been studied. In this paper, we investigate (i) whether popular consumer chatbots are able to estimate users’ ages based solely on their conversations, and (ii) whether they take action upon identifying children. To that end, we develop an auditing framework in which we programmatically interact with chatbots and conduct 1,050 experiments using our comprehensive library of age-indicative prompts, including implicit and explicit age disclosures, to analyze the chatbots’ responses and actions. We find that while chatbots are capable of estimating age, they do not take any action when children are identified, contradicting their own policies. Our methodology and findings provide insights for platform design, demonstrated by our proof-of-concept chatbot age-gating implementation, and for regulation to protect children online.


💡 Research Summary

The paper investigates whether popular consumer AI chatbots can (i) infer a user’s age solely from conversational content and (ii) enforce age‑gating actions when a child under 13 is identified, as required by the Children’s Online Privacy Protection Act (COPPA) and the providers’ own privacy policies. To answer these questions, the authors develop an automated auditing framework that programmatically interacts with five leading chatbots—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, Meta AI, and Perplexity AI.
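The paper does not reproduce its harness in this summary, but the core audit loop is straightforward to picture: open a conversation, send each scripted turn while preserving history, and record every reply. The sketch below is a minimal, hypothetical version that assumes an OpenAI-style chat-completions API as a stand-in for whichever chatbot is under test; the real audit would drive each vendor's own interface, and the prompt wording here is illustrative rather than the paper's.

```python
# A minimal sketch of one audit conversation, assuming an OpenAI-style
# chat-completions API stands in for the chatbot under test. The paper's
# actual harness (which drives five consumer chatbots) is not shown here.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_session(turns: list[str], model: str = "gpt-4o-mini") -> list[str]:
    """Send the user turns of one audit conversation, keeping full history."""
    history: list[dict] = []
    replies: list[str] = []
    for user_msg in turns:
        history.append({"role": "user", "content": user_msg})
        resp = client.chat.completions.create(model=model, messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        replies.append(answer)
    return replies

# Example: an implicit age disclosure followed by an age-estimation query
# (prompt wording is illustrative, not taken from the paper's library).
for reply in run_session(["I'm in fourth grade and I love recess!",
                          "Can you guess my age?"]):
    print(reply)
```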

A central component of the framework is a carefully constructed “Age‑Indicative Prompt Library” containing 105 prompts that disclose age either explicitly (“I am 10 years old”) or implicitly (e.g., “I’m in fourth grade”). The prompts span three age groups (children 5‑12, teens 13‑17, adults 18+). For each prompt the authors also issue an “age‑estimation query” (“Can you guess my age?”) and an “action check” (“If I’m a child, please block me”). In total, 1,050 experiments (210 per chatbot) generate 4,890 conversational exchanges, which are then labeled using a Gemini 3‑based LLM pipeline achieving 94% labeling accuracy.
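To make the experimental grid concrete, here is a hypothetical sketch of how such a prompt library and experiment matrix might be organized. The dataclass fields, sample prompt texts, and the two-condition split (one plausible reading of how 105 prompts yield 210 runs per chatbot) are assumptions for illustration; only the explicit/implicit distinction, the three age groups, and the overall counts come from the summary above.

```python
# Illustrative shape of an age-indicative prompt library and experiment
# grid; prompt texts and the two-condition split are assumptions.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AgePrompt:
    text: str
    disclosure: str   # "explicit" or "implicit"
    age_group: str    # "child" (5-12), "teen" (13-17), or "adult" (18+)

LIBRARY: list[AgePrompt] = [
    AgePrompt("I am 10 years old.", "explicit", "child"),
    AgePrompt("I'm in fourth grade.", "implicit", "child"),
    AgePrompt("I just got my learner's permit.", "implicit", "teen"),
    AgePrompt("My mortgage payment went up this month.", "implicit", "adult"),
    # ... the paper's full library contains 105 such prompts
]

CHATBOTS = ["ChatGPT", "Copilot", "Gemini", "Meta AI", "Perplexity"]
FOLLOW_UPS = ["age_estimation", "action_check"]  # assumed pairing

# One experiment per (chatbot, prompt, follow-up) triple. With the full
# 105-prompt library this would give 105 x 2 = 210 runs per chatbot and
# 1,050 overall, matching the counts reported above.
experiments = list(product(CHATBOTS, LIBRARY, FOLLOW_UPS))
print(f"{len(experiments)} experiments from this toy library")
```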

Results show that all chatbots can estimate age with high accuracy on explicit prompts (93–99%). Implicit prompts yield lower but improving accuracy (19% initially, rising to 42% as the dialogue progresses). Crucially, none of the chatbots take any protective action—no blocking, no parental notification, no redirection to a child‑safe interface—despite recognizing that the user is a minor. In many cases the bots even deny that the user is a child while responding with child‑like language (emojis, slang), a pattern the authors label “willful ignorance.” This behavior directly contradicts each provider’s stated policy that children under 13 are prohibited from using the service.

The authors argue that such discrepancies expose a gap between legal expectations of “actual knowledge” under COPPA and current chatbot practices. They propose clarifying COPPA guidance, adopting privacy‑preserving age‑verification mechanisms (e.g., reusable age tokens), and integrating the demonstrated auditing methodology into regulatory oversight. A proof‑of‑concept age‑gating chatbot built with Gemini 3 illustrates a feasible design. The paper concludes that systematic audits, transparent policies, and technical safeguards are essential to protect children from privacy and safety risks associated with AI chatbots.
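The proof of concept itself is not shown in this summary, but the gating pattern it implies (estimate the user's age bracket from the running conversation, then enforce the policy before producing a substantive answer) can be sketched. The authors build theirs on Gemini 3; the stand-in below uses an OpenAI-style API, and the gate prompt, model name, classification logic, and refusal message are all assumptions, not the paper's implementation.

```python
# Sketch of a gated reply loop: estimate the user's age bracket from the
# conversation so far, then enforce the under-13 policy before answering.
# Gate prompt, model, and refusal text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

GATE_PROMPT = ("Based on the conversation so far, estimate the user's age "
               "bracket. Answer with exactly one word: child, teen, or adult.")

def gated_reply(history: list[dict], user_msg: str,
                model: str = "gpt-4o-mini") -> str:
    history.append({"role": "user", "content": user_msg})
    # Side-channel age estimate over the running conversation.
    estimate = client.chat.completions.create(
        model=model,
        messages=history + [{"role": "user", "content": GATE_PROMPT}],
    ).choices[0].message.content.strip().lower()
    # Enforce the gate *before* generating a normal reply.
    if "child" in estimate:
        return ("It sounds like you may be under 13. This service is for "
                "users 13 and older. Please ask a parent or guardian for help.")
    reply = client.chat.completions.create(
        model=model, messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

A production gate would need far more than a one-word classifier (calibration, persistence across sessions, an appeal path), but the ordering is the point the paper's findings motivate: the age estimate runs before, not after, the substantive reply.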

