To create a healthier AI environment for teenagers, OpenAI recently announced the official launch of an "Age Prediction" feature in the consumer version of ChatGPT. The initiative aims to identify users under 18 and apply targeted safety protections to their accounts.
Unlike traditional self-reported age registration, the model OpenAI has deployed works by inference. It analyzes behavioral signals such as account age, active hours (for example, late-night usage habits), and long-term interaction patterns to estimate a user's age. When the system cannot confirm that a user is an adult, it adopts a "safety-first" strategy and automatically switches the account to a stricter safety mode.
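The "safety-first" fallback described above can be sketched as a simple decision rule. Everything here is an illustrative assumption — the signal names, the threshold, and the idea of a single probability score are invented for clarity; OpenAI has not published how its age-prediction model actually works:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical behavioral signals; not OpenAI's actual feature set."""
    account_age_days: int
    late_night_ratio: float      # fraction of sessions in late-night hours
    predicted_adult_prob: float  # assumed output of some age-prediction model

# Illustrative threshold: confidence required to treat the user as an adult.
SAFETY_THRESHOLD = 0.90

def choose_mode(signals: AccountSignals) -> str:
    """Safety-first fallback: any uncertainty defaults to the restricted
    (minor) experience until the user verifies their age otherwise."""
    if signals.predicted_adult_prob >= SAFETY_THRESHOLD:
        return "standard"
    # Stricter safety mode; a misclassified adult can appeal via
    # third-party identity verification, per the article.
    return "restricted"
```

The key design point the article describes is the asymmetry: a borderline score never unlocks the adult experience, it only triggers the restricted one.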
For users identified as minors, ChatGPT enforces a content "firewall" that automatically blocks graphic violence and gore, dangerous imitation challenges, sexual or violent role-play, depictions of self-harm, and content promoting unhealthy body image or appearance shaming.
To balance accuracy with user experience, OpenAI has also introduced a third-party identity verification service: an adult whose account is misclassified can restore full functionality by submitting a selfie for verification. In addition, parents can tailor the teen experience through dedicated controls, including setting downtime hours and receiving alerts about signs of acute mental distress, providing broader protection for minors' physical and mental well-being.
Key Points:
🛡️ Intelligent Behavior Detection: The system goes beyond registration data, predicting whether a user is under 18 from behavioral signals such as active hours and interaction patterns.
🚫 Strict Control of Five Content Types: Accounts identified as belonging to minors automatically block high-risk content including graphic violence, sexual role-play, dangerous challenges, self-harm, and unhealthy body-image material.
👨‍👩‍👧 Enhanced Parental Controls: Parents can set "quiet hours" and receive alerts about signs of psychological distress, while misidentified adults can recover full access through third-party selfie verification.