At a time when generative AI is sweeping the globe, keeping minors safe while they use it has remained a focal point of public attention. According to an official announcement from OpenAI, its flagship product ChatGPT launched an "age prediction" feature this week. The global rollout aims to give younger users a more accurately age-appropriate experience through intelligent identification, marking an important step in AI safeguards from a "one-size-fits-all" approach toward a "precision-based" one.
To achieve this goal, OpenAI has adopted a dual safeguard: large-scale behavioral analysis plus third-party identity verification. On the analysis side, the system uses deep learning to examine multiple dimensions of an account, such as the topics a user frequently discusses and the hours when the account is active. From these subtle interaction patterns, the system estimates whether a user is likely to be a minor.
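OpenAI has not published how its age-prediction model works, but the idea of combining behavioral signals into a "likely minor" estimate can be illustrated with a minimal sketch. Everything below is hypothetical: the feature names, weights, and threshold are invented for illustration and are not OpenAI's actual model.

```python
# Hypothetical sketch of behavioral age prediction.
# Feature names and weights are illustrative, NOT OpenAI's real model.

def predict_minor_score(features: dict) -> float:
    """Combine normalized behavioral signals (0..1) into a 'likely minor' score."""
    weights = {
        "homework_topic_ratio": 0.5,     # share of chats about schoolwork
        "after_school_activity": 0.3,    # activity concentrated in after-school hours
        "youth_slang_ratio": 0.2,        # youth-associated phrasing in messages
    }
    # Clamp each feature into [0, 1] and take the weighted sum.
    return sum(
        w * min(max(features.get(name, 0.0), 0.0), 1.0)
        for name, w in weights.items()
    )

def is_likely_minor(features: dict, threshold: float = 0.5) -> bool:
    """Flag an account as likely belonging to a minor if its score crosses the threshold."""
    return predict_minor_score(features) >= threshold
```

In practice such a score would come from a trained model rather than hand-set weights, but the shape of the decision is the same: many weak behavioral signals are aggregated, and a threshold determines whether the stricter under-18 experience is applied.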

If the system determines that a user is under 18, ChatGPT automatically activates a strict safety-filtering mode. In this mode, teenagers are shielded from commercial advertising of any kind. More importantly, OpenAI has stated that it will tightly restrict sensitive content such as violent or graphic scenes, high-risk challenges, romantic role-play, and material promoting extreme body aesthetics or unhealthy dieting, so that AI outputs align with the developmental needs of young people.
Notably, to strengthen the verification side, OpenAI has introduced Persona, a third-party identity verification service. A user who wants to confirm adult status and lift the restrictions must do so through a real-time selfie or by submitting a government-issued ID. This real-name verification responds to increasingly stringent data-protection regulations worldwide and also helps establish a genuine, trustworthy user profile.
The feature has already rolled out in most regions worldwide. Given Europe's distinct regulatory environment, OpenAI plans to extend it to EU member states in the coming weeks to fully comply with local legal requirements. From technical iteration to ethical protection, this change in ChatGPT reflects how AI giants are trying to strike a new balance between technological innovation and social responsibility.
