With the rapid development of artificial intelligence technology and its potential risks, global collaborative governance has taken a crucial step. In February 2026, the United Nations announced the first membership list of its independent international scientific expert group on AI, which includes two Chinese scientists.
The establishment of this expert group is a core measure to implement the United Nations' initiative to strengthen AI regulation. Its main responsibilities include regularly assessing the latest developments in global AI technology, identifying systemic risks that may affect human society, economy, and cyberspace, and submitting science-based policy recommendations to the United Nations Secretariat and member states. The composition of the expert group ensures global diversity and technical professionalism, bringing together leading minds from fields such as computer science, ethics, and law.
The participation of Chinese scientists not only reflects the international recognition of China's technological strength in the AI field but also demonstrates China's active involvement in the formulation of international AI rules. These two selected individuals have long been dedicated to research on AI security benchmark testing, algorithm robustness, and ethical boundaries in human-machine collaboration. Their participation will help introduce more diverse perspectives into the international governance system, promoting the construction of an inclusive, accessible, and safe global AI ecosystem.
It is reported that the expert group will conduct its first study on "risk assessment standards for frontier models" and plans to release its first report on the global state of AI safety during the next UN General Assembly, providing an important reference for countries developing relevant laws and regulations.
Key Points:
🌐 New Coordinates for Global Governance: The establishment of a specialized scientific expert group by the United Nations marks a shift from fragmented regional consensus to a globally oriented, science-driven model of AI safety governance.
🇨🇳 Chinese Experts Selected: The inclusion of two Chinese scientists in the first expert group reflects China's key role in shaping global AI safety standards and policy-making.
📑 Authoritative Risk Assessment: The expert group will regularly issue safety assessment reports, focusing on scientific responses to the systemic risks that cutting-edge large models may pose.
