The non-profit Future of Life Institute (FLI) has released its "AI Safety Index 2025" report, a safety assessment of more than 20 leading companies including OpenAI, Anthropic, and Google DeepMind. The results show that leading developers collectively failed on two core indicators, "Existential Risk Control" and "Safety Process Disclosure," with an industry average score of only 42/100, far below the requirements set by the EU's "AI Ethics Guidelines."

Key Findings of the Report

- Risk Assessment: Only 3 companies publicly disclosed systematic risk-identification methods; OpenAI and DeepMind did not disclose technical details of their "Superintelligence Alignment" work.

- Safety Framework: Most companies lack cross-departmental safety officers, documented red-team exercises, and third-party audits; the report criticizes this as "more promises than evidence."

- Existential Safety: None of the evaluated companies provided clear control and coordination plans for systems "smarter than humans," which the report identifies as a structural weakness.

- Information Transparency: The depth and measurability of companies' disclosures lagged official guidelines by an average of 30 percentage points (illustrated in the sketch below).
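
To make the "30 percentage points behind" figure concrete, here is a minimal sketch of how such an average gap could be computed, assuming disclosures are scored on a 0-100 scale against a guideline benchmark. The company names, scores, and benchmark below are illustrative placeholders, not figures from the FLI report.

```python
# Illustrative only: computing an average percentage-point gap between
# disclosure scores and a guideline benchmark. All values are
# placeholders, not data from the FLI report.

GUIDELINE_BENCHMARK = 80  # hypothetical target score (0-100 scale)

disclosure_scores = {
    "Company A": 55,
    "Company B": 48,
    "Company C": 47,
}

# Gap in percentage points for each company, then the average gap.
gaps = {name: GUIDELINE_BENCHMARK - score
        for name, score in disclosure_scores.items()}
average_gap = sum(gaps.values()) / len(gaps)

print(f"Average gap: {average_gap:.0f} percentage points")  # -> 30
```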

FLI Initiatives

The report calls for the immediate establishment of an "AI Safety Transparency Registry" that would require companies to publicly disclose their safety methods, evaluation results, and independent audits. It also recommends that regulators adopt a "pre-launch approval" system for artificial general intelligence (AGI) projects, to avoid the "release first, govern later" approach.
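
As a rough illustration of what one registry entry might hold, the sketch below models the three disclosure categories the report names. FLI has not published a schema; the field names, types, and example values here are assumptions for illustration only.

```python
# Hypothetical shape of one entry in the proposed "AI Safety
# Transparency Registry". The structure is an assumption based on the
# three disclosure categories named in the report, not an FLI spec.
from dataclasses import dataclass


@dataclass
class RegistryEntry:
    company: str
    safety_methods: list[str]           # publicly disclosed safety methods
    evaluation_results: dict[str, str]  # evaluation name -> published result
    independent_audits: list[str]       # third-party audit references
    last_updated: str = ""              # ISO date of the latest disclosure


# Example entry with placeholder values (hypothetical company and data).
entry = RegistryEntry(
    company="ExampleLab",
    safety_methods=["red-team exercises", "risk-identification protocol"],
    evaluation_results={"dangerous-capability eval": "link to report"},
    independent_audits=["Third-Party Auditor LLC, 2025"],
    last_updated="2025-07-01",
)
```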

Industry Response

An OpenAI spokesperson said, "We have received the report and will publish an updated safety framework within 90 days." Google DeepMind stated, "We are currently evaluating the specific recommendations with our policy team." The European Commission's Internal Market Commissioner responded that the index will be referenced when enforcement of the "AI Act" begins in 2026, with violators facing fines of up to 2% of global revenue.

Market Impact

Analysts point out that safety compliance may become an "invisible gate" for next-generation large models, and expect that, starting in 2026, top companies will allocate 10%-15% of their R&D budgets to safety and auditing in order to secure regulatory approval.