On May 11, the European Commission confirmed that OpenAI had proactively offered to grant access to its latest cutting-edge model, GPT-5.5Cyber, marking a new phase of "proactive collaboration" in global AI regulation. The arrangement is intended to let regulators monitor model deployment more directly and assess potential security risks. Thomas Renier, a spokesperson for the European Commission, revealed that the two sides have already held multiple rounds of negotiations over the specifics of access, with several agencies likely to gain it, including ENISA, the AI Office, and the Directorate-General for Communications Networks.


While OpenAI has shown a strong willingness to be transparent, another major player, Anthropic, has taken a more reserved approach. The EU has held multiple meetings with Anthropic regarding its Mythos model, which has sparked widespread discussion on cybersecurity, but no access agreement comparable to the one with OpenAI has been reached. At present, the Mythos model is accessible only to a limited number of technical partners through the "Glasswing project," and the UK AI Safety Institute is the only partner with direct testing privileges.

This situation highlights a practical challenge facing European regulators as they implement the AI Act and the Cyber Resilience Act: lacking top-tier domestic model companies, they depend heavily on the voluntary cooperation of foreign tech giants.

As GPT-5.5Pro demonstrates "doctoral-level" capabilities in mathematical research and autonomous AI agents begin to penetrate sectors such as finance, the tension between regulatory lag and the widening technology gap is becoming increasingly pronounced. The EU's move is aimed not only at addressing current debates over technological safety, but also at establishing a regulatory framework grounded in hands-on intervention before the AGI wave sweeps through global supply chains, balancing innovation efficiency against public safety.