Meta, the social media giant, has long enjoyed a prestigious reputation in the AI community for its open-source Llama series of models, with the earlier versions from Llama 1 through Llama 3 drawing wide praise. With the release of Llama 4 in April 2025, however, Meta faced a major trust crisis.

At the time, Meta claimed that Llama 4 performed exceptionally well on benchmark tests. But once the model was released, developers quickly ran their own evaluations, and the results showed that Llama 4's actual performance fell far short of Meta's claims. Observers began to suspect that Meta had used improper means on the benchmarks. Although Meta initially denied the accusations, subsequent developments suggested that the Llama series was stagnating, and Meta gradually shifted its focus toward closed-source commercial models.


Recently, Yann LeCun, Meta's outgoing chief AI scientist, admitted in an interview with the Financial Times that Llama 4's results had indeed been manipulated before release. He revealed that the team submitted different model variants to different benchmark tasks in order to inflate scores. The consequences were severe: Llama 4 came to be seen as a failed product, and Meta's reputation suffered significant damage from accusations that it had gamed the test results.

The incident angered Meta founder Mark Zuckerberg, who lost confidence in the team responsible for the release and effectively sidelined the entire GenAI group. Many team members have since departed, and Yann LeCun, who had worked at Meta for ten years, has also announced his upcoming exit. This series of changes not only exposed Meta's internal difficulties but also raised doubts about its future in AI.

The episode has once again made waves in Meta's AI efforts, and the industry has reacted strongly. The release of Llama 4 will stand as a cautionary tale about the difficult balance companies must strike between pursuing technological advancement and maintaining integrity.