Recently, Meta's Llama 4 project was embroiled in a major scandal. In an interview, former Chief AI Scientist Yann LeCun admitted that the team had indeed "polished" the data to optimize benchmark results. The admission sparked widespread controversy and exposed a serious misalignment between Meta's management decisions and its technical direction in the pursuit of AI breakthroughs.
The Llama series was once renowned in the AI community for its open-source strategy, and Llama 2 and Llama 3 received significant attention and recognition. With the release of Llama 4, however, Meta's reputation declined rapidly. Yann LeCun's departure is believed to stem from his dissatisfaction with the company's large language model (LLM) strategy, while Tian Yuandong, another former FAIR technical director who was laid off, said that he and his team were made scapegoats in the affair. Internal chaos at Meta, combined with an overzealous pursuit of new technology, left the Llama 4 development process riddled with uncertainty.
Zuckerberg, eager to catch up with competitors, rushed to integrate AI into a wide range of products and pushed the development team to deliver on compressed timelines; the team ultimately resorted to extreme measures such as gaming benchmark rankings to cover up shortcomings. After Llama 4 was released, community developers found that its performance did not match Meta's claims, drawing widespread criticism. Zuckerberg eventually carried out large-scale layoffs of the internal team and brought in outside experts in an attempt to recover.
However, Meta's path to transformation has been anything but smooth. Its new closed-source model, "Avocado," faces controversy over allegedly "borrowing" from other companies' technology, and whether it can regain market trust remains uncertain. After this episode, the future of Meta's AI empire looks precarious, and whether it can rise again amid intense competition has become a focal point of industry attention.
