OpenAI's new video generation platform, Sora, has recently sparked widespread discussion. The platform is built on OpenAI's latest Sora 2 model, which can generate highly realistic videos, including fabricated footage of famous people such as Martin Luther King Jr., Michael Jackson, and Bryan Cranston. These videos not only depict celebrities but sometimes feature copyrighted characters like SpongeBob and Pikachu in shocking or harmful content.
On the Sora platform itself, users generally understand that the generated videos are not real. Once these videos are shared on other social media platforms, however, it becomes difficult for viewers to judge their authenticity. The realism of Sora's output not only risks misleading viewers but also exposes serious flaws in AI labeling technology, particularly the C2PA certification system that OpenAI itself helps govern.
C2PA certification, better known as "Content Credentials," is an Adobe-led standard that attaches invisible but verifiable metadata to images, videos, and audio, recording when and how the content was created or edited. In practice, however, the system has failed to effectively protect viewers from misinformation.
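Conceptually, a Content Credentials manifest is a signed block of provenance metadata bound to a specific media file, so that tampering with either the file or the metadata can be detected. The sketch below is a minimal, hypothetical illustration of that idea in Python; it does not implement the actual C2PA manifest format, and the key handling, function names, and fields are simplified assumptions (real Content Credentials use certificate chains and asymmetric signatures embedded in the file).

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; the real standard uses
# X.509 certificate chains and asymmetric signatures, not a shared secret.
SIGNING_KEY = b"demo-secret-key"

def attach_credentials(media_bytes: bytes, tool: str, action: str) -> dict:
    """Build a provenance record bound to the media by its hash, then sign it."""
    manifest = {
        "claim_generator": tool,   # which app or model produced the content
        "action": action,          # e.g. "created" or "edited"
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the metadata matches this exact file."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

if __name__ == "__main__":
    video = b"...raw video bytes..."
    creds = attach_credentials(video, tool="ExampleGenerator", action="created")
    print(verify_credentials(video, creds))                # True: file intact
    print(verify_credentials(video + b"extra", creds))     # False: file changed
```

Even with a robust scheme, this kind of provenance label only helps if it survives redistribution: when a video is re-encoded, screen-recorded, or stripped of metadata by another platform, the credentials are lost, which is part of why labeling breaks down once Sora videos circulate elsewhere.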
As a member of the Coalition for Content Provenance and Authenticity (C2PA), OpenAI helped develop this technology, yet the false content produced with Sora calls the effectiveness of the certification system into question. For viewers who may be misled, Sora is not only a technological breakthrough but also a major challenge to content authenticity.
Key Points:
🔍 Sora is a new video generation platform launched by OpenAI, capable of producing highly realistic fabricated footage of celebrities.
⚠️ Users on the platform know the videos are not real, but once shared elsewhere, their authenticity becomes hard to determine, potentially misleading viewers.
📉 The C2PA certification technology has failed to effectively protect viewers from misinformation and is challenged by the spread of false content.
