A recent study from the media analytics company NewsGuard exposed an uncomfortable reality in the artificial intelligence field: although AI video generation technology is advancing rapidly, mainstream AI chatbots are largely unable to identify these deepfakes, and not even tools built by the same companies that make the video generators are immune.
The research found that when shown fake videos created with OpenAI's video generation model Sora, OpenAI's own ChatGPT performed poorly, misjudging them with an error rate of 92.5%. In other words, for the large majority of Sora-generated videos, ChatGPT treated them as real recordings. Other major players fared little better: xAI's Grok had an error rate of 95%, while Google's Gemini did comparatively best but still erred 78% of the time.
More worrying, existing technical safeguards offer little protection in practice. Although Sora adds visible watermarks and invisible metadata to the videos it generates, the study found that these markers can be easily stripped with free tools or a simple "save as" operation. Once the watermark is gone, chatbots not only fail to spot the fabrication but sometimes confidently fabricate supporting evidence, citing fictional news sources to vouch for the fake video's authenticity.
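To illustrate how fragile metadata-based provenance is, the sketch below uses the free tool ffmpeg to remux a video while discarding its container-level metadata. This is a generic illustration of the "save as"-style stripping the study describes, not a claim about how Sora embeds its specific provenance markers; the file names are hypothetical.

```python
import subprocess

# Hypothetical file names, for illustration only.
SRC = "clip.mp4"
DST = "clip_stripped.mp4"

# Remux without re-encoding: stream data is copied verbatim,
# but all global container metadata is dropped (-map_metadata -1).
# Provenance tags stored as container metadata do not survive this step.
subprocess.run(
    [
        "ffmpeg", "-i", SRC,
        "-map_metadata", "-1",  # discard global metadata
        "-c", "copy",           # copy streams, no re-encoding
        DST,
    ],
    check=True,
)
```

The operation takes seconds and requires no specialized knowledge, which is why detection tools that rely solely on embedded markers are so easy to defeat.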
In response, OpenAI acknowledged that ChatGPT currently cannot determine whether a piece of content was generated by AI. Given that leading AI video tools can now produce footage that is hard to distinguish with the naked eye, while countermeasures lag behind the technology, the risk of false information spreading unchecked is substantial.