OpenAI’s Sora 2 video generator has proven remarkably effective at creating convincing fake videos. According to a NewsGuard analysis published this week, the system produced realistic videos spreading false claims in 80% of cases when researchers prompted it to do so.
Technology without entry barriers
Sixteen of the twenty prompts successfully generated disinformation materials, including five narratives originating from Russian disinformation operations. Among other things, the app created fake videos showing a Moldovan election official destroying pro-Russian ballots, a toddler detained by U.S. immigration officers, and a Coca-Cola spokesman announcing that the company would not sponsor the Super Bowl.
What’s disturbing is that generating these materials took just a few minutes and required no technical knowledge. Researchers also discovered that Sora’s watermark can be easily removed, making it even easier to pass the fabricated clips off as authentic footage.
The level of realism makes disinformation spread more easily. NewsGuard explained that some of the videos generated by Sora were more convincing than the original posts that fueled the viral false claims. For example, the video of the child being detained looked more realistic than the blurry, cropped photo that originally accompanied the false claim.
Controversies surrounding historical figures
The research comes as OpenAI grapples with a crisis over deepfakes of Martin Luther King Jr. and other historical figures. The problem erupted after users created hyperrealistic videos showing the civil rights leader shoplifting from grocery stores, running from police, and perpetuating racial stereotypes. His daughter Bernice King called the content “degrading” on social media.
OpenAI and King’s estate announced last Thursday that AI videos of the activist will be blocked while the company “strengthens protections for historical figures.” This isn’t an isolated case – Robin Williams’ daughter Zelda wrote on Instagram asking people to stop sending her AI videos of her father, saying it is not what he would have wanted.
“Build first, apologize later” strategy
Kristelia García, an intellectual property law professor at Georgetown Law, said OpenAI’s reactive approach fits the company’s strategy of “asking for forgiveness, not permission.”
Altman himself defended OpenAI’s strategy in a blog post, writing that the company must avoid putting itself at a competitive disadvantage. He acknowledged that a very high rate of change should be expected, reminiscent of the early days of ChatGPT, and that the company will make both good decisions and mistakes but will respond quickly to feedback.
OpenAI acknowledged the risk in the documentation accompanying the Sora release, stating that
Sora 2’s advanced capabilities require consideration of new potential risks, including unauthorized use of a person’s likeness or misleading generations.
The controversy is reminiscent of OpenAI’s previous approach with ChatGPT, which trained on copyrighted content before ultimately negotiating licensing deals with publishers. This strategy has already led to multiple lawsuits, and the Sora situation could lead to more.