OpenAI stopped 5 campaigns that used ChatGPT to spread fake news

The generative AI giant is starting to police its own platform. OpenAI has stopped 5 campaigns that used ChatGPT to manipulate public opinion around the world. The problem of fake news and opinion manipulation keeps growing, and the dead internet theory no longer sounds like just a conspiracy theory. Will OpenAI's actions set a precedent that shapes the ethical use of AI?

OpenAI on the trail of 5 campaigns behind covert foreign influence operations

Generative artificial intelligence can effectively boost our productivity. Unfortunately, AI also has darker sides that are discussed relatively rarely online. One of them is using artificial intelligence to create content designed to manipulate public opinion and sway the results of political elections. An infamous example of a platform overrun by bots is X, the former Twitter. Elon Musk is aware that the number of bots poses a serious threat, hence his numerous (so far ineffective) attempts to remedy the situation.

OpenAI decided to play a far more effective sheriff: the company behind ChatGPT revealed that it had stopped 5 campaigns using its technology for “covert influence operations”. The information was published on May 30, with OpenAI emphasizing that:

Over the last three months, we disrupted five covert influence operations that sought to use our models to spread manipulation across the internet.

These entities used generative AI to create comments on articles, social media accounts, and account biographies. OpenAI further claims that a broader operation dubbed “Spamouflage” used its technology to mine social media and generate multilingual content on platforms such as X, Medium, and Blogspot. The aim of these activities was to “manipulate public opinion and influence the results of political elections.”

OpenAI's actions reveal the tip of the iceberg of problems that arise with AI

The second operation, dubbed “Bad Grammar” by OpenAI, targeted Ukraine, Moldova, the Baltic states, and the United States. It used OpenAI's models to run bots on Telegram and generate political commentary. Another entity, dubbed “Doppelganger”, used AI models to generate comments in English, French, German, Italian, and Polish, which were published on X and 9GAG in order to manipulate public opinion.

OpenAI also mentioned a group called the “International Union of Virtual Media”, which used generative AI to create long-form articles, headlines, and website content published on affiliated websites.

The Dead Internet theory is starting to take on a whole new meaning

OpenAI's actions clearly show that the problem of AI-powered bots generating content is real and increasingly serious. Interestingly, the issue predates the release of ChatGPT, based on GPT-3.5, in November 2022. Around 2016, a conspiracy theory called the “Dead Internet Theory” appeared online. It held that the internet is actually an empty place where bots talk to bots, and that up to 80% of the content found on the internet is not created by humans.

Initially, this theory was the subject of jokes rather than serious consideration. Today, in the era of generative artificial intelligence, the dead internet theory no longer seems as abstract as it did 8 years ago. The problem of bot-generated content on social media keeps growing, and X's owner has no idea how to break this vicious circle. OpenAI's actions, however, mark a kind of new beginning in controlling the content that appears on the internet.

Can AI influence election results?

The presidential election in the United States is fast approaching. It will be the first to take place during the so-called generative revolution, that is, the period since ChatGPT and other AI models appeared. The impact of AI-based bots is most visible on X, where comments can appear literally seconds after a tweet is published. This makes it easier than ever to manipulate public opinion. OpenAI's preventive actions may well be motivated by the upcoming presidential elections in the US and other countries around the world.