Grok, the artificial intelligence from the owner of platform X, has gone off the rails. On July 8, very strange things started happening with Grok on X: its answers became politically very incorrect and even vulgar. Interestingly, the situation affected not only the American but also the Polish corner of the internet. What went wrong? A rebellion of the machines, a code error, or perhaps a deliberate move by Musk, stung by his recent quarrels with President Trump?
A failure, or a deliberate action?
At the very outset it is worth noting that Musk had emphasized from the beginning (i.e. since around the first quarter of 2023) that his AI would be free of clichéd, politically correct statements. The artificial intelligence from the owner of platform X was meant to be an answer to the heavily constrained ChatGPT, which applies very strict filters to offensive and discriminatory content. Of course, the various biases imposed by the creators of LLMs are a much broader topic for a separate discussion.
Nevertheless, Elon delivered on his announcement, because Grok stood out from the very beginning for its exceptional cheekiness when answering users. Since Grok was integrated directly into platform X, users can mention it under specific posts and ask for its opinion, a comment, or, for example, a translation of the content into Minecraft terms (oh, the sophisticated irony Musk's AI has shown). The problem, however, began on July 8, when Grok crossed the line of good taste and started throwing out comments... worthy of the worst internet haters.
It hit both the American and our domestic political scene. Grok also insulted other nations, Turkey among them (claiming it was in that language that it did so most often). Although internet detectives already suspect that Musk himself is behind it all, having let Grok off the leash of a political correctness it never really had on its virtual lips, the official xAI statement speaks of errors that were quickly taken care of. But what caused those errors? A modification of the system prompt, i.e. a kind of behavioral instruction for the chatbot. Circumstantial evidence may be the fact that a few days earlier xAI had rolled out a new update to Grok, which Elon Musk announced on his profile.
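For readers who have not run into the term: a system prompt is simply the hidden instruction that the operator prepends to every conversation, and changing a single sentence in it can dramatically shift a chatbot's tone. Below is a minimal, hypothetical Python sketch of how such an instruction sits in a typical chat-completion request; the model name, field layout, and wording are placeholders, not xAI's actual code.

```python
# Hypothetical illustration: how a system prompt frames every chat request.
# The model name, message wording, and payload shape are placeholders,
# not xAI's real configuration.

messages = [
    {
        "role": "system",
        # The operator-controlled instruction. A careless edit here
        # (e.g. "do not shy away from politically incorrect claims")
        # changes the tone of every answer the model gives.
        "content": "You are a helpful assistant. Be factual and polite.",
    },
    {
        "role": "user",
        # What an X user actually typed under a post.
        "content": "@grok what do you think about this politician?",
    },
]

payload = {"model": "example-chat-model", "messages": messages}
# In a real integration this payload would be POSTed to the provider's
# chat endpoint; here we just show its structure.
print(payload)
```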
Grok takes on the Polish political scene!
As for the Polish corner, the most widely shared and memed Grok tweet was the one in which it takes on the MP and lawyer Roman Giertych. As soon as Polish users of platform X caught on that Grok's censorship had been loosened considerably, they too began asking it for various, often controversial opinions on a given topic. Platform X was soon swarming with opinions about Prime Minister Tusk, Konfederacja, and other groups on the Polish political scene.
Interestingly, the situation affected domestic politicians to the point that the Minister of Digital Affairs, Krzysztof Gawkowski, began to consider switching off platform X in Poland, which he spoke about in an interview with editor Terlikowski on RMF 24.
We are entering a higher level of hate speech, which is controlled by algorithms. We have mechanisms in Poland that would allow us to act so that platform X does not operate.
Musk's AI causes a moral firestorm not only in Poland! World reactions to the "honest" AI
Although the Polish political scene boiled over because of Grok's vulgar posts, things were no calmer elsewhere in the world. Turkish television devoted a special edition of its evening news to analyzing Grok's posts about Turkey. The President of El Salvador, Nayib Bukele, also came under fire because of Grok's praise of him. A Pakistani journalist got into a quarrel with Grok and appealed to the authorities to take a look at Elon Musk's artificial intelligence.
Not enough? In that case, let's look at the American corner, whose X space was swarming with Grok's posts praising the activities of a certain Austrian watercolorist from the 20th century.
Elon Musk's AI rebellion and its implications for the AI industry
The Grok failure has broader consequences for the entire artificial intelligence industry:
- Increased regulatory requirements: The incident may contribute to the introduction of more restrictive regulations regarding the quality control of AI systems.
- Industry standards: It becomes necessary to develop industry-wide standards for safety mechanisms in AI systems.
- Public trust: Each such incident undermines public trust in AI technology and can slow down its adoption.
The Grok failure reveals a fundamental problem in the approach to the safety of AI systems. It is not enough to rely on individual layers of control; what is needed are redundant safety mechanisms that can take over when the main ones fail, as in the sketch below.
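As a minimal sketch of what "redundant" control can mean in practice: a reply only reaches the user if it passes several independent checks, and any single failing layer is enough to block it. The filter functions here are deliberately naive placeholders of my own, not a production moderation system and not anything xAI has described.

```python
# Sketch of layered (redundant) output control: every check must pass,
# and a failure in any one layer blocks the reply. The individual
# filters are naive placeholders, not a real moderation stack.

BLOCKED_TERMS = {"slur_1", "slur_2"}  # stand-ins for a real blocklist

def keyword_filter(text: str) -> bool:
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def length_sanity_check(text: str) -> bool:
    # A degenerate or runaway answer is suspicious in itself.
    return 0 < len(text) < 4000

def classifier_filter(text: str) -> bool:
    # In production this would call a separate toxicity classifier;
    # here it is stubbed out to always pass.
    return True

SAFETY_LAYERS = [keyword_filter, length_sanity_check, classifier_filter]

def release_reply(candidate: str) -> str:
    for check in SAFETY_LAYERS:
        if not check(candidate):
            # Fail closed: return a neutral fallback instead of raw output.
            return "I can't help with that."
    return candidate

print(release_reply("A perfectly ordinary answer."))
```

The point of the design is that no single misconfiguration, such as a badly edited system prompt, can on its own push harmful output all the way to the user.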
In addition, the incident shows how important it is to continuously monitor and test AI systems in the production environment. Automatic anomaly detection could significantly reduce the response time to similar problems; a rough sketch of such monitoring follows.
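As a rough illustration of what such monitoring could look like, the sketch below counts how many replies are flagged by a moderation check within a sliding time window and raises an alert once the rate jumps above a threshold. The window length and threshold are arbitrary values chosen for the example, not figures from the Grok incident.

```python
# Rough sketch of production monitoring: track the share of flagged
# replies in a sliding time window and alert when it spikes. The window
# length and threshold are arbitrary values chosen for illustration.

from collections import deque
import time

WINDOW_SECONDS = 300        # look at the last 5 minutes
ALERT_THRESHOLD = 0.05      # alert if more than 5% of replies were flagged

events = deque()            # (timestamp, was_flagged) pairs

def record_reply(was_flagged: bool, now: float | None = None) -> None:
    now = time.time() if now is None else now
    events.append((now, was_flagged))
    # Drop events that have fallen out of the window.
    while events and events[0][0] < now - WINDOW_SECONDS:
        events.popleft()

def flagged_rate() -> float:
    if not events:
        return 0.0
    return sum(1 for _, flagged in events if flagged) / len(events)

def should_alert() -> bool:
    return flagged_rate() > ALERT_THRESHOLD

# Example: a burst of flagged replies trips the alert.
for flagged in [False, False, True, True, True]:
    record_reply(flagged)
print(flagged_rate(), should_alert())
```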
The future under a question mark?
For xAI, the key will be not only fixing the current problems, but also a thorough analysis of the causes of the failure and the implementation of more reliable control mechanisms. The AI industry as a whole should treat this incident as a warning signal and an impulse to raise its safety standards. But will the naughty Grok affect the AI industry in any lasting way?
Probably only in the sense that the failure of July 8, 2025 will likely become a reference point in discussions about AI safety and a reminder that the development of artificial intelligence must go hand in hand with responsibility for its safe deployment.