We live in an era when GenAI knows our secrets better than our best friends do. Although the present increasingly sounds as if it were lifted straight out of Cyberpunk or Ghost in the Shell, the threats these AI buddies carry are very real. Can you trust these digital confidants? It's time to check how the giants of artificial intelligence (Claude, ChatGPT and Grok) handle the protection of our privacy.
Secrets in the era of GenAI – you may be surprised what Big Tech knows about you
At the very beginning of the generative revolution, i.e. the premiere of ChatGPT (then running GPT-3.5) from OpenAI on November 30, 2022, society's approach to AI was rather skeptical. ChatGPT was a curiosity back then, a funny bot that could write a decent joke. With time (and the evolution of the models), people placed more and more trust in GenAI. ChatGPT and other models have not only become machines for translating intricate topics into plain language, but have somewhat transformed into pocket psychologists.
ChatGPT, like other LLMs, is built so that the model's priority is to satisfy the user with an answer to their question. That is why the models hallucinate so often: their main goal is to produce an answer, which does not necessarily have to be in line with the facts. The style of those responses (affirming and accommodating), however, meant that many people felt a kind of bond with GenAI.
However, this is a very peculiar and rather public relationship, because shared conversations with ChatGPT could be indexed by Google – which means anyone could access them, as I recently reported on our portal. Moreover, Altman himself has clearly confirmed that conversations with ChatGPT can be handed over to courts and prosecutors' offices if such institutions ask OpenAI to share them. So what does the privacy policy look like in individual LLMs? I took the three most popular GenAI models: Claude from Anthropic, ChatGPT from OpenAI and Grok from xAI.
Claude (Anthropic): Gold standard or PR play?
Anthropic positions itself as the "responsible player" on the GenAI market. And it must be admitted that, in terms of privacy, they actually go against the current of industry trends.
What they do well:
- Default data protection: Claude does not use user conversations for model training without explicit consent
- Transparency: a privacy policy written in language that even a human can understand (not only a lawyer with 15 years of experience – hehe)
- User control: Clearly defined data management options
Do they have a dark side? So far, Anthropic has not had any major privacy mishaps. Maybe because they are relatively new to the market, or maybe they really do take the topic seriously. Time will tell whether they maintain this standard once growth pressure kicks in.
ChatGPT (OpenAI): King of conversation, but with baggage
OpenAI is the pioneer that took GenAI mainstream. But with greatness come great… privacy problems.
Bright sides:
- Control options: users can opt out of having their data used for training (theoretically)
- Retention policies: specific data-storage periods (abuse-monitoring logs are stored for up to 30 days by default)
- Security: solid technical safeguards
Darker chapters in their history:
- Incident from March 2023: a bug in the system allowed some users to see fragments of other people's conversation histories. Oops!
- Clarity: the privacy policy is a test of patience – long, complicated and full of legal jargon
- Default settings: until recently, data was used for training unless the user actively opted out
- Handing conversation histories to courts: Altman himself publicly admits that, if required, OpenAI provides data to law enforcement agencies
The real purpose of data collection is clear, because OpenAI openly admits that it uses data to improve its models, although it gives users the option to opt out. The problem? Many users do not know about this option or do not know how to use it. This, in turn, leads to serious consequences when a user shares confidential information (e.g. company reports) with ChatGPT.
Grok (X/Twitter): Young, wild and… unpredictable
Elon Musk's latest child in the world of GenAI. Grok is a relatively fresh player, but it has already stirred controversy, for instance by insulting politicians or… siding with Sam Altman.
What it offers:
- Integration with X: Access to real-time data from the platform
- "Uncensored answers": marketing aimed at users tired of the "political correctness" of other AIs
- A high degree of chatbot personalization: particularly visible in Grok 4
Red flags:
- Lack of transparency: Grok's privacy policy is a mixture of generalities and references to the X/Twitter policy
- Using data from X: if you have an account on X, your posts can be used for Grok's training – often without clear notice
- X's track record: the platform has had numerous controversies related to privacy and content moderation
Grok's biggest problem is the "privacy DNA" inherited from X/Twitter, a platform not known for best practices in this field. The mere fact that today you can ask Grok about any tweet posted years ago gives you a lot to think about. A witch hunt? All you need is a prompt asking whether a given user has ever posted something compromising in such-and-such a vein. GenAI tools like this make it very easy to sow disinformation or fuel specific social moods.
Who wins the race for your privacy among the GenAI giants?
| Aspect | Claude | ChatGPT | Grok |
|---|---|---|---|
| Policy transparency | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |
| Default settings | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |
| User control | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐ |
| Security track record | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |
| Data use | Minimal | Moderate | Unclear |
How do they really use your data?
Claude: Anthropic declares that it does not use conversations for training without permission. Data is stored mainly for safety and service-improvement purposes.
ChatGPT: uses data for model training (unless the user has opted out), safety analysis and product development. Plus, they may cooperate with external service providers.
Grok: it gets murky here. Conversation data can be used for training, but the details are couched in generalities. Plus there's access to your data from X – if you are there.
What does it mean to you?
First of all, it is worth remembering the golden rule: regardless of which GenAI you use, treat each conversation as if it could be read by someone else.
Practical steps:
- Read the privacy policy (yes, I know it hurts)
- Check the settings – turn off the use of your data for training
- Do not share sensitive data – card numbers, passwords, confidential business information
- Regularly clear your conversation history in the settings
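The "do not share sensitive data" rule can even be partially automated before a prompt ever leaves your machine. Below is a minimal sketch of that idea – the regex patterns and the `redact` helper are my own illustration, not part of any vendor SDK, and real PII detection needs far more than two regexes:

```python
import re

# Illustrative patterns only -- a real deployment would use a dedicated
# PII-detection library, not a pair of hand-rolled regexes.
PATTERNS = {
    # 13-16 digits, optionally separated by single spaces or hyphens
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    # a deliberately loose e-mail shape
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive-looking substrings with placeholders
    before the text is sent to any LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Invoice for jan.kowalski@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Invoice for [EMAIL REDACTED], card [CARD REDACTED].
```

A wrapper like this sits well in front of any chatbot API call: whatever the provider's retention policy turns out to be, a masked card number cannot leak.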
Privacy in the era of Genai
Although this sentence will sound like it was taken from a '90s sci-fi movie, in the world of GenAI privacy has become a luxury commodity. Claude seems to be the most privacy-conscious player, ChatGPT is slowly improving its approach, and Grok… well, Grok is still a Wild West driven by Elon Musk's ego.
The truth is, however, that each of these models collects data about you. The difference lies in how transparently they talk about it and how much control they give you. In the end, the question is not whether your data will be collected – it is whether you will have any control over it.
To sum up: use AI wisely, read privacy policies (at least their summaries) and remember that nothing on the internet is truly private. Even if they promise you it is.
This article is based on an analysis of publicly available privacy policies and industry reports. Companies' policies can change – always check the latest versions on their official websites.