Imagine that your most private conversations with ChatGPT (about, say, alcohol problems, past trauma, or trouble in a relationship) suddenly appear in Google search results. Sounds like the script of a privacy horror film? Unfortunately, this is our reality.
A recent Fast Company investigation shows that Google has indexed thousands of ChatGPT conversations that users "shared" with friends or family. What was supposed to be a private exchange can now be read by anyone who knows the right search keywords. And this is just the tip of the iceberg when it comes to OpenAI's LLM!
ChatGPT like a game of telephone?
It all starts with the seemingly innocent "Share" button in ChatGPT. Do you use it to send a conversation link to a friend on WhatsApp? Congratulations, because you may have just handed your private conversation to the entire internet.
How is this possible? When you click "Share", ChatGPT creates a public link, and Google, being what it is (the ubiquitous web indexer, in operation for a quarter of a century), diligently catalogs that content. A few simple search queries are enough to surface almost 4,500 such conversations. A lot? There will be more.
What do people really tell AI? Is ChatGPT replacing the therapist?
The data is terrifying. According to research, almost half of Americans use AI chatbots for psychological support. Three-quarters seek help with anxiety, two-thirds look for advice on personal matters, and 60% ask about depression.
In the indexed conversations you can find detailed accounts of:
- Addiction problems
- Experiences of physical violence
- Serious mental disorders
- Sex life and problems in relationships
- Childhood traumas
The difference between ChatGPT and a real therapist? Your psychologist will not leave the session transcript on an open internet shelf. And, half jokingly, half seriously: treating LLMs as therapists is a profound misunderstanding. Large language models are designed to always give the user a satisfying answer. This means that ChatGPT, or any other LLM, will almost always try to keep the conversation pleasant and friendly. This is also why ChatGPT or Gemini hallucinate, i.e. make facts up: producing an answer that pleases is the overriding goal. Does a therapist behave like that? Never setting boundaries, never asking questions, never challenging the patient's claims? That is not how a therapist works; a therapist pushes back, knowing that the wrong answer can have fatal consequences.
Altman himself warns (a bit too late ;))
OpenAI CEO Sam Altman himself recently admitted that users should not share their most personal details with ChatGPT, because the company "may be forced to disclose them" at a court's request. The problem is that he did not mention that conversations can also be voluntarily opened up to indexing by Google.
How to protect yourself? A practical guide
1. Never click “Share” without thinking
Before you share a conversation, think twice. That link can reach not only your friend, but also Google, and through it millions of people.
2. Remove the shared links
If you have already clicked "Share", you can delete the shared link in the ChatGPT settings. Do it as soon as possible.
3. Limit personal details
Do not share information that could identify you: full name, address, job details, specific dates and events.
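If you paste longer texts into a chatbot, it can help to strip the obvious identifiers first. A minimal, illustrative sketch in Python (the function and patterns are my own; simple regexes catch only the most obvious identifiers, not all of them):

```python
import re

# Illustrative patterns only: emails, phone numbers, and dates.
# Real anonymization needs far more than regexes.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "[DATE]": re.compile(r"\b\d{1,2}[./-]\d{1,2}[./-]\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before pasting into a chatbot."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Write to jan.kowalski@example.com or call +48 601 123 456."))
```

This is a seatbelt, not armor: it will not catch names, addresses, or context that identifies you indirectly, so the general rule from this step still applies.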
4. Use nicknames
If you have to refer to specific people, use fictitious names or initials.
5. Check your “traces”
You can check whether your conversations have ended up on Google by searching for distinctive fragments of your ChatGPT chats.
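One way to do this is Google's `site:` operator combined with an exact-phrase search. A small sketch that builds such a query URL (the `chatgpt.com/share` path is my assumption about where shared links live; OpenAI may change it):

```python
from urllib.parse import quote_plus

# Assumption: publicly shared ChatGPT conversations sit under this path.
SHARE_SITE = "chatgpt.com/share"

def leak_check_url(phrase: str) -> str:
    """Build a Google search URL looking for an exact phrase inside indexed shared chats."""
    query = f'site:{SHARE_SITE} "{phrase}"'
    return "https://www.google.com/search?q=" + quote_plus(query)

print(leak_check_url("a very specific sentence from my chat"))
```

Open the printed URL in a browser with a phrase only you would have written; zero results is the answer you want.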
6. Consider the alternative
If you need psychological support, see a real specialist bound by professional confidentiality. AI as a psychotherapist is not a solution!
Think before you share!
This situation is a symptom of a bigger problem in the tech industry. Companies ship features first and think about the privacy consequences only later. Meta did the same with its chatbot, exposing users' conversations in a public feed. It signals the approach to privacy we can expect from Big Tech, which is why conscious use of technology matters so much.
Your privacy is in your hands
Do the above facts mean that ChatGPT should be avoided entirely? Not at all! GenAI tools such as chatbots can be very helpful and boost our productivity. But that does not change the fact that, as a society, we must become more aware of both the opportunities and the threats that AI brings.
In the world of AI there is no such thing as "just between us" (well, unless you run an LLM locally and cut yourself off from the network). Every conversation can potentially become public. So keep your common sense and do not tell the machines anything you would not want to read tomorrow on Google News ;)
And if you have shared conversations in the past? It may be worth checking whether your secrets are wandering around the internet. After all, it is better to play it safe now than to pick your jaw up off the floor later.