Leaking Secrets with AI: The Hidden Risks of ChatGPT’s Share Feature

If you’ve ever clicked “Share” on a ChatGPT conversation, you might have unknowingly made it searchable on Google.

The share feature makes it incredibly easy to show your AI conversations to others. But there’s a catch most people don’t realize: shared conversations marked as “discoverable” can be indexed by search engines.

Not Every Shared Chat Is Public

Before you panic, read on: not all shared conversations are public, only those explicitly marked as “discoverable.” But once that box is checked, the conversation becomes part of the searchable web. That means your carefully crafted prompt, your code snippet, your resume, or even your company’s internal strategy discussion could surface in Google search results under the right keywords.

How These Chats End Up on Google

When you share a ChatGPT conversation, ChatGPT generates a public link with a long, random-looking ID, something like chatgpt.com/share/00112233-4457-889a-bbcd-ddeeff. These links have always been publicly accessible to anyone who had them, but the chances of someone guessing or stumbling across one were practically zero.
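Why were those odds practically zero? If we assume the share ID is a random version-4 UUID (an assumption on our part; OpenAI doesn’t document the format), a quick back-of-the-envelope calculation shows the scale:

```python
# Assumption: the share ID is a random version-4 UUID, in which
# 122 of the 128 bits are random. OpenAI does not document the
# actual format, so treat this as an order-of-magnitude sketch.
random_bits = 122
keyspace = 2 ** random_bits

# Even at a billion guesses per second, the expected time to hit
# one specific link by brute force is astronomical.
guesses_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years_to_guess = keyspace / guesses_per_second / seconds_per_year

print(f"keyspace: 2^{random_bits}")
print(f"expected brute-force time: ~{years_to_guess:.1e} years")
```

In other words, it’s discoverability, not guessability, that puts these chats at risk.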

That changed when OpenAI introduced the “Make this chat discoverable” option. If a user checks that box, they’re giving search engines like Google permission to index the chat. Once indexed, the conversation becomes searchable by keywords and can now show up in Google search results just like any blog post or news article.

So while OpenAI isn’t leaking your data, the combination of user-shared links and search engine indexing is what’s making sensitive content easier to find. The infrastructure hasn’t changed; the visibility has.

What Google Can Already See

And this isn’t theory. A Google search using the query site:chatgpt.com/share "keyword" reveals dozens of indexed conversations in which users unknowingly exposed sensitive data: API keys, real staging URLs, resumes, internal documents, and personal emails.

Note: All findings and search results mentioned in this article are accurate as of August 1, 2025.

A Live Example

To see this in action, we ran a Google search for site:chatgpt.com/share api_key. The results speak for themselves.

leaked API key

This Was Never Meant for Google

Among the indexed ChatGPT conversations, some are so personal they stop you cold. This was one of them.

Not sure if this was intended to be public on the web

It’s a reminder: just because ChatGPT will listen doesn’t mean the rest of the internet should. Share wisely.

How to Protect Yourself

So if you’ve ever shared a conversation, it’s worth revisiting it. You can manage your shared links from the ChatGPT settings panel and revoke any of them with a single click. It’s also worth thinking twice before pasting anything sensitive into ChatGPT at all, especially when collaborating or troubleshooting. Treat it like you would a tweet: if you wouldn’t be comfortable seeing it on Google, don’t share it.

It's Not Just ChatGPT: Other AI Models Have Share Links That Get Archived Too

While ChatGPT’s share feature is the focus of this post, this behavior is not unique to ChatGPT. Many modern AI tools, including Claude AI, Perplexity, Poe, and Bing Copilot, offer public shareable links for chats or responses. These links are often simple URLs that, once shared publicly, can be indexed by search engines or archived permanently by tools like the Wayback Machine.

Take Claude AI, for instance. It allows users to share conversations via URLs in the format https://claude.ai/share/....

If you visit https://web.archive.org/web/*/https://claude.ai/share/*, you’ll find a list of Claude share links that have been archived, preserving snapshots of public conversations even after the original links are deleted or modified.
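The Wayback Machine also exposes this list programmatically through its CDX API. A minimal sketch for collecting archived claude.ai/share URLs into a urls.txt file (the endpoint and query parameters are the standard CDX interface; the result cap and output file name are our choices, and error handling is kept minimal):

```python
import requests

CDX_URL = "https://web.archive.org/cdx/search/cdx"

def parse_cdx_rows(rows):
    """CDX JSON output: the first row is a header, the rest are captures."""
    return sorted({row[0] for row in rows[1:]})

def fetch_archived_share_urls():
    params = {
        "url": "claude.ai/share/*",
        "output": "json",
        "fl": "original",      # only return the originally captured URL
        "collapse": "urlkey",  # de-duplicate repeated captures of one URL
        "limit": "2000",       # cap results for this sketch
    }
    resp = requests.get(CDX_URL, params=params, timeout=30)
    resp.raise_for_status()
    return parse_cdx_rows(resp.json())

if __name__ == "__main__":
    urls = fetch_archived_share_urls()
    with open("urls.txt", "w") as f:
        f.write("\n".join(urls))
    print(f"Found {len(urls)} archived share links")
```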

Finding shared Claude chats

We then pulled all of these URLs locally and wrote a quick Python script to download them all.

				
import requests
import re
import os
import time

# Read URLs from file
with open("urls.txt", "r") as f:
    urls = set(line.strip() for line in f if line.strip())

# Claude API base info
ORG_ID = "<<SNIPPED>>"
API_TEMPLATE = f"https://claude.ai/api/organizations/{ORG_ID}/chat_snapshots/{{share_id}}?rendering_mode=messages&render_all_tools=true"
COOKIES = {"sessionKey": "<<SNIPPED>>"}
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:141.0) Gecko/20100101 Firefox/141.0"}

# Make sure the output directory exists before writing snapshots into it
os.makedirs("output", exist_ok=True)

# Extract share IDs and fetch responses
for url in urls:
    match = re.search(r"/share/([a-f0-9\-]+)", url)
    if not match:
        print(f"[!] Invalid URL format: {url}")
        continue
    share_id = match.group(1)
    api_url = API_TEMPLATE.format(share_id=share_id)
    try:
        response = requests.get(api_url, headers=HEADERS, cookies=COOKIES)
        if response.status_code == 200:
            with open(f"output/{share_id}.json", "w", encoding="utf-8") as out:
                out.write(response.text)
            print(f"[+] Saved: {share_id}.json")
        else:
            print(f"[!] Failed {share_id}: HTTP {response.status_code}")
    except Exception as e:
        print(f"[!] Error fetching {share_id}: {e}")
    time.sleep(5)  # Be polite to the API

With all the chat data in hand, we could now start querying it for interesting information. We found:

  • API keys

  • JWT secrets

A JWT secret we found

  • Very strange sexual content

  • And many more concerning chats
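Finding material like the above doesn’t require anything sophisticated. A minimal sketch of a regex pass over the downloaded snapshots (the patterns below are illustrative examples of ours, not the exact rules we used; dedicated secret scanners ship far larger and more precise rule sets):

```python
import re
from pathlib import Path

# Illustrative patterns only: an AWS access key ID prefix, the
# three-segment base64url shape of a JWT, and a generic "api_key"
# assignment followed by a long token.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "jwt": re.compile(r"eyJ[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+"),
    "generic_api_key": re.compile(
        r"api[_-]?key\W{0,3}[A-Za-z0-9_\-]{16,}", re.IGNORECASE
    ),
}

def scan_text(text):
    """Return {pattern_name: [matches]} for every pattern that fires."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits

# Scan every downloaded snapshot and report which patterns fired.
for path in Path("output").glob("*.json"):
    hits = scan_text(path.read_text(encoding="utf-8"))
    if hits:
        print(f"[!] {path.name}: {sorted(hits)}")
```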

Closing thoughts

We’re at a point where AI is reshaping how we think, code, write, and build. But that power comes with an old, familiar cost: What we publish, even unintentionally, can outlive us online.

About the Authors

Yael Ball

Yael Ball is an ethical hacker with over three years of experience in web and network penetration testing. She studied Computing and IT at the Open University UK and graduated from the CyberWise Cyber Security Course with Honors and Distinction, and is CompTIA Security+ certified. Yael sharpens her skills through platforms like HackTheBox. Outside of work, she’s a proud #boysmama to four sons.

Yael

Robbe Van Roey

Robbe Van Roey is a security consultant with 6 years of experience in the cybersecurity field. During this time, he has become an expert in web application and network penetration testing by responsibly disclosing vulnerabilities, engaging in bug bounty, competing in hacking competitions, and performing penetration tests. He has identified vulnerabilities in major products and organizations such as Google Chrome, Amazon, NVIDIA, Corsair, and LastPass.

Robbe Van Roey
