Last week, Fast Company lit a fire across the web when it broke the story that some ChatGPT conversations were being indexed by Google.
Cue the frantic Slack messages, calls, and rapid audits. My first concern? Whether our agency and clients were exposed. Thankfully, we weren’t—but the incident forced a necessary conversation into the spotlight: how we use AI at work and how we protect our data while doing so.
But after posting, I had to stop reading the comments. Some were so frustrating, so tone-deaf, that I felt compelled to respond properly – here.
One recurring argument in those comments: that this was the users’ fault for sharing. It’s not.
Misleading or unclear UI should never leave fault at the user’s door. OpenAI’s own CISO, Dane Stuckey, agrees:
“Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option… We’re also working to remove indexed content from the relevant search engines.”
– Business Insider, July 2025
The feature in question was described as “a short-lived experiment to help people discover useful conversations.” Let’s call it what it really was: a misstep in privacy handling.
Another argument: that the reaction is overblown. It’s not.
At what point did privacy and security become optional? When did protecting people’s search history, queries, and digital breadcrumbs become something we could just handwave?
If anything, the bigger risk is not making enough noise about this.
Consent means nothing if people don’t know what they’re consenting to. And right now, most don’t.
And to the marketers who spotted this loophole and saw an opportunity: shame on you.
We’ve all seen shady SEO tactics—keyword stuffing, link farming, hidden text. But deliberately mining sensitive queries to exploit this loophole for brand exposure? That’s a new low. Turning a blind eye doesn’t make it ethical.
Some of the indexed conversations were deeply personal. I won’t share them here out of respect, but it was obvious these queries were made in confidence, not as content.
If this had continued, I have no doubt it would have been penalised as a ranking tactic. And rightly so.
One of the most frustrating parts? The cleanup isn’t clean.
As Growtika highlighted in this piece, many of the ChatGPT conversations that were indexed by Google have now been deindexed—but that doesn’t mean they’re gone.
Archived versions still exist in the Wayback Machine and other internet archives. This means sensitive, accidental data exposures remain available to anyone who knows how to look.
So even if OpenAI has removed the feature and asked search engines to delist the links, the damage is already done. Once it’s on the internet, there’s no true “undo” button.
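For anyone auditing their own exposure, this is straightforward to verify. Below is a minimal sketch that checks the Internet Archive’s public Wayback Machine availability API for surviving snapshots of a URL (the helper names here are my own, and any URL you pass in is illustrative):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

WAYBACK_API = "https://archive.org/wayback/available"

def extract_snapshot(payload: dict):
    """Pull the closest available snapshot URL out of an API response dict.

    The availability API returns JSON shaped like:
    {"archived_snapshots": {"closest": {"available": true, "url": "..."}}}
    with an empty "archived_snapshots" object when nothing is archived.
    """
    closest = payload.get("archived_snapshots", {}).get("closest") or {}
    return closest.get("url") if closest.get("available") else None

def wayback_snapshot(url: str):
    """Query the Wayback Machine for an archived copy of `url`.

    Returns the snapshot URL if one exists, otherwise None.
    """
    with urlopen(f"{WAYBACK_API}?{urlencode({'url': url})}") as resp:
        return extract_snapshot(json.load(resp))
```

A page that returns `None` from search-engine caches can still return a live snapshot here, which is exactly the gap between “deindexed” and “gone”.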
This underscores just how crucial it is to treat anything typed into an AI tool as potentially permanent, and potentially public.
This incident isn’t just about OpenAI, Google, or SEO.
It’s about what kind of internet we want to build—and whether we’re serious about protecting the humans behind the data. AI can do extraordinary things, but we’ve got to stop treating it like a free-for-all.
Companies, leaders, marketers: get your policies in place. Rethink your ethics. And stop blaming users for system failures.
Because next time, it might just be your data out there.
A senior performance marketing strategist driving growth for SaaS, tech and cybersecurity brands across the globe. Specialising in search, SEO, GEO and PPC, Rachael has become the go-to partner for senior leaders seeking measurable growth. She is known for her analytical mindset, sharp execution and ability to build performance engines that deliver results.