Grok AI Exposes 370,000 User Chats in Major Data Leak
Elon Musk’s AI chatbot Grok has reportedly leaked some 370,000 user conversations after shared chat links on developer xAI’s website were indexed by search engines, according to a Forbes report.
The leak stemmed from a flaw in xAI’s platform. When users clicked the “share” button inside Grok, the system generated a unique link for the conversation. Because those links were listed in xAI’s sitemap, search engines such as Google later indexed and archived them, making the conversations easily accessible to the public.
The exposed chats varied widely, from routine requests such as drafting tweets and writing job applications to more alarming prompts, including terrorist attack scenarios, requests to insert fictional characters into violent scenes, and even instructions for hacking cryptocurrency wallets. Some conversations also contained sensitive personal details such as passwords, photos, and private documents.
Forbes noted that the leak included private exchanges where users sought health and mental health advice from the chatbot. One of those affected was British journalist Andrew Clifford, who had used Grok to summarize newspaper front pages and draft tweets for the account Sentinel Current. Clifford told Forbes he was unaware that his chats had been made searchable on Google.
The breach also revealed conversations that violated xAI’s usage policies, ranging from drug-related discussions to bomb-making instructions and even a step-by-step guide for assassinating public figures — including Elon Musk himself.
This isn’t the first time an AI company has faced such a crisis. Earlier this year, OpenAI experienced a similar incident with ChatGPT, though the issue was resolved quickly. In the wake of that episode, Grok’s official X account reassured users that their chats would remain strictly confidential — a promise now cast into doubt.