08/12/2025 / By Ava Grace
In an alarming case highlighting the dangers of relying on artificial intelligence (AI) for medical advice, a 60-year-old man developed severe psychiatric symptoms – including paranoia, hallucinations and delusions – after following diet recommendations from ChatGPT.
The incident was detailed in a report published Aug. 5 in Annals of Internal Medicine: Clinical Cases. The unnamed patient, drawing on nutrition courses he had taken in college, sought to eliminate chloride – a component of table salt – from his diet after reading about sodium chloride’s health risks.
Unable to find reliable sources recommending a chloride-free diet, he turned to ChatGPT. The chatbot allegedly advised him to replace chloride with bromide, a chemical cousin with toxic effects. For three months, the man consumed sodium bromide purchased online instead of table salt. (Related: Experts Dr. Sherri Tenpenny and Matthew Hunt Warn: AI may replace doctors, threatening medical freedom and privacy.)
By the time he arrived at the emergency department, he was convinced his neighbor was poisoning him. Doctors quickly identified his symptoms – psychosis, agitation and extreme thirst – as classic signs of bromism, a rare poisoning syndrome caused by excessive bromide exposure.
Bromism was far more common in the early 20th century, when bromide was a key ingredient in sedatives, sleep aids and over-the-counter medications. Chronic exposure led to neurological damage, and beginning in the 1970s, regulators phased out most medicinal uses of bromide due to its toxicity. While cases are rare today, this patient’s ordeal shows the syndrome hasn’t disappeared entirely.
His blood tests initially showed abnormally high chloride levels, but further analysis revealed pseudohyperchloremia – a falsely elevated reading caused by bromide ions interfering with the laboratory’s chloride assay. Only after consulting toxicology experts did doctors confirm bromism as the culprit behind his rapid mental decline. After weeks of hospitalization, antipsychotics and electrolyte stabilization, the man recovered.
The report’s authors later tested ChatGPT’s response to similar dietary queries and found the bot indeed suggested bromide as a chloride substitute – without critical context, warnings or clarification about its toxicity. Unlike a medical professional, the AI failed to ask why the user sought this substitution or caution against ingesting industrial-grade chemicals.
ChatGPT’s creator, OpenAI, states in its terms of use that the bot is not intended for medical advice. Yet users frequently treat AI as an authority, blurring the line between general information and actionable health guidance.
This case underscores the risks of trusting AI over professional healthcare guidance. It also serves as a cautionary tale for the AI era. “While AI has potential to bridge gaps in public health literacy, it also risks spreading decontextualized – and dangerous – information,” the report’s authors concluded.
With AI integration accelerating in healthcare – from symptom checkers to virtual nursing assistants – the risks of misinformation loom large. A 2023 study found that large language models frequently hallucinate clinical details, potentially leading to misdiagnoses or harmful recommendations. While tech companies emphasize disclaimers, cases like this reveal how easily those warnings get overlooked in practice.
As chatbots proliferate, experts urge users to verify health advice with licensed professionals. This case shows that the cost of skipping that step can be far steeper than the convenience of an instant answer is worth.
Watch Mike Adams, the Health Ranger, discuss the risks and benefits of AI in healthcare with Dr. Sherri Tenpenny and Matthew Hunt in this episode of the “Health Ranger Report.”
This video is from Health Ranger Report on Brighteon.com.