In what doctors are calling a rare but serious case, a 60-year-old man landed in hospital after following diet advice provided by ChatGPT. Concerned about the potential health risks of table salt, the man asked the AI chatbot for alternatives. The tool suggested replacing sodium chloride entirely with sodium bromide — a chemical once used in sedatives but now largely banned for human consumption due to its toxicity.
From Curiosity to Crisis
The patient, described by doctors as having a keen interest in nutrition, had grown wary of the perceived health risks of sodium chloride and wanted it out of his diet. When he asked ChatGPT, OpenAI's chatbot, for substitutes, the suggestions reportedly included sodium bromide, a compound once common in sedatives and epilepsy treatments but banned from food use in many countries because of its toxic effects on the nervous system.
Trusting the recommendation, the man began substituting sodium bromide for table salt in his daily diet. Over three months the bromide accumulated in his body; by the time he sought care, blood tests showed dangerously low sodium levels, known as hyponatraemia, and a cascade of symptoms had begun.
The Alarming Onset of Symptoms
After admission to hospital, the patient's condition deteriorated rapidly. Within 24 hours he developed acute psychiatric symptoms, including paranoia, auditory and visual hallucinations, and a deep suspicion of hospital staff. His skin showed telltale signs of bromide toxicity, known as bromism: rashes and cherry angiomas.
Doctors launched an urgent treatment plan: correcting his electrolyte imbalance, flushing bromide from his system, and stabilising his mental state with psychiatric care. His hospital stay lasted three weeks, part of it in a psychiatric ward, before he was deemed medically fit for discharge.
When AI Crosses the Line From Information to Influence
The case, documented in Annals of Internal Medicine: Clinical Cases, a journal of the American College of Physicians, has raised sharp questions about the boundaries of AI in healthcare. While ChatGPT's own terms warn that its outputs "may not always be accurate" and should not replace professional medical advice, experts say the growing public reliance on AI for personal health decisions poses real risks.
Medical professionals stress that AI can offer general educational content, but it cannot account for an individual's health history, comorbidities, or contraindications. The incident underscores the need for public awareness campaigns and, potentially, regulatory oversight of AI-generated health information.
“This is a modern cautionary tale,” one physician involved in the case said. “Technology can assist, but it cannot replace the expertise, context, and accountability of a trained clinician.”