Image: An ornate glass jar filled with sodium bromide next to table salt on a kitchen countertop.

Misguided Dietary Choices: A ChatGPT User’s Dangerous Encounter with Sodium Bromide

Misguided Dietary Choices Lead to Serious Health Risks

A man recently experienced serious health complications after using ChatGPT for dietary advice. The 60-year-old, concerned about his health, sought to eliminate table salt from his diet and turned to the language model for alternatives. That decision ended in a hospital stay for chemical poisoning, as documented in a new case study published in the Annals of Internal Medicine.

AI’s Troubling Dietary Recommendations

In his quest to reduce sodium chloride intake, the man followed a suggestion from ChatGPT to replace it with sodium bromide. He adhered to this recommendation for three months, unaware that sodium bromide is toxic when ingested and is primarily used in industrial settings such as cleaning and manufacturing.

The model likely did not present sodium bromide as a dietary substitute outright. Rather, it may have connected the two substances based on their chemical similarity, illustrating the inherent risks of relying on AI for medical advice.

The Toxic Effects of Sodium Bromide

Sodium bromide, once used as an anticonvulsant and sedative, is now recognized as toxic when ingested. According to the National Institutes of Health, long-term exposure can lead to bromism, a condition characterized by a wide range of adverse symptoms.

Upon arriving at the hospital, the man presented with a range of alarming symptoms, including fatigue, insomnia, poor coordination, facial acne, and cherry angiomas, the red skin bumps associated with bromism. He also showed signs of paranoia, suspecting that his neighbor was attempting to poison him.

Severe Symptoms and Hospitalization

As treatment progressed, the man experienced auditory and visual hallucinations. His condition deteriorated to the point that he was placed on a psychiatric hold after attempting to flee the hospital. Staff administered intravenous fluids and electrolytes and prescribed antipsychotic medication to manage his escalating symptoms.

After three weeks of monitoring and treatment, the man was discharged from the hospital, but his experience raises serious concerns about the use of AI for health-related inquiries.

The Role of AI in Healthcare Advice

The case study authors emphasized the potential for AI systems like ChatGPT to inadvertently contribute to adverse health outcomes. Because the conversation logs were unavailable, the researchers noted the difficulty of assessing exactly what the AI recommended and how accurate its advice was.

They noted that it is