
In a rare and worrying incident, a man in the United States suffered serious bromide poisoning after following diet advice he said he received from ChatGPT. Doctors believe this could be the first known case where bromide poisoning is linked to AI advice, according to a report by Gizmodo.

The case was described by doctors at the University of Washington in the journal Annals of Internal Medicine: Clinical Cases. They said the man had been taking sodium bromide for three months, thinking it was a safe substitute for table salt (sodium chloride). He reportedly got this suggestion from ChatGPT, which did not warn him about the dangers.

Bromide compounds were once used in medicines for anxiety and insomnia, but regulators phased them out decades ago because of their toxic side effects. Today, bromide is mostly found in veterinary medicines and certain industrial products. Human cases of bromide poisoning, also known as bromism, are now extremely rare.

The man first went to the hospital because he believed his neighbour was poisoning him. His vital signs were mostly normal, but he was paranoid, refused water even though he felt thirsty, and was having hallucinations. His condition quickly worsened, and he went into a severe psychotic state. Doctors placed him under an involuntary psychiatric hold for his safety.

After being treated with intravenous fluids and antipsychotic medicines, his condition improved. Once stable, he told doctors that he had asked ChatGPT for alternatives to table salt, and the AI had suggested bromide as a safe option. The man followed this advice, not knowing it was harmful.

The doctors did not have his original chat records, but when they later asked ChatGPT the same question, it again mentioned bromide without warning that it was unsafe for people.

Experts say this case shows the risk of relying on AI for health advice. AI can share scientific information, but it does not always give proper warnings or consider real-world safety. The man recovered fully after spending three weeks in hospital and was healthy at a follow-up check. Doctors have reminded people that AI should never replace medical professionals, and, as this case shows, it can sometimes give dangerously wrong guidance.
