Updated 6 August 2025 at 13:15 IST
Google's Medical AI Made Up a Brain Part That Does Not Even Exist
The AI said with confidence that a patient had an "old left basilar ganglia infarct." The term "basilar ganglia" sounds like real medical vocabulary, but no such brain structure exists.

What happens when an AI trying to work out what's wrong with a patient ends up inventing a section of the brain? In a research paper published in 2024, Google's healthcare AI model, Med-Gemini, did just that.
The AI stated with confidence that a patient had an "old left basilar ganglia infarct." The term "basilar ganglia" sounds like real medical vocabulary, but no such structure exists. This is not a minor slip: the brain region is entirely made up. The AI most likely meant the basal ganglia, a genuine and crucial part of the brain that helps regulate movement and emotion, but conflated it with the basilar artery, a major blood vessel supplying the brainstem. The result was a made-up condition that slipped past review and was published as a real diagnosis in a medical journal. Even more worrying, Google's team of more than 50 authors, including medical specialists, did not flag it.
According to a report in The Verge, Dr Bryan Moore, a neurologist, spotted the mistake and raised the alarm on LinkedIn, after which Google quietly corrected the error in its blog post. The research paper itself, however, has not been fixed. Google called the mistake a simple "misspelling," but many experts say that explanation falls short. In healthcare, accuracy is everything.
When an AI starts inventing terms, it can cause misunderstanding, misdiagnosis, and even patient harm. Health practitioners consider this kind of error extremely dangerous: in a medical report, mixing up even two letters can have serious repercussions.
Google describes Med-Gemini as “next-generation models fine-tuned for the medical domain.” It is built upon Google’s Gemini models by fine-tuning on de-identified medical data while inheriting Gemini’s native reasoning, multimodal, and long-context abilities.
Med-Gemini is not the only Google medical model to stumble. Its companion, MedGemma, has also behaved oddly at times: when tested, it returned different answers for the same X-ray image depending on how the question was phrased. In one case it accurately identified a condition; in another, it missed it entirely.
Experts say these AI tools can be useful, but they are not perfect and often sound more confident than they should be, and that is where the real danger lies. If doctors begin to accept AI output without question, mistakes like these can easily carry over into real-world patient care.
Published By: Priya Pathak
Published On: 6 August 2025 at 13:15 IST