Med-Gemini is a collection of AI models that can summarize health data, create radiology reports, analyze electronic health records, and more. The pre-print research paper, meant to demonstrate its value to doctors, highlighted a series of abnormalities in scans that radiologists “missed” but AI caught. One of its examples was that Med-Gemini diagnosed an “old left basilar ganglia infarct.” But as established, there’s no such thing.
Fast-forward about a year, and Med-Gemini's trusted tester program is no longer accepting new entrants, which likely means the model is now being piloted in real-life medical scenarios. It's still an early trial, but the stakes of AI errors are rising. Med-Gemini isn't the only model making them, and it's not clear how doctors should respond.
“What you’re talking about is super dangerous,” Maulin Shah, chief medical information officer at Providence, a healthcare system serving 51 hospitals and more than 1,000 clinics, tells The Verge. He added, “Two letters, but it’s a big deal.” —The Verge