Google Removes Some Dangerous AI Health Summaries—But Leaves Others Active

Google's response to a damning investigation into its AI health summaries reveals something uncomfortable: even when told exactly which AI outputs are dangerous, the company won't fix all of them. After The Guardian exposed that AI Overviews were delivering potentially life-threatening health misinformation, Google disabled some queries—but left others active, including advice that contradicts standard cancer treatment guidance.

This isn't a story about AI making mistakes. We've known that happens since Google's AI suggested putting glue on pizza. This is a story about what happens after the mistakes are found, documented, and handed to Google on a silver platter. The answer, apparently, is: selective correction.

The Dangerous Queries Google Did Fix

The Guardian's investigation centered on health queries where AI Overviews appear at the top of search results—the prime real estate that billions of users have been trained to trust. When users searched "what is the normal range for liver blood tests," Google's AI served up raw data tables listing specific enzymes like ALT, AST, and alkaline phosphatase.

The problem? These numbers lacked essential context. The AI didn't adjust for patient demographics—age, sex, and other factors that significantly affect what constitutes a "normal" range. Medical experts contacted by The Guardian called the results alarming. A patient with genuinely concerning liver function could look at Google's AI summary and conclude they're perfectly healthy.
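
To make the missing context concrete, here is a minimal sketch in Python. The thresholds and the adjustment rule are placeholders invented for illustration, not real clinical reference values, and the function names are hypothetical; the point is only that the same lab number can sit inside a flat, one-size-fits-all range while exceeding a limit adjusted for sex and age.

    # Illustrative only: placeholder thresholds, not real clinical reference values.
    FLAT_ALT_RANGE = (7, 56)  # the kind of flat "normal range" a summary table might show

    def alt_upper_limit(sex: str, age: int) -> float:
        """Hypothetical demographic-adjusted upper limit for ALT."""
        limit = 40.0 if sex == "male" else 33.0  # made-up sex-specific limits
        if age > 65:
            limit *= 1.1  # made-up age adjustment
        return limit

    def interpret(alt_value: float, sex: str, age: int) -> str:
        flat_ok = FLAT_ALT_RANGE[0] <= alt_value <= FLAT_ALT_RANGE[1]
        adjusted_ok = alt_value <= alt_upper_limit(sex, age)
        if flat_ok and not adjusted_ok:
            return "flat table says normal; adjusted limit says follow up"
        if flat_ok:
            return "within both the flat and the adjusted limits"
        return "outside the flat range"

    # A 40-year-old woman with an ALT of 50 looks fine against the flat table
    # but exceeds the sex-adjusted limit in this sketch.
    print(interpret(50, "female", 40))

The numbers are fabricated; the shape of the problem is not. A flat table cannot tell a worried reader which side of that gap they are on, which is exactly the concern the medical experts raised.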

This is the query Google disabled. Credit where due: when confronted with evidence that its AI was potentially convincing sick people they were fine, the company acted. But that's where the good news ends.

The Dangerous Queries Google Didn't Fix

The same investigation flagged another critical error: AI Overviews for pancreatic cancer were advising patients to avoid high-fat foods. This directly contradicts standard medical guidance. Pancreatic cancer patients often struggle to maintain weight, and medical professionals typically advise them to eat calorie-dense foods—including fats—to prevent dangerous weight loss.

Telling a pancreatic cancer patient to avoid fats isn't just wrong. It could accelerate their decline. Yet as of the Guardian's reporting, Google had not disabled this query. The AI continues to serve this potentially harmful advice to people searching for information about one of the deadliest cancers.

Despite these findings, Google only deactivated the summaries for the liver test queries, leaving other potentially harmful answers accessible.

Why fix one and not the other? Google hasn't explained its triage logic. But the selective response suggests something troubling: there's no systematic process for auditing AI health outputs, even when specific problems are reported.

The Scale Problem No One Wants to Talk About

Google AI Overviews appear on billions of searches. The company has positioned them as the future of search—summarizing information so users don't have to click through to sources. For many queries, this is convenient. For health queries, it's playing with fire.

Here's the math that should concern everyone: The Guardian found dangerous misinformation in the specific queries they investigated. They didn't conduct a comprehensive audit—they checked a handful of health topics. If two queries in a small sample contain life-threatening errors, what's the error rate across all health queries?

Google doesn't say. More concerning: Google may not know. The entire premise of generative AI is that outputs are generated on demand, not curated in advance. You can't manually review every possible response because the responses don't exist until someone asks.
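
The back-of-the-envelope version of that question is answerable, though. Here is a rough sketch, assuming a purely hypothetical spot check (the sample size below is invented, not The Guardian's), of how a handful of bad answers in a small sample translates into a confidence interval on the overall error rate:

    import math

    def wilson_interval(errors: int, sample_size: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for an error rate observed in a small spot check."""
        p_hat = errors / sample_size
        denom = 1 + z**2 / sample_size
        center = (p_hat + z**2 / (2 * sample_size)) / denom
        half = (z / denom) * math.sqrt(
            p_hat * (1 - p_hat) / sample_size + z**2 / (4 * sample_size**2)
        )
        return max(0.0, center - half), min(1.0, center + half)

    # Hypothetical numbers: 2 dangerous answers found in a spot check of 30 health queries.
    low, high = wilson_interval(errors=2, sample_size=30)
    print(f"plausible error rate: {low:.1%} to {high:.1%}")  # roughly 2% to 21% with these inputs

Nothing in that sketch is beyond Google's reach; a sampled audit with published error bars is cheap relative to the product. The point is that the uncertainty is measurable, if anyone chooses to measure it.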

This creates an accountability structure that essentially requires external investigators to find problems one at a time—and even then, fixes are applied selectively.

What Google's Response Tells Us About AI Deployment

The partial fix isn't just a Google problem. It's a preview of how AI accountability will work—or won't work—across the industry.

Consider the incentives. Google has bet heavily on AI Overviews. Removing them from health queries entirely would be an admission that the technology isn't ready for high-stakes domains. Fixing individual queries as they're flagged lets the company maintain the product while playing whack-a-mole with errors.

This is reactive safety, not proactive safety. Google waits for journalists or researchers to find problems, then addresses some of them. The burden of auditing AI falls on everyone except the company deploying it.

For other AI companies watching this unfold, the lesson is clear: you can ship first and fix later, selectively, as problems surface. The reputational cost of partial fixes is apparently lower than the cost of comprehensive pre-deployment testing or category-wide restrictions.

The Regulatory Vacuum

In traditional product liability, if you sell something that harms people, you're responsible. If a medical device gave incorrect readings, the FDA would intervene. If a drug's dangerous side effects surfaced post-market, recalls would follow.

AI health summaries exist in a regulatory gray zone. They're not medical devices. They're not medical advice (Google would surely argue). They're just search results with an AI-generated summary on top. But they appear authoritative. They're positioned above human-written content. And they're consumed by people in vulnerable moments—searching for health information because they're worried about their health.

The EU's AI Act classifies some health-related AI applications as high-risk, requiring conformity assessments. But search result summaries may slip through definitional cracks. In the US, there's no equivalent framework.

What Should Happen Next

Google should disable AI Overviews for all health queries until the company can demonstrate systematic accuracy verification. This won't happen voluntarily—the product is too central to Google's AI strategy.

Regulators should treat AI-generated health information with the same scrutiny applied to other health information sources. The medium shouldn't provide liability protection.

And users should understand that AI Overviews, despite their prominent placement, are not vetted health information. They're probabilistic text generation applied to life-or-death topics.

The Guardian caught Google's AI telling pancreatic cancer patients to avoid fats and serving liver test reference ranges that could convince sick patients their results were normal. They caught it because they looked. The terrifying question is what else is out there, in the billions of queries Google's AI answers, that no one has looked for yet.

Google's partial fix isn't a solution. It's an admission that they're not equipped to audit their own product—and they're deploying it anyway.
