
Google removes some AI health summaries after Guardian investigation finds risk to users

The search giant has withdrawn certain AI Overviews following evidence that inaccurate and misleading health information, including guidance on liver blood tests, could falsely reassure seriously ill patients.

by Ian Lyall
Photo by National Cancer Institute / Unsplash

Google has removed some of its artificial intelligence health summaries after a Guardian investigation found that the information could put users at risk.

The summaries, known as AI Overviews, appear at the top of Google search results and use generative artificial intelligence to provide quick explanations on medical topics. Google has repeatedly said the feature is helpful and reliable. The investigation found that, in some cases, it was neither.

One example involved liver blood tests. Searches for the normal range of liver function tests produced AI summaries filled with figures but little explanation. The information did not account for age, sex, ethnicity or clinical context. Medical experts said this could lead people with serious liver disease to believe their results were normal and delay follow-up care.

Following the findings, Google removed AI Overviews for searches such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”. A Google spokesperson said the company does not comment on individual removals, but that it makes improvements where AI summaries lack context and takes action under existing policies.

Health charities welcomed the change but warned the underlying problem remains. Vanessa Hebditch of the British Liver Trust said that slightly different search terms can still trigger AI Overviews with similar risks. She said liver test results are complex and cannot be reduced to a simple list of numbers. Normal-looking results can still mask serious disease.

The investigation found that variations such as “liver test reference range” continued to surface AI-generated summaries. Experts said these presentations could offer false reassurance and discourage people from seeking medical advice.

Patient advocates said trust in health information is fragile. Sue Farrington of the Patient Information Forum said many inaccurate AI summaries remain visible. She argued that Google should do more to direct users to evidence-based information and trusted health organisations.


Google said its clinicians had reviewed other examples and found that many summaries were supported by reputable sources. It added that AI Overviews appear only when the system has high confidence in their quality.

Critics said health searches demand a higher standard. Errors, they warned, can carry serious consequences.

