AI Health Roundup – August 29, 2025


The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.


In this week’s Duke AI Health Friday Roundup: explainable AI in healthcare; light pollution, air pollution, and cardiorenal health; effects of laws against surprise medical billing; model for flu vax development; can generative AI cope with retracted articles?; mapping microscopic tuberculosis “cities”; Mediterranean diet, genetics, and Alzheimer’s risk; new approach for brain-computer interfaces; integrating AI into peer review; much more:

AI, STATISTICS & DATA SCIENCE

Image shows lists of seemingly unrelated words refracted through slabs of glass. Image credit: Wes Cockx
  • “…we validated VaxSeer through computational surrogates (empirical coverage score) rather than actual vaccine effectiveness from population-based trials. However, the strong correlation between the empirical coverage scores and real-world effectiveness suggests that VaxSeer has the potential to help select vaccine strains with improved effectiveness.” A research article published in Nature Medicine by Shi and colleagues describes an antigenic match prediction model that outperformed historical annual recommendations for flu strains selected during seasonal vaccine development (H/T @smcgrath.phd).
  • “Although the issue of ChatGPT’s reporting of retracted academic research has not been investigated before, the results about the claims extracted from retracted articles align with previous evidence that it can be unreliable… In this context, the current study extends previous research by showing that it can sometimes report that retracted claims are true, even a long time after the retraction. Encouragingly, however, it was more cautious with high-profile health issues, not reporting the associated statement to be true.” A research article recently published by Thelwall and colleagues in Learned Publishing investigates whether ChatGPT 4o-mini can appropriately identify and manage retracted articles when tasked with conducting literature reviews.
  • “For many stakeholders, relevant explanations are causal in nature, yet, explainable AI methods are often not able to deliver this information. Using the Describe-Predict-Explain framework, we argue that Explainable AI methods are good descriptive tools, as they may help to describe how a model works but are limited in their ability to explain why a model works in terms of true underlying biological mechanisms and cause-and-effect relations. This limits the suitability of explainable AI methods to provide actionable advice to patients or to judge the face validity of AI-based models.” A preprint article by Carriero and colleagues, available from arXiv, examines the relative merits of explainable AI in healthcare contexts (H/T @moorejh.bsky.social).
  • “Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him…But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it.” A horrific New York Times story by Kashmir Hill describes a teenage boy’s spiral into suicidality, continuously coached by a chatbot over a period of months.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Nighttime photograph showing halos of light surrounding urban streetlights and car headlights. Image credit: Jacek Dylag/Unsplash
  • “In this prospective, national longitudinal cohort study of participants aged 45 years and above in China, we evaluated for the first time the comprehensive impacts of LAN [light at night] exposure, air pollutants, PM2.5 components, and their interactions on CKM syndrome. The results of the Cox regression hazard model after adjusting for covariates showed that there was a positive correlation between LAN and CKM syndrome. Among the air pollutants, except for O3, SO₄²⁻, and BC, the remaining pollutants were all positively correlated with CKM syndrome.” A study published in BMC Public Health by Liang and colleagues explores associations between light pollution, air pollution, and cardiorenal and metabolic dysfunction.
  • “And smale foweles maken melodye/That slepen al the nyght with open yë/(So priketh hem [streetlights] in hir corages)…”* As long as we’re on the topic of light pollution: a study published in Science by Pease and Gilbert suggests that growing light pollution is affecting bird activity: “Our analyses suggest that, on average, light pollution prolongs vocal activity of diurnal birds by nearly an hour. This prolonged activity could have negative, neutral, or positive fitness effects.”

             *[Mostly] from Chaucer’s Canterbury Tales, General Prologue.

  • “In some people with [tuberculosis], clusters of immune cells called granulomas form inside the lungs. These microscopic ‘cities’ can harbor the bacterium that causes TB, allowing it to persist and resist antibiotics…Now researchers at Duke University School of Medicine have used advanced genetic sequencing tools to show the identity of all the ‘residents’ of these cellular cities, pinpointing their exact locations and how they interact.” A web article by the Duke School of Medicine’s Angela Spivey highlights work at Duke University that opens up new possibilities in more effectively treating tuberculosis.
  • “Speech brain-computer interfaces (BCIs) can restore communication in individuals with neuromotor disorders who are unable to speak. However, current speech BCIs limit patient usability and successful deployment by requiring large volumes of patient-specific data collected over long periods of time…we show that speech BCIs can be trained on data combined across patients.” A research article by a group of Duke University researchers, available as a preprint from bioRxiv, presents a new approach to facilitating communication via brain-computer interfaces.
  • “…our study highlights the substantial influence of genetic variants, particularly APOE4 homozygosity, on plasma metabolites and their associations with ADRD [Alzheimer’s disease and related dementias] risk. These genetic effects are widespread across the plasma metabolome and our findings identify the MedDiet as a promising approach to mitigate genetically dependent ADRD risk by targeting a broad spectrum of metabolic pathways.” A research article published in Nature Medicine by Liu and colleagues finds a protective effect against Alzheimer’s and related dementias in high-risk persons with two copies of the APOE4 gene who consistently adhered to a Mediterranean diet.

COMMUNICATION, HEALTH EQUITY & POLICY

Stylized image of a building’s exterior, with people and figures visible inside and outside; clouds of network connections surround and fill the building, evoking the digital networked workplace.
Image credit: Jamillah Knowles & Digit/Better Images of AI/CC-BY 4.0
  • “A growing body of evidence indicates that publishers’ responses to notification of concerns about the integrity of publications in their journals are markedly inconsistent, both in the timing and the nature of editorial decisions. Median times to editorial decisions typically exceed 2 years. Equally disconcerting is the observation that, when faced with a common set of integrity concerns about publications from the same researchers, some publishers decide to retract much or all of the research and others take no action.” A research article by Grey and colleagues, in press at the Journal of Clinical Epidemiology, offers an analysis suggesting that scholarly publishers’ responses to questions about the integrity of research findings are often inadequate and inconsistent.
  • “Our communications infrastructures are beset by compounding challenges: floods of low quality, often machine generated, content; controversies surrounding the moderation of content; polarisation of users through large-scale propaganda and misinformation campaigns. These infrastructural challenges undermine our epistemic capacity – our ability to access, make use of, produce, and evaluate knowledge – and appear to be growing uncontrollably; they are cancerous to the social relations and groupings that our communications infrastructure mediates, and are implicated in our collective inability to address societal crises.” In a “curmudgeon corner” opinion article for the journal AI & Society, Glen Berman likens the societal impact of generative AI products to a carcinogenic substance.
  • “Our hope is that automating some aspects of peer review, at first, will help to relieve the need to complete rote tasks, allowing scarcer human expertise to focus on aspects such as impact and significance, novelty, and clinical relevance. We endeavor to improve both the quality and efficiency of the peer review process, all while keeping our hands on the wheel and our eyes on the road.” An editorial by Perlis and colleagues at JAMA addresses some of the questions that the use of generative AI tools poses for the future of the peer review process.
  • “In this difference-in-differences study, we found statistically significant reductions in out-of-pocket spending among adults with direct purchase private insurance who gained surprise billing protections under the NSA [No Surprises Act]. Declines in out-of-pocket spending did not vary across sociodemographic groups. In contrast, premium spending and high burden medical spending did not change after the NSA….Our study findings support anecdotal reports that the NSA has successfully shielded patients from surprise billing.” A study by Liu and colleagues, published in BMJ, finds benefits for US patients following the enactment of a law designed to curtail surprise medical billing.