AI Health Roundup – December 12, 2025

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

In this week’s Duke AI Health Friday Roundup: the merits of explainable AI in healthcare; PFAS chemicals and infant health; ambient scribing systems and clinician burnout; links between tattoos and immune response; blood tests for cancer; use of problematic database leads to scores of paper retractions; using AI to review scientific papers; autoencoders for high-dimensional datasets; more:

AI, STATISTICS & DATA SCIENCE

Detail from a medieval illuminated manuscript showing a scribe at work at his desk. Image courtesy National Library of Wales via Wikipedia.
  • “What would it take for AI scribing tools to truly improve productivity? Documentation burden represents the tip of the iceberg. Clinicians also spend time between patient visits following up on patient communications, medical orders, test results, insurance prior authorizations, and billing. Transformative improvements in productivity cannot happen until time is freed up on these tasks.” An editorial in NEJM AI by Kim and colleagues parses the findings from a pair of randomized trials – one by Afshar and colleagues and one by Lukac and colleagues – of ambient AI healthcare scribing systems.
  • “Using the Describe-Predict-Explain framework, we argue that Explainable AI methods are good descriptive tools, as they may help to describe how a model works but are limited in their ability to explain why a model works in terms of true underlying biological mechanisms and cause-and-effect relations. This limits the suitability of explainable AI methods to provide actionable advice to patients or to judge the face validity of AI-based models.” A commentary published in the BMC journal Diagnostic and Prognostic Research by Carriero explores the benefits and limits of explainable AI for healthcare applications.
  • “Autoencoders are a class of neural networks that learn to represent complex data in a more compact form. During training, the model compresses the input into a smaller set of features, often called latent representations, and then attempts to reconstruct the original data from this reduced form.” A news and views article published in Nature Computational Science by Wang and Zhang expands upon a paper recently published in the journal by Joas and colleagues that presents an autoencoder benchmarking framework for high-dimensional biological datasets.
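The compress-then-reconstruct idea quoted above can be illustrated with a minimal sketch: a linear autoencoder trained by gradient descent on synthetic low-rank data. This is a toy illustration only; the data, dimensions, and training loop here are hypothetical and are not drawn from the Joas et al. framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_latent = 200, 50, 5

# Synthetic "high-dimensional" data with hidden low-rank structure.
basis = rng.normal(size=(n_latent, n_features))
X = rng.normal(size=(n_samples, n_latent)) @ basis
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Encoder and decoder weights (a linear autoencoder, for simplicity).
W_enc = rng.normal(scale=0.1, size=(n_features, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_features))

lr = 0.01
losses = []
for _ in range(500):
    Z = X @ W_enc        # compress input into latent representation
    X_hat = Z @ W_dec    # attempt to reconstruct the original data
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Gradient-descent updates for reconstruction error (up to a constant
    # factor absorbed into the learning rate).
    W_dec -= lr * (Z.T @ err) / n_samples
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / n_samples

print(f"initial loss: {losses[0]:.3f}, final loss: {losses[-1]:.3f}")
```

Because the synthetic data are genuinely rank-5, the 5-dimensional latent space can capture them well and the reconstruction loss falls as training proceeds; real autoencoders stack nonlinear layers around the same compress/reconstruct loop.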
  • “Current methods for training VLM judges mainly rely on large-scale human preference annotations. However, such an approach is costly, and the annotations easily become obsolete as models rapidly improve. In this work, we present a framework to self-train a VLM judge model without any human preference annotations, using only self-synthesized data. Our method is iterative and has three stages: (1) generate diverse multimodal instruction-response pairs at varying quality levels, (2) generate reasoning traces and judgments for each pair, removing the ones that do not match our expected quality levels, and (3) training on correct judge answers and their reasoning traces.” In a preprint article available from arXiv, Lin and colleagues present a framework for self-training a vision-language model (VLM) judge without the use of human-annotated preference data.
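The three-stage loop quoted above can be sketched as plain control flow. Every function body below is a hypothetical stand-in (a real implementation would call a vision-language model at each stage); the sketch only shows how generation, judging-with-filtering, and training fit together.

```python
import random

random.seed(0)

def generate_pair(quality):
    """Stage 1 stand-in: synthesize an instruction-response pair at a target quality level."""
    return {"instruction": "describe the image",
            "response": f"response-{quality}", "quality": quality}

def judge(pair):
    """Stage 2 stand-in: produce a reasoning trace and a judged quality level."""
    # A noisy judge that usually agrees with the intended quality level.
    judged = pair["quality"] if random.random() < 0.8 else random.choice(["low", "high"])
    return {"trace": f"reasoning about {pair['response']}", "judged": judged}

def train_on(examples):
    """Stage 3 stand-in: 'train' on verified judgments by counting them."""
    return len(examples)

training_set = []
for quality in ["low", "high"] * 50:            # stage 1: varied quality levels
    pair = generate_pair(quality)
    verdict = judge(pair)                       # stage 2: judge each pair...
    if verdict["judged"] == pair["quality"]:    # ...keeping only matching judgments
        training_set.append((pair, verdict["trace"]))

n_trained = train_on(training_set)              # stage 3: train on what survived the filter
print(f"kept {n_trained} of 100 self-synthesized examples")
```

The filter in stage 2 is what lets the loop bootstrap without human labels: because the pairs are generated at known quality levels, judgments that contradict the intended level can be discarded automatically before training.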

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Closeup photograph of a colorful tattoo of a lizardlike creature on a person’s leg. Image credit: Andres Medina/Unsplash
  • “Our work suggests that most unretained ink from the skin disseminates via the lymphatic system and accumulates in the medullary part of the dLN [draining lymph node] in the initial instances following tattooing. Moreover, we reported a progressive increase of ink pigment observed in the dLN 2 mo posttattoo, probably associated with constant draining from the tattooed site at the skin. This is particularly relevant considering that the puncturing of dermal blood vessels during the tattooing process in humans might also contribute to disseminating the ink via the bloodstream.” A research paper published in PNAS by Capucetti and colleagues finds that tattoo ink can create inflammation in lymph nodes and affect immune response to vaccinations.
  • “There are many ways that the low quantity of cell-free tumor DNA can be amplified, with priming agents to improve the signal-to-noise ratio and accuracy. But all these diverse and refined tests would similarly benefit by assessment in high-risk individuals rather than just using age as the method of selection, copying the flawed and wasteful practice of our current mass screening policies.” At his Ground Truths Substack, Eric Topol dissects results from a recent clinical trial of a blood test for cancer (and also scrutinizes the media response to those results).
  • “We show that New Hampshire mothers whose drinking water wells were downstream of PFAS releases had more extremely low-weight births, more extremely preterm births, and higher infant mortality than did mothers whose wells were upstream of PFAS releases….Extrapolating to the rest of the United States, PFAS impose billions of dollars of costs on U.S. residents each year by worsening infant health.” A research article published in PNAS by Baluja and colleagues finds associations between drinking-water wells located downstream of PFAS “forever chemical” releases and a number of alarming health outcomes for neonates.

COMMUNICATIONS & POLICY

A small toy robot with wheels and illuminated eyes sits, partly in shadow, on a flat surface against a dark background. Image credit: Jochen van Wylick/Unsplash
  • “LLMs can check statistics, catch plagiarism and verify citations; this contribution alone could be transformative. If routine work is offloaded to a computer, human attention — the scarcest resource in science — can be reserved for what matters most. But LLMs have limits, too. Trust an AI reviewer beyond those guard rails, and it might quickly become a liability.” A viewpoint article in Nature by Giorgio F. Gilestro wrestles with the fast-accelerating phenomenon of AI-generated reviews of scientific manuscripts.
  • “The papers attempted to train neural networks to distinguish between autistic and non-autistic children in a dataset containing photos of children’s faces. Retired engineer Gerald Piosenka created the dataset in 2019 by downloading photos of children from ‘websites devoted to the subject of autism,’ according to a description of the dataset’s methods, and uploaded it to Kaggle, a site owned by Google that hosts public datasets for machine-learning practitioners….The dataset contains more than 2,900 photos of children’s faces, half of which are labeled as autistic and the other half as not autistic.” The Transmitter’s Calli McMurray reports on the retraction of a large tranche of papers published in Springer Nature journals (with many more retractions pending) due to the use of a methodologically and ethically suspect dataset.
  • “Animals that live in big groups, from baboons to termites, are constantly communicating information to each other — creating the potential for misinformation to creep in…But animals are not the only organisms that exchange information. Bacteria send signals to each other about their environment, using the information to mount collective defenses against attacks. Inside our bodies, the cells of our immune system stay in constant communication as they ward off diseases.” In an article for the New York Times, science writer Carl Zimmer looks at recent research revealing the phenomenon of misinformation in the natural world.