AI Health Roundup – November 21, 2025


The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.


In today’s Duke AI Health Friday Roundup: practice licenses for AI?; drones deliver defibrillators fast; how to approach the need for continuous evaluation of predictive models; light pollution messes with ecosystem metabolism; basic AI competencies for docs; cancer vaccine trial shows encouraging results two decades later; getting arsenic out of well water yields health dividends; much more:

AI, STATISTICS & DATA SCIENCE

Image credit: Hanna Barakat & Archival Images of AI + AIxDESIGN / Better Images of AI/ CC-BY 4.0
  • “The developer would seek a core license for the base model, and the licensing authority would define a range of competencies relevant to the model’s intended scope of practice. Similar to the human practitioner, the model would need to meet minimum performance thresholds in each of these domains in the preclinical setting and then undergo a period of supervised training and practice in an accredited clinical environment.” In a research article published in JAMA Internal Medicine, Bressman and colleagues propose replacing the current “software as a medical device” regulatory framework for approving medical AI systems with one that mimics the process for practitioner licensing. An accompanying editorial by Rittenberg and colleagues raises some potential issues with this idea.
  • “…despite widespread AI adoption, doctors themselves think less of the abilities of their own peers using AI. The human-in-the-loop paradigm only reinforces the idea that physicians will effortlessly catch the mistakes of machines, playing into the doctor-hero tropes that featured prominently during the pandemic….In doing so, human-in-the-loop models shift responsibility onto people unprepared to shoulder it, with potential consequences. For a doctor, missing an error documented by an AI scribe in a patient’s medical history could follow that patient for years, while relying on a triage AI algorithm for sepsis could delay a patient’s care for a life-threatening condition.” An essay by Vishal Khetpal, published in STAT News, argues for greater focus on cultivating AI competencies in new physicians.
  • “In the rush to integrate artificial intelligence (AI) into clinical practice, a powerful and increasingly common class of predictive models is quietly proliferating. These models can be administrative tools that predict operational metrics, but are often clinical models that skirt the U.S. Food and Drug Administration (FDA)’s software as a medical device (SaMD) regulations.” A perspective article published in NEJM AI by Leuchter and colleagues wrestles with the complexities of continuous evaluation for predictive clinical models.
  • “Artificial intelligence (AI) is rapidly reshaping clinical knowledge, workflow, and relationships, and it is doing so at a pace that presses the medical humanities to reinterpret their aims and methods. This article argues that, far from being peripheral to the algorithmic turn, the medical humanities are central to judging when, how, and under what conditions AI supports humane care.” An article by Li Jia published in the journal Life Studies examines the implications of AI for the humane dimensions of medical practice.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image credit: Jacek Dylag/Unsplash
  • “Artificial light pollution is increasing worldwide with pervasive effects on ecosystem structure and function, yet its influence on ecosystem metabolism remains largely unknown. Here we combine artificial light at night (ALAN) intensity metrics with eddy covariance observations across 86 sites in North America and Europe to show that ALAN indirectly decreases annual net ecosystem exchange by enhancing ecosystem respiration…Our findings show that ALAN disrupts the fundamental energetic constraints on ecosystem metabolism, warranting the inclusion of light pollution in global change and carbon–climate feedback assessments.” A research article published in Nature Climate Change by Johnstone and colleagues reports findings from a study that examined the effects of light pollution on ecosystem-level measures of metabolism.
  • “A small group of women with advanced breast cancer received a vaccine via a clinical trial more than 20 years ago. Today, they’re all still alive. Scientists say that kind of long-term survival is almost unheard of for patients with metastatic breast cancer, and it’s what caught the attention of researchers now….They found something remarkable: The women still had strong, long-lasting immune cells that recognized their cancer.” A cancer vaccine study that began decades ago at Duke has shown notable success at the 20-year mark for some patients with metastatic breast cancer.
  • “Over approximately 20 years, a nearly 70% decrease in average arsenic levels in primary well water and a 50% decrease in average urinary arsenic levels were observed in adults from Araihazar, Bangladesh, due to active arsenic mitigation efforts in the community. The results from this longitudinal study show a dose-response association between reductions in urinary arsenic levels and reduced risk of death from chronic diseases, including CVD and cancer, based on individual-level data on arsenic exposure and deaths.” A research article published in JAMA by Wu and colleagues finds that efforts to reduce levels of arsenic in well water in Bangladesh resulted in significant reductions in deaths from major chronic illnesses, including cardiovascular disease and cancer.
  • “…for the first time in the United States, a coalition of researchers, public safety agencies, and community partners – led by Duke Health and coordinated through the Duke Clinical Research Institute – is testing a new way to save lives…Drones carrying automated external defibrillators (AEDs) are being dispatched during real 911 calls in Forsyth County, North Carolina. The effort is part of a clinical trial that aims to see if drones can deliver AEDs to patients faster than traditional emergency services.” A Duke Health news article by Mat Talhelm describes a Duke program that uses drone networks to rapidly dispatch automated external defibrillators to patients in need who are located in areas without immediate access to such lifesaving technology.

COMMUNICATIONS & POLICY

Image credit: Cookie the Pom/Unsplash
  • “It is important to carefully consider all the concerns and any additional experiments requested by reviewers. Additional experiments might be seen as unnecessary by the authors, but the reviewers’ perspective should not be immediately discarded, especially because we tend to recruit referees with diverse expertise to comment on different aspects of the paper.” Obligatory zeroth step: count to ten, slowly… An editorial in Nature Computational Science suggests approaches for crafting an effective response to peer-review critique.
  • “Sociotechnical research emphasises that understanding the efficacy, harms, and risks of AI requires attention to the cultural, social, and economic conditions that shape these systems. Yet development and regulation often remain split between technical research and sociotechnical work rooted in the humanities and social sciences. Bridging not only disciplinary divides but the gap between research and AI regulation, policy, and governance is crucial to building an AI ecosystem that centres human risk.” A position paper published in Internet Policy Review by Oduro and colleagues argues that a wide spectrum of scholarly perspectives should be integrated more closely into approaches to AI policymaking.