AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

March 13, 2026

In this week’s Duke AI Health Friday Roundup: what is and is not a stochastic parrot; using LLMs to ID timing and nature of events from chart notes; survey reveals how authors prefer to use LLMs; AI model predicts functional outcomes from genomic changes; study charts changes from revised kidney function algorithm; lasting impact from some antibiotics on gut microbiome; agentic AI for disease surveillance; much more:

AI, STATISTICS & DATA SCIENCE

Three brightly-colored Blue Macaws sit on a branch, two with heads turned to the left, one with head turned to the right. Image credit: Sid Balachandran/Unsplash
  • “…there is a vast array of technology called “AI” that is not reducible to LLMs, and many current AI systems that utilize LLMs also leverage a variety of other technologies, including hand-written rules, deterministic (non-stochastic) programs, various algorithms, and non-language models. This means that ‘AI’, broadly, is not equivalent to a large language model; it is not ‘just a stochastic parrot’.” In a post on Medium, AI expert Margaret Mitchell traces important distinctions between a specific kind of machine learning – large language models – and the larger world of AI.
  • “We demonstrate that LLM-driven identification and timing of patient outcomes from unstructured clinician notes is feasible. Contextual outcome identification unlocks the potential of unstructured clinician notes for predictive modeling.” A research article published in Artificial Intelligence in Medicine by Abdullahi and colleagues presents findings from a study of large language models used to identify the nature and timing of clinical outcomes from unstructured chart notes.
  • “Acting as 24/7 digital epidemiologists, multiagent systems can integrate heterogeneous signals from multisource surveillance systems, conduct contextual risk evaluation and adaptive forecasting, generate tailored early warnings, and provide actionable recommendations for targeted control—closing the loop between detection and response. Embedding interpretability and mandatory human-in-the-loop oversight enhances trust and accountability.” In an article published in the Journal of Medical Internet Research, Yang and colleagues propose a framework for integrating agentic AI to expand response capabilities in epidemiological surveillance for respiratory illnesses.
  • “…the technological leap is also raising alarm that consumers could be duped by deepfakes. A February study in The British Journal of Psychology found that people overestimated their ability to recognize A.I.-generated faces, leaving them vulnerable to “fraud and deception.” That risk is intensifying as the technology improves….While A.I. once had obvious giveaways, like hands with extra fingers, the newest videos look confoundingly authentic — and often viewers are not told otherwise.” In the New York Times, Bensinger, Hsu, and Shen offer an article describing the use of highly realistic AI audio and video deepfakes to pitch health supplements online – usually with the use of AI undisclosed.
  • “…we introduce Evo 2, a biological foundation model trained on 9 trillion DNA base pairs from a highly curated genomic atlas spanning all domains of life to have a 1 million token context window with single-nucleotide resolution. Evo 2 learns to accurately predict the functional impacts of genetic variation—from noncoding pathogenic mutations to clinically significant BRCA1 variants—without task-specific fine-tuning.” In an article published by Brixi and colleagues in Nature, the authors present a generative AI model trained on curated genomic information and designed to predict the functional outcomes of mutations.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Telescopic photograph of a waxing crescent moon. Image credit: Stefan Schwinghammer/Unsplash
  • “By tracing the ants’ paths, the team discovered that blocking the Moon disoriented them and sent them off in an errant direction. After comparing those paths with the paths of ants that could see moonlight, the team concluded their behavior was consistent with what’s called a time-compensated lunar compass. Essentially, the insects observe how quickly the Moon moves to estimate the relative position of their home and update that prediction with time.” Science’s Jason Dinh reports on recently published research showing that a species of ant is able to navigate by combining its internal clock-sense with the Moon’s position in the sky.
  • “…although the strongest associations were found for antibiotics used <1 year before sampling, antibiotics used 1–4 years and 4–8 years before sampling were also associated with lower diversity and differences in the abundance of species. Second, the associations were mainly related to three antibiotic classes: clindamycin, flucloxacillin and fluoroquinolones.” In a research article published in Nature Medicine, Baldanzi and colleagues present findings from a Swedish study that finds lasting effects from antibiotics – three in particular – on the human gut microbiome.
  • “…implementation of the wait time modification policy was associated with increased transplant rates among Black preemptive and postdialysis candidates. These findings provide evidence that remedying the harms of race-based medicine may be a promising approach to address racial kidney transplant inequities.” A research article published in JAMA Internal Medicine by Khazanchi and colleagues examines changes in wait times for kidney transplantation among Black transplant candidates after a race-based algorithm used to estimate kidney function was removed from use in the early 2020s.
  • “The patient has a serious diagnosis and is being appropriately referred to a tertiary oncology center. One assumes there would be timely review, rapid scheduling, and coordinated care. Instead, securing an initial visit for this patient revealed a complex sequence of administrative delays, incomplete handoffs, insurance inquiries, system miscommunications, and logistical dependencies that collectively undermined timely access to care.” A JAMA Viewpoint article by Hassid and Kaafarani illustrates the systemic risks that patients may face when being referred to a different care facility.

COMMUNICATIONS & POLICY

Sunset behind a cluster of geodesic pylons and webwork of high-tension powerlines. Image credit: Geon George/Unsplash
  • “Creating a RWD utility model requires unified reform across governance, standards, infrastructure, and regulation. Although modernization efforts such as electronic case reporting have advanced public health goals, many RWD use cases still suffer from fragmented implementation—making evidence generation extremely challenging.” A policy article published in Science by Haendel and colleagues advocates for applying a regulatory framework to health data analogous to the ones used to govern public utilities.
  • “After Alice’s death in 2007, her children decided to commit some of the family fortune to supporting progranulin research, as a “gift to our family and the world,” Richards Donohoe says. With her brother-in-law Bob Farese Jr., then an endocrinology researcher at the Gladstone Institutes, she founded a nonprofit research consortium, later named the Bluefield Project after Alice’s hometown in Virginia.” Science’s Jennie Erin Smith profiles a nonprofit research organization established by a family struck by neurodegenerative disease.
  • “AI reviews should help authors, not subjugate them. We envision a system where authors can refine early drafts, anticipate critical feedback, stimulate internal discussion, and engage more symmetrically with the review process before they submit their work to a journal or preprint server. We see AI feedback as an additional lens rather than a gatekeeper, akin to a “pre-flight check” to speed up the process and make it more efficient.” A survey study by Lemberger, Mastboim, and Rechavi, available from EMBO Reports, captures authors’ sentiments about using AI tools as part of the prepublication review process.
  • “Content is exposed. AI summarizes it, strips attribution, flattens context, and delivers answers without directing anyone to your website….Community is harder to disintermediate. The relationships built through years of participation, the reputation that comes from knowing who reviewed a paper or organized a session or mentored an early-career researcher, the trust that accumulates when people show up repeatedly and put their names behind judgments: none of this survives the journey through an AI intermediary.” A guest post at Scholarly Kitchen by Ben Kaube and Steve Smith suggests that scholarly and academic societies may have potent assets in a media landscape increasingly dominated by AI-scraped summarization.