AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
October 10, 2025
In this week’s Duke AI Health Friday Roundup: probing the inner workings of large language models; AI tackles biosecurity; librarians try to stem the tide of AI slop in collections; organic compounds found in spray from Enceladus geysers; AI sycophants erode prosocial behavior; childhood stress echoes in adult health; effective agentic AI requires surprisingly few training samples; more:
AI, STATISTICS & DATA SCIENCE
- “Most work is really trying to figure out how a model works, while I’m trying to figure out why it works that way. To answer that ‘how’ question, people usually just look inside a model at the end of training. You try to uncover an efficient way of describing what’s going on inside the model, and then you impose your explanations on top of that….But that doesn’t tell you why the model works the way it does. And that’s a really important question if we want to predict how the model will behave in the future.” Quanta’s Ben Brubaker interviews Harvard AI researcher Naomi Saphra, whose work applies an evolutionary lens to uncovering the inner workings of large language models.
- “We evaluated the ability of open-source AI-powered protein design software to create variants of proteins of concern that could evade detection by the biosecurity screening tools used by nucleic acid synthesis providers, identifying a vulnerability where AI-redesigned sequences could not be detected reliably by current tools. In response, we developed and deployed patches, greatly improving detection rates of synthetic homologs more likely to retain wild type–like function.” A research article published in Science by Wittmann and colleagues examines how AI protein design tools can slip past current biosecurity screening for nucleic acid synthesis, and describes the patches developed to restore reliable detection.
- “Care of an individual begins and ends with a human being. AI can help that person be seen more clearly—free from the noise of paperwork, distraction, and exhaustion. The next phase of progress in health care will depend less on technical capacity and more on ethical stewardship and the health-care community’s ability to keep humans at the centre of design and deployment. If done properly, AI will not replace care; rather, it could help us rediscover it.” A Lancet editorial sounds a hopeful note as it examines where the onrush of AI in medicine is likely to take us (H/T @erictopol.bsky.social).
- “LIMI (Less Is More for Intelligent Agency) demonstrates that agency follows radically different development principles. Through strategic focus on collaborative software development and scientific research workflows, we show that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Using only 78 carefully designed training samples, LIMI achieves 73.5% on comprehensive agency benchmarks, dramatically outperforming state-of-the-art models…Most strikingly, LIMI demonstrates 53.7% improvement over models trained on 10,000 samples, achieving superior agentic intelligence with 128 times fewer samples.” In a preprint available from arXiv, Xiao and colleagues describe a novel approach for training agentic AI with a minimal number of training samples.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- It is Space Month, after all: “We again detect aryl and oxygen moieties in these fresh ice grains, as previously identified in older E-ring grains. Furthermore, the unprecedented high encounter speed revealed previously unobserved molecular fragments in Cosmic Dust Analyzer spectra, allowing the identification of aliphatic, (hetero)cyclic ester/alkenes, ethers/ethyl and, tentatively, N- and O-bearing compounds. These freshly ejected species are derived from the Enceladus subsurface, hinting at a hydrothermal origin and involvement in geochemical pathways towards the synthesis and evolution of organics.” A paper published in Nature Astronomy by Khowaja and colleagues presents a new analysis of ejecta from Saturn’s moon Enceladus that hints at conditions that might be hospitable for life.
- “Hinz said humans combat acute stress through a “flight or fight” response: “Your body collectively reacts by increasing your heart rate and blood pressure when you are experiencing a stressful situation,” she explained. “Those and other responses help you deal with that stress, but it’s not good to always be in that state. I’m interested in what happens when that doesn’t really subside.”…Poverty is at the crux of the study, which indicates a stable, financially secure home is essential for a healthy childhood free of chronic stress.” Duke University’s Thomasi McDonald profiles the work of Duke researchers whose recent study illuminates the connections between childhood stress and adult health.
COMMUNICATIONS & POLICY
- “A picture book about rabbits claiming the animals can make their own clothing. Mushroom Foraging Guides so inaccurate that the New York Mycological Society tweeted out a warning about the potentially deadly errors. A nonfiction book apparently imitating another release with the same title — but with awkward sentences and a mysterious author….These are the kind of AI-written books that librarians now contend with when procuring new titles for their patrons.” In an article for Governing, Jule Pattison-Gordon describes how librarians are struggling to keep a rising tide of AI slop from infiltrating library shelves.
- “…across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users’ actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants’ willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again.” In a preprint available from arXiv, Cheng and colleagues probe the tendency of overly friendly and accommodating chatbots to erode prosocial human behavior.
- “The primary intent actually was to understand how a health system can make an informed decision about use of these tools…And also, I think a growing recognition that even though the vendors give us great tools to try to understand how well these tools work in real life, sometimes the assumptions that the vendors make are very different than the assumptions our own clinical care teams make.” This month’s NEJM AI features a podcast interview with UC San Diego’s Karandeep Singh on how health systems can evaluate AI tools on their own terms before putting them into clinical use.
