AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
May 23, 2025
In this week’s Duke AI Health Friday Roundup: prize for (AI-assisted) talking to the animals; tallying withdrawn applications at NIH; LLMs tend to overgeneralize; aha moments play important role in memory; the long-running benefits of measles vaccination; AI slop comes for summer reading lists; paper on AI-boosted productivity withdrawn; aligning AI principles with real-world governance needs; much more:
AI, STATISTICS & DATA SCIENCE

- “Can a robot arm wave hello to a cuttlefish—and get a hello back? Could a dolphin’s whistle actually mean ‘Where are you?’ And are monkeys quietly naming each other while we fail to notice? …These are just a few of the questions tackled by the finalists for this year’s Dolittle prize, a $100,000 award recognizing early breakthroughs in artificial intelligence (AI)-powered interspecies communication. The winning project…explores how dolphins use shared, learned whistles that may carry specific meanings—possibly even warning each other about danger, or just expressing confusion.” In a news article for Science, Christa Lesté-Lasserre talks with the winner and other finalists for the Dolittle Prize, which is awarded to AI research that advances the ability to decode communication in other species.
- “In this study, we leveraged a self-supervised, deep-learning strategy — temporal learning — for longitudinal MRI analysis and postoperative risk assessment in children with gliomas…We demonstrate that this approach improves the ability to predict postoperative glioma-recurrence risk across patients from three institutions and two clinical settings (low- and high-grade glioma), representing over 715 patients and 3994 scans. Deep learning–based short-term risk stratification may provide an actionable window for early intervention with systemic therapy, radiation, or clinical trial enrollment in patients with high risk of recurrence.” In a research article published in NEJM AI, Tak and colleagues demonstrate the use of a self-supervised deep-learning model for image-based risk stratification in pediatric glioma.
- “Healthcare delivery organizations (HDOs) can be explicitly or implicitly required to comply with high-level principles…While the current state of AI regulation is based largely on voluntary compliance with existing principles, as the regulatory landscape expands HDOs will inevitably have to satisfy an increasing number of mandatory requirements….But practically navigating these commitments is hard. As it stands, HDOs must wade through the many different principles that are not always aligned with each other.” In an article appearing in NPJ Digital Medicine, Hasan and colleagues describe an approach to aligning the sometimes disparate domains of AI principles, best practices as articulated by healthcare organizations, and rapidly changing regulatory standards.
- “Our analysis of nearly 5000 LLM-generated science summaries revealed that most models produced broader generalizations of scientific results than the original texts—even when explicitly prompted for accuracy and across multiple tests. Notably, newer models exhibited significantly greater inaccuracies in generalization than earlier versions. These findings suggest a persistent generalization bias in many LLMs…” In a research article published in Royal Society Open Science, Peters and Chin-Yee report findings from an analysis showing that several widely used LLMs tend to distort scientific articles by overgeneralizing when tasked with summarization, and that this effect is more pronounced in newer models.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

- “Participants tended to recall solutions that came to them in a flash of insight far better than ones they arrived at without this sense of epiphany. Furthermore, the more conviction a person felt about their insight at the time, the more likely they were to remember it five days later when the researchers asked them again. ‘If you have an ‘aha! moment’ while learning something, it almost doubles your memory,’ said Cabeza, who has been studying memory for 30 years. ‘There are few memory effects that are as powerful as this.’” Duke Today’s Robin Smith highlights recent work by Duke researchers that sheds additional light on how human memory functions.
- “So far this year, at least 2,500 applications for research funding have been withdrawn — a term the agency uses to denote refusal for administrative reasons. This is more than double the number of applications that were withdrawn in the same period in each of the past two years….The increase in withdrawals seems to be mostly because the NIH quietly closed around 100 funding categories in February and March, according to documents seen by Nature. Many of these categories supported researchers from diverse backgrounds or investigators early in their careers.” Nature’s Smriti Mallapaty reports on the recent and unprecedented wave of withdrawn research funding applications at the NIH.
- “The global rollout of measles vaccines has been one of history’s most successful public health efforts. Each year, they save millions of lives….This is especially true in low-income countries where children face the highest risk of dying from measles because of poorer overall health, nutrition, and living standards….Measles vaccination does more than prevent the disease. It preserves a child’s broader immunity and protects those most at risk, including infants, pregnant women, and people with weakened immune systems from health conditions or undergoing cancer treatments.” At Our World in Data, Saloni Dattani and Fiona Spooner lay out the numbers that show why the measles vaccine has been such a global life-saver.
COMMUNICATION, HEALTH EQUITY & POLICY

- “Alongside actual books like Call Me By Your Name by André Aciman, a summer reading list features fake titles by real authors. Min Jin Lee is a real, lauded novelist — but ‘Nightshade Market,’ ‘a riveting tale set in Seoul’s underground economy,’ isn’t one of her works. Rebecca Makkai, a Chicago local, is credited for a fake book called ‘Boiling Point’ that the article claims is about a climate scientist whose teenage daughter turns on her.” At The Verge, Mia Sato reports on recent news that a featured insert syndicated in at least two major metro newspapers contained substantial amounts of AI-generated slop, including a summer reading list that featured real authors but fictitious books.
- “Reforming scholarly communication to prioritise the interests of science over publishing would help leverage available technologies and infrastructure, repurpose existing practices to realise the benefits they were always supposed to bring, and create more accessible and equitable means of participating in scholarly communication. It is a choice, and it is within our reach.” In a perspective article published in the journal Learned Publishing, Damian Pattinson and George Currie argue that market incentives are distorting scholarly publishing and call for a turn toward “science-led publishing.”
- “Besides watching your words and asking questions about the systems that are being promoted, what should be done to hold the line on AI hype? Bender and Hanna say there’s room for new regulations aimed at ensuring transparency, disclosure, accountability — and the ability to set things straight, without delay, in the face of automated decisions.” A post at Alan Boyle’s CosmicLog discusses his recent podcast with Emily Bender and Alex Hanna, whose new book dissects the perils of AI hype and careless adoption and implementation of AI-based applications.
- “Unfortunately for everyone involved, the work is entirely fraudulent. MIT put out a press release this morning stating that they had conducted an internal, confidential review and that they have ‘no confidence in the veracity of the research contained in the paper.’ The WSJ has covered this development as well. The econ department at MIT sent out an internal email so direly worded on the matter that on first glance, students reading the email had assumed someone had died.” The BS Detector dissects a now-discredited preprint (an article in press) that purported to show productivity gains from incorporating AI into materials science research and development.