AI Health Roundup – October 31, 2025

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

In this week’s Duke AI Health Friday Roundup: relying on LLM summaries for research changes the quality of the advice that results; how activity is accumulated modulates exercise benefits; script sleuths for bogus citations; BBC-sponsored study finds chatbots distort news; humans and LLMs apply different priorities when asked to exercise judgment; analysis finds benefit for RSV vaccination in older adults; paper-mill proliferation drives need for countermeasures; much more:

AI, STATISTICS & DATA SCIENCE

Illustration of six data workers at computers, isolated from one another; painted background of hazy cubicles with a digital overlay of glass fractures. Image credit: Kathryn Conrad & Digit / Better Images of AI / CC-BY 4.0
  • “…when subsequently forming advice on the topic based on their search, those who learn from LLM syntheses (vs. traditional web links) feel less invested in forming their advice, and, more importantly, create advice that is sparser, less original, and ultimately less likely to be adopted by recipients. Results from seven online and laboratory experiments (n = 10,462) lend support for these predictions, and confirm, for example, that participants reported developing shallower knowledge from LLM summaries even when the results were augmented by real-time web links.” A research article published by Melumad and Yun in PNAS Nexus reports findings from experiments evaluating how the use of large language models for information retrieval affects learning.
  • “Our findings show that model outputs often align with expert ratings of reliability and bias, yet systematic asymmetries emerge across the political spectrum. Moreover, LLMs generate consistent linguistic markers when explaining their evaluations…. The results show that LLMs and humans prioritize different reliability criteria, consistent with a shift from context-dependent, normative reasoning – understood here as the application of explicit quality standards and contextual reasoning rather than implying perfectly rational agents – toward pattern-based approximation.” A research article published by Loru and colleagues in the Proceedings of the National Academy of Sciences evaluates how LLMs apply “judgment” when assessing the reliability and political bias of information sources.
  • “…we provide a conceptual framework to integrate human decision-making with AI, focusing on cognitive AI: a computational approach that models human cognitive processes to create AI systems that learn and make decisions in ways similar to those of humans. We discuss the elements and necessary capabilities of cognitive AI and how to realize human–AI complementarity in decision-making while considering ethical risks.” A perspective article published in Nature Reviews Psychology by Gonzalez and Heidari proposes a framework for integrating human decision-making and cognitive AI.
  • “Eli Lilly announced a partnership with chipmaker NVIDIA on Tuesday to build what it claims will be the ‘most powerful supercomputer owned and operated by a pharmaceutical company.’… The company will also use some of the computing power for additional projects in clinical trials, manufacturing, and quality assurance processes.” STAT News’ Brittany Trang reports on a new partnership between Eli Lilly and Nvidia aimed at boosting AI-based drug discovery and design.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Underwater photograph of a dark grey hammerhead shark seen from above, swimming just above a rippled, sandy seafloor. Image credit: Michael Worden/Unsplash
  • “Recently, researchers used CT scans and digital tools to calculate the surface areas and volumes of an ancient and diverse animal lineage: sharks. The team’s analysis, published in Royal Society Open Science, included more than 50 shark species and provides some of the best empirical evidence to date for some kind of firm scaling rule in zoology. As with a sphere, the surface area and body mass of sharks do indeed follow a two-thirds scaling law, the team found. If this holds true in other animal groups, it probably reflects underlying rules of heat exchange, metabolism or development that constrain evolution.” In an article for Quanta, Joanna Thompson reports on a new study that confirms a long-standing conjecture about scaling laws in biology (a brief worked version of the sphere analogy follows this list).
  • “In this large prospective cohort study, we found that among suboptimally active participants (those with an average daily step count < 8000), those who accumulated most of their steps in longer bouts had lower risks for all-cause mortality and CVD than those whose steps were mostly taken in shorter bouts. These association patterns were more pronounced for cardiovascular risk than for all-cause mortality.” An analysis of walking patterns among less active adults, published in the Annals of Internal Medicine by del Pozo Cruz and colleagues, finds that steps accumulated in longer bouts are associated with lower risk of cardiovascular events and death than the same volume of activity taken in shorter bursts.
  • “…we found that the incidence of hospitalization for any cardiorespiratory disease was lower in the RSVpreF vaccine group relative to the no-vaccine group. The absolute rate reduction was larger for cardiorespiratory hospitalization than respiratory tract disease hospitalization alone, indicating that some of the averted events were cardiovascular hospitalizations, although the vaccine effectiveness against isolated cardiovascular end points did not reach statistical significance.” In a research article published in JAMA, Lassen and colleagues present findings from an analysis of a trial of a bivalent RSV vaccine in older adults (60 years or older).
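
For readers curious about the sphere analogy in the shark item above, here is a minimal worked version; the constant-density step is our simplification for illustration, not a claim from the study. Surface area grows with the square of a characteristic length while volume grows with the cube, so:

    S = 4πr²,  V = (4/3)πr³  ⇒  S ∝ V^(2/3)
    M ∝ V (roughly constant density)  ⇒  S ∝ M^(2/3)

This is the two-thirds scaling the team reports observing across shark species.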

COMMUNICATIONS & POLICY

Cropped black-and-white photograph of women reading newspapers in the newspaper section of Emily McPherson College Library, Russell Street, circa 1960s. Image credit: Museums Victoria/Unsplash
  • “Just over a third of UK adults say they completely trust AI to produce accurate summaries of information. This rises to almost half of under 35s. That misplaced confidence raises the stakes when assistants are getting the basics wrong. These shortcomings also carry broader consequences: 42% of adults say they would trust an original news source less if an AI news summary contained errors, and audiences hold both AI providers and news brands responsible when they encounter errors.” A report sponsored by the BBC finds that the current crop of popular AI chatbots frequently err when summarizing or answering questions about the news, suggesting that uncritical trust in their output is misplaced.
  • “Neither selective funding and publication nor burying internal research will reliably prevent independent researchers from gathering data regarding an industry’s impacts. If compelling enough, such findings may lead to financially damaging policy. Companies need some other way to prevent this, and one powerful mechanism involves limiting the ability of independent researchers to gather such evidence.” In a preprint available from arXiv, Bak-Coleman and colleagues argue that independent research on technology’s impacts needs to be insulated from the influence of the industries under study.
  • “Wójcik’s script identified 40 references missing DOIs in the urban planning book, Urban Morphology and Sustainable Smart Cities. We looked into the book ourselves by checking the first 32 citations and were unable to verify 11 of them. Four of these cite documents from the Indian government that have since been taken offline. We contacted the listed authors of the remaining seven works, four of whom responded and confirmed they did not write them or there were substantial errors in the citation.” Retraction Watch reports on an enterprising doctoral student who wrote a program to check recently published textbooks for citation problems – particularly those that might stem from the use of LLM applications in the research and writing process (a rough sketch of this kind of DOI check appears after this list).
  • “Fictitious personas are just one example of identity fraud. Individuals and paper mills can also impersonate real scientists, posing as authors, reviewers or guest editors to slip poor-quality or fabricated work into journals…. In work published earlier this year, software engineer Diomidis Spinellis at the Athens University of Economics and Business uncovered 48 articles in one journal that he suspected were generated by artificial intelligence (AI). One of them listed him as an author without his knowledge.” In a news feature for Nature, Miryam Naddaf explores the countermeasures that publishers are deploying against paper mills and other forms of scientific fakery.
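
The citation-checking approach described in the Retraction Watch item can be approximated with surprisingly little code. The sketch below is ours, not Wójcik’s actual script: it assumes references arrive already parsed into (title, DOI) pairs, and it uses the public Crossref REST API (api.crossref.org/works/{doi}) to flag references whose DOIs are missing, fail to resolve, or point at records whose registered titles diverge from the citation.

    # Hypothetical sketch, not the script described above. Assumes references
    # are already parsed into (title, doi) pairs.
    import requests

    CROSSREF = "https://api.crossref.org/works/"  # public Crossref REST API

    def check_reference(title, doi):
        """Return 'no-doi', 'unresolved', 'title-mismatch', or 'ok'."""
        if not doi:
            return "no-doi"  # flag for manual verification
        resp = requests.get(CROSSREF + doi, timeout=10)
        if resp.status_code != 200:
            return "unresolved"  # DOI not registered with Crossref
        titles = resp.json()["message"].get("title", [])
        registered_title = titles[0] if titles else ""
        # A cited title far from the registered one suggests a bogus citation.
        if title and title.lower() not in registered_title.lower():
            return "title-mismatch"
        return "ok"

    if __name__ == "__main__":
        refs = [  # toy examples; a real run would parse a bibliography
            ("A made-up study of smart cities", None),
            ("Deep learning", "10.1038/nature14539"),
        ]
        for title, doi in refs:
            print(f"{check_reference(title, doi):>15}  {title}")

A real workflow would add a bibliography parser, rate limiting, and human review of anything flagged: a missing DOI or a Crossref miss is a prompt for verification, not proof of fabrication.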