AI Health Roundup – August 22, 2025


The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.


In this week’s Duke AI Health Friday Roundup: potential benefits and challenges of ambient listening; genetic influences on chronic pain; what patients need to know about health AI; labeling mental health chatbots for safety; AI-powered research and plagiarism; rotating menu selections for healthier eating; why a large proportion of AI implementations are failing; shrinking medical school from 4 years to 3; much more:

AI, STATISTICS & DATA SCIENCE

Image credit: Joshua Olsen/Unsplash
  • “AI coding and billing software vendors told STAT that patients won’t foot the bill for the larger invoices their tools can create. That will fall to insurers, they say. But health insurance payouts don’t materialize out of the ether, warned health care experts. Making sure that providers get paid for every diagnosis they make or service they render will create higher bills for the same care, costs that will eventually land on patients.” STAT News’ Brittany Trang takes a look at some sobering implications behind one of the more widely hailed potential applications for AI – automated or “ambient” scribing of chart notes and patient encounters.
  • “A three-state model of static MS subtypes (RMS, SPMS and PPMS) was found to be inferior to more complex and dynamic models. In these dynamic models, the frequency of disease activity varies between individuals, explaining some of the individual differences in the accumulation of damage to the CNS and the acquisition of physical and cognitive impairment over time.” A research article by Ganjgahi and colleagues, published in Nature Medicine, applied a machine learning model to develop a continuum approach to classifying different manifestations of multiple sclerosis.
  • “…we found ADT [ambient documentation technology] to be associated with improvements in clinician experience….we focused on 2 key aspects of clinician experience: burnout and well-being associated with documentation. Specifically, at MGB, ADT use was associated with a 21.2% absolute reduction in burnout prevalence, while at Emory, ADT use was associated with a 30.7% absolute increase in documentation-related well-being prevalence. While promising, given limited survey response rates, these findings may represent the experience of more enthusiastic users and a best-case scenario for the outcomes associated with ADT.” In a research article published in JAMA Network Open, You and colleagues explore the question of whether AI-powered ambient documentation systems are affecting clinician burden and subsequent feelings of burnout.
  • “…there’s no system to help people identify the good mental health AI tools from the bad. When people use AI to gather information about their physical health, most of the time they still visit a doctor to get checked out, receive a diagnosis, and undergo treatment. That helps reduce the risk of harm….But for mental health, AI can easily become both the information provider and the treatment. That’s a problem if the treatment is harmful.” An opinion article published in STAT News by Choudhury and Adler proposes a labeling system to help consumers identify the relative level of safety of AI chatbots used (either by design or opportunistically) for mental health support.
  • “It’s hard to reduce an idea to a list of keywords, and search engines might not have full papers in their databases. The top hits that search engines return in this automated process — which might be ranked by a criterion such as citation count — could easily miss out relevant work that a specialist researcher in the field would know.” In an article for Nature, Ananya probes questions about whether AI-generated research can constitute plagiarism when it incorporates ideas, methods, and designs from other sources.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image credit: Brooke Lark/Unsplash
  • “…by rearranging dishes within a weekly menu, we can change the competition structure between dishes, thereby also changing the frequency with which individual options are selected. If these swaps are strategically implemented over a week, they can change outcome variables of interest…Here we tested this form of multi-day choice-architecture manipulation and demonstrate that we can achieve a meaningful reduction in both carbon footprint and saturated fatty acid intake.” A brief communication published in Nature Food by Flynn and colleagues examines the purposeful “swapping” of menu items at a UK campus refectory to influence outcomes including healthier eating and carbon footprint.
  • “…our results provide compelling evidence that SLC45A4 encodes a neuronal membrane polyamine transporter that shows genetic association with human pain. SLC45A4 is expressed in sensory neurons, and ablation of its function results in altered polyamine homeostasis, thermal coding and pain perception in mice. Future studies will be needed to assess the effect of specific coding variants on both the regulation of polyamine transport and nociceptive function.” A research article published in Nature by Middleton and colleagues presents findings from a mouse-model study that identify a particular gene locus as playing a potentially significant role in chronic pain (H/T @smcgrath.phd).
  • “In line with the learning goals of the intervention, we show that participants improve their ability to correctly identify the use of specific misinformation techniques. This insight is important because teaching manipulation technique recognition is not only effective to help evaluate information about vaccines, but also more viable than trying to debunk myriads of constantly-evolving myths.” An article by Appel and colleagues published in Scientific Reports describes results from a randomized study of a “gamified” approach to combating misinformation about vaccines.

COMMUNICATION, HEALTH EQUITY & POLICY

Image credit: Anton Maxsimov 5642/Unsplash
  • “…for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration….The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.” Fortune’s Sheryl Estrada reports on a recent MIT study that finds the vast majority of generative AI implementations in the commercial sector are failing, often due to poor integration or lack of appropriate alignment with tasks.
  • “Rather than serving as a critical part of practical instruction, the fourth year of medical school is largely spent preparing for residency applications. Clinical foundations are laid by the third year, with exposure to the core clerkships and direct patient care. The final year of medical school is typically spent in elective courses, acting internships, and other specialization exploration. Although useful for determining one’s specialty, this process can be implemented earlier in a student’s education, allowing curriculum consolidation.” A viewpoint article published in JAMA by Mia Wells and Devdutta Sangvai advocates for a significant reform in medical education – reducing the typical 4-year interval to 3 years.
  • “…the survey evidence is pretty compelling that right now patients do see AI differently than other kinds of medical decision making processes. Recent surveys found, for example, sixty percent of U. S. adults are uncomfortable with their physician relying on AI. Almost eighty percent say they have low expectations that AI will improve important aspects of their care. About a third say I trust health care systems to use AI responsibly, the other two thirds not so much…So it matters to patients.” JAMA+AI editor Roy Perlis interviews Stanford’s Michelle Mello about what patients need and want to know about AI tools being used as part of their care.
  • “A Dublin-based organization that manages the Dewey Decimal System and partners with libraries worldwide has laid off dozens of central Ohio employees, citing the rise of artificial intelligence and federal funding cuts. OCLC, a global nonprofit with thousands of library members in more than 100 countries, confirmed to NBC4 it recently reduced its central Ohio workforce by about 80 positions.” Local Ohio NBC affiliate WCMH reports that an Ohio-based nonprofit organization that manages the Dewey Decimal System for libraries worldwide is pointing to AI adoption and budget shortfalls as it lays off workers.