AI Health Roundup – November 14, 2025

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

In this week’s Duke AI Health Friday Roundup: puzzle-solving approaches illuminate new pathways for AI development; mechanistic link between Epstein-Barr virus and lupus uncovered; picking the right framework for reporting on generative AI research in healthcare; new chip extends quantum computing capabilities; interrogating the creative potential of AI; updated Turing Test uncovers key differences between human and AI speech; much more:

AI, STATISTICS & DATA SCIENCE

Selective focus photograph shows a disorderly pile of jigsaw puzzle pieces lying on a flat surface. Image credit: Nathalia Segato/Unsplash
  • “LLMs can generate a lot of plausible-sounding lemmas (statements that are used to prove larger theorems), and automated reasoning can check whether they’re correct or not. But as soon as something is incorrect, the SAT [satisfiability] solver can give counterexamples back — ideally, the smallest counterexample. Because the solver is really good at figuring out: OK, that mistake I just made, what did it depend on?” Quanta’s John Pavlus interviews mathematician Marijn Heule, whose recent work has focused on developing symbolic AI systems that apply puzzle-solving techniques to math problems that stump humans.
  • “For each puzzle type the algorithm learns, such as a sudoku, Jolicoeur-Martineau trained a brain-inspired architecture known as a neural network on around 1,000 examples, formatted as a string of numbers…During training, the model guesses the solution and then compares it with the correct answer, before refining its guess and repeating the process. In this way, it learns strategies to improve its guesses.” Small but mighty: Nature’s Elizabeth Gibney reports on a new AI model that, despite being trained on a much smaller corpus than is usual for LLMs, beats such models when it comes to reasoning puzzles.
  • “We introduce Self-Adapting LLMs (SEAL), a framework that enables LLMs to self-adapt by generating their own finetuning data and update directives. Given a new input, the model produces a self-edit, a generation that may restructure the information in different ways, specify optimization hyperparameters, or invoke tools for data augmentation and gradient-based updates. Through supervised finetuning (SFT), these self-edits result in persistent weight updates, enabling lasting adaptation.” In a research article available from arXiv, Zweiger and colleagues present a “self-adapting” large language model framework that shows promise for enabling models to respond to new circumstances outside their original training.
  • “The long road to building a fully functioning quantum computer may have shortened thanks to a new version of a gizmo called a superconducting qubit. The new qubit can maintain its delicate quantum states for more than 1 millisecond, three times the previous best for such a device. Reported last week in Nature, the result suggests a full-fledged quantum computer may need far fewer qubits than previously thought. Most important, the advance was made not by redesigning the qubit, but by improving the materials from which it was fashioned.” Science’s Adrian Cho reports on a new milestone on the road to practical quantum computing: a chip capable of maintaining quantum states three times longer than the previous best for such a device.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Graffiti-like wall mural showing deliberately rough version of emoji (hearts, thumbs up, smiling faces) done in black and white against a silver background. Image credit: George Pagan III/Unsplash
  • “The current analysis illustrates that drug promotion content is frequently posted by individual creators, lacks essential risk information, and bears the hallmarks of undisclosed marketing. These findings suggest that posts circumvented established advertising principles and potentially eroded the fair balance crucial for informed patient decision-making, consistent with prior literature on traditional DTCA’s impact on prescribing.” An analysis, published in JAMA by Kresovich and colleagues, examines social media posts related to prescription drugs and finds substantial amounts of under-the-radar promotion taking place.
  • “Our findings provide a mechanistic basis for why only a small fraction of EBV-infected individuals develop SLE whereby EBV infects autoreactive antinuclear antigen B cells, which are known to be present in the naïve B cell compartments of patients with autoimmune diseases but not healthy individuals.” A research article by Younis and colleagues, published this week in Science Translational Medicine, implicates the Epstein-Barr virus in the autoimmune disease systemic lupus erythematosus (SLE) and provides a mechanistic explanation for the relationship.
  • “In the new paper, Bishop and her co-authors focus on the most highly cited papers linking the microbiome to autism. They argue that because of tiny sample sizes, poor statistical methods, and a lack of successful replications, these studies only offer weak evidence. Many have contradictory findings: Some report that autistic children have a lower abundance of certain gut bacteria than neurotypical controls, whereas others report a higher abundance or no difference at all.” Writing for Science Insider, Cathleen O’Grady reports on recent research that casts doubt on previous work suggesting links between the gut microbiome and the development of autism.

COMMUNICATIONS & POLICY

A mosaic-like image of clouds, made of server and data center components, symbolizing the hidden physical infrastructure of cloud computing. Image credit: Nadia Piet & Archival Images of AI + AIxDESIGN / Better Images of AI / CC-BY 4.0
  • “The researchers suggest that achieving the curiosity and imagination needed for truly groundbreaking discoveries might require going beyond the deep neural networks — hierarchical layers of inter-connected nodes — that underlie generative AI. Although these excel at recognizing statistical patterns, they can struggle with flexible, outside-the-box thinking.” In an essay for Nature, writer Jo Marchant takes up the vexed question of whether AIs are capable of creativity – or of helping to enhance human creativity.
  • “As journal editors adopt these reporting standards, investigators may be encouraged to complete and submit checklists and methodological diagrams to accompany their submissions to optimize the transparent reporting of their methods. Authors applying GAI models in healthcare must therefore carefully identify the most appropriate reporting guideline for their study, as these standards contain tailored items for studies involving GAI models.” A commentary published in NPJ Digital Medicine by Huo and colleagues surveys reporting practices and guidelines for research on generative artificial intelligence (GAI) applications in healthcare.
  • “Our findings challenge core assumptions in the literature. Even after calibration, LLM outputs remain clearly distinguishable from human text, particularly in affective tone and emotional expression. Instruction-tuned models underperform their base counterparts, and scaling up model size does not enhance human-likeness. Crucially, we identify a trade-off: optimizing for human-likeness often comes at the cost of semantic fidelity, and vice versa.” A research article by Pagan and colleagues, available as a preprint from arXiv, presents a revised version of the Turing Test for the AI era and uncovers key tells that distinguish human from LLM-generated language.