AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
December 5, 2025
In this week’s Duke AI Health Friday Roundup: considering healthcare AI bubbles; Cochrane reports on HPV vaccine and cervical cancer; egregious AI slop crops up in autism paper; even microbes can sense seasonal change; academic libraries in the AI age; why lonely people turn to chatbots; AI-written peer reviews (and papers) show up en masse at conference; medical AI and alignment questions; much more:
AI, STATISTICS & DATA SCIENCE
- “To me, the question of an AI bubble in healthcare isn’t whether there will be a business shakeout with winners and losers – that seems inevitable. Rather, the key question is whether the overall healthcare AI enterprise will have staying power as the field consolidates and rationalizes.” In a post on his Substack page, UCSF physician Robert Wachter surveys the likelihood of a bursting bubble in health AI, as well as the likely splashback.
- “If it isn’t immediately apparent, it’s nonsense. There’s a bit of a random bicycle with a torture-device for a seat; a small child points — at what, we can never know? — as his parent, in a feat of grand body horror, has become attached to a slab of concrete…There’s the Factor Fexcectorn, the word AUTISM seemingly pointing to a small orb that sits just outside someone’s brain and, of course, the ┐ Tol LIne storee, that most vile thing!” A research paper published in Scientific Reports (a Nature Portfolio journal) is getting a whole lot of the bad kind of attention this month, especially after readers took a glance at the paper’s central figure, which turned out to be a very obvious example of meaningless AI slop, complete with gibberish labels. At his No Breakthroughs page on Substack, Jack Ryan walks us through this howler.
- “The U.S. does not lack innovative training models or motivated workers and learners, but it doesn’t have a public funding system that can quickly seed new training programs or scale successful ones, explained Rachel Lipson, scholar in residence and co-founder of Harvard University’s Project on Workforce…. That lack of investment has consequences for workers who are displaced by new technologies and automation. ‘If you look at the last few waves of technological change or macro and structural shifts, it’s very clear in the U.S. that we have not done a particularly good job helping people through the journey of a transition from the time of a job loss until landing in a new job,’ Lipson said.” A feature story posted by the National Academies of Sciences, Engineering, and Medicine examines workforce development and training issues in the AI era.
- “If merely changing a prompt can change a decision that will cost thousands of dollars per year, and earn another party a similar amount, it is evident that there will be massive financial and medical or legal incentives to guide or align the behavior of AI models. AI with values warped towards maximizing reimbursement will result in overuse of costly and potentially risky procedures. Uncertainty about which values are driving the behavior of AI models will reduce trust and slow adoption.” An article in Nature Medicine by Isaac Kohane and Arjun Manrai raises the issue of alignment in medical AI – whether an AI system’s training and performance reflect the appropriate values for a given task.
- “The Pangram team used one of its own tools, which predicts whether text is generated or edited by LLMs. Pangram’s analysis flagged 15,899 peer reviews that were fully AI-generated. But it also identified many manuscripts that had been submitted to the conference with suspected cases of AI-generated text: 199 manuscripts (1%) were found to be fully AI-generated; 61% of submissions were mostly human-written; but 9% contained more than 50% AI-generated text.” Nature’s Miryam Naddaf reports that the tide of AI-generated manuscripts has now come for the AI researchers themselves, as LLM-written peer reviews – and in many cases, the manuscripts themselves – swamp a major international machine learning conference.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “The discovery connects cyanobacteria to a plethora of much more complex organisms with seasonal rhythms, and it indicates that anticipating seasons may have emerged early in life’s evolution. It may have even predated the internal clocks that give an organism a sense of day and night.” Quanta’s Elizabeth Landau explores recent revelations that a sense of changing seasons is so ubiquitous among terrestrial life that even bacteria share it.
- “Girls vaccinated at or before the age of 16 were 80% less likely to develop cervical cancer than unvaccinated girls…. Importantly, the review found no evidence to support claims that HPV vaccination increases the risk of serious adverse events. By cross-referencing alleged adverse events with real-world follow-up data, the review team found no relationship between reported serious side effects and HPV vaccination.” A pair of newly published reviews at the Cochrane Collaboration that encompass randomized trials and observational studies, respectively, continue to reinforce findings showing that the HPV vaccine effectively prevents the development of cervical cancer.
- “The most significant outcome of this research is demonstration of the potential for supervised machine-learning methods to discover biochemical information from highly degraded molecular suites as old as the Paleoarchean Eon—samples in which no intact biomolecules have been preserved.” In a research paper recently published in the Proceedings of the National Academy of Sciences, Wong and colleagues describe the use of a supervised machine learning approach to identify biochemical traces of microorganisms in extremely old rock samples.
COMMUNICATIONS & POLICY
- “All too often, the library is not fully included in conversations that guide student learning and academic support. Whether it’s curriculum planning, academic technology rollouts, or student support initiatives, libraries and librarians are sometimes left out. When the library is treated as background infrastructure rather than as an active learning environment, the entire academic mission is weakened.” In a guest post at Scholarly Kitchen, academic librarian Jane Jiang describes the critical space that university libraries occupy, even as hype and resources flow to adoption of AI in academic settings.
- “The Centers for Disease Control and Prevention (CDC) this week cited some of our work in new guidance related to vaccines and autism. However, the citations do not provide the greater context of the full body of work on vaccine safety that is essential for informed debate about this topic. It is important to point out that the 2012 Institute of Medicine report assessing adverse effects of vaccines and cited by the CDC, found that very few health problems are caused by or clearly associated with vaccines. Further, based on our body of work on this topic and the overwhelming scientific consensus, we support the statement that vaccines do not cause autism.” The National Academy of Medicine has issued a statement regarding recent communications from the Centers for Disease Control and Prevention suggesting a possible association between vaccination and autism.
- “Our specific experimental findings demonstrate that repeated exposure to AAPA posts…significantly influences users’ affective polarization and emotional responses. A feed-ranking algorithm that controls exposure to such content produces measurable changes, equivalent to ~3 years of affective polarization, over the period it was deployed. Notably, our results indicate that the intervention does not disproportionately impact any specific political leaning or demographic group, suggesting that the approach is bipartisan.” An experimental study by Piccardi and colleagues, published this month in Science, finds that using a feed-ranking algorithm to adjust users’ exposure to material characterized by antidemocratic attitudes and partisan animosity (AAPA) has a marked effect on affective polarization.
- “If loneliness is the condition, these systems become the mirror through which it speaks. People find chatbots appealing because they can express themselves while being in control of the interaction. Participants frequently described relief at not having to worry about judgment, misinterpretation, or emotional burden.” An essay at Data & Society by Livia Garofalo and Briana Vecchione examines the phenomenon of people turning to AI chatbots for companionship and conversation (H/T @drsherrypagoto.bsky.social).
