AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

March 27, 2026

In this week’s Duke AI Health Friday Roundup: neither docs nor AI can reliably detect deepfake radiology images; genetic errors may explain cloning failures; gut-brain paper under scrutiny for data irregularities; genetic switch explains big brains; are non-healthcare companies getting access to protected health data?; health AI adoption may be exceeding the “speed of trust;” much more:

AI, STATISTICS & DATA SCIENCE

Greyscale drawings of human bones, including hands, vertebrae and skull. Wax transfer from illustrations in Gray’s Anatomy. Image credit: Joyce Hankins/Unsplash
  • “…large language model (LLM) image synthesis has advanced from a technologic curiosity to the generation of highly realistic radiographs. The moderate performance of radiologists and current multimodal LLMs…in identifying synthetic radiographs, combined with the broad public availability of these tools, highlights the potential for malicious exploitation…A multilayered response, including clinician education, automated deepfake detection systems, mandatory watermarking, and rigorous dataset governance, is essential to prevent this emerging novelty from evolving into a systemic threat.” A research article published in the journal Radiology by Tordjman and colleagues finds that deepfake radiological images are convincing enough to evade detection by both human and LLM-based review.
  • “It’s now a pillar on which much of modern empirical science rests. Almost every time a scientist uses measurements to infer something about the world, the central limit theorem is buried somewhere in the methods. Without it, it would be hard for science to say anything, with any confidence, about anything….No matter how irregular a random process is, even if it’s impossible to model, the average of many outcomes has the distribution that it describes.” Quanta’s Joseph Howlett traces the history of the central limit theorem and its central place in quantitative sciences (a formal statement of the theorem appears after this list).
  • “Extreme heat is an emerging public health threat, causing more deaths annually than all other natural hazards combined…In urban areas, extreme heat is becoming increasingly prevalent owing to a combination of anthropogenic and natural climatic phenomena, both of which elevate temperatures above and below ground…Unlike above-ground spaces, underground environments retain more heat due to limited advection phenomena in soils and rocks.” A study published in Nature Cities by Chinazzo and Rotta Loria uses natural language processing to extract data from publicly available Google Maps posts, revealing trends in discomfort with ambient temperatures in underground metro systems.
  • “What needs to change is who contributes to decisions about how AI tools are purchased, governed, and used. Patients and community members need formal decision-making roles, not just advisory positions. Health care systems and insurers need to publicly report performance, including across different racial/ethnic groups, before AI tools are rolled out. Patients need to be told clearly and in advance when AI is being used in their care. These are the basic conditions for a trustworthy system.” An opinion article by physician Oni Blackstock, published in STAT News, examines the intersection of AI adoption in healthcare and a growing crisis of mistrust in medicine.
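For reference, the classical (Lindeberg–Lévy) form of the theorem, given here as the standard textbook statement rather than anything specific to the Quanta piece, reads: if $X_1, X_2, \ldots$ are independent, identically distributed random variables with mean $\mu$ and finite variance $\sigma^2$, then as $n \to \infty$,

$$\sqrt{n}\,\frac{\bar{X}_n - \mu}{\sigma} \;\xrightarrow{d}\; \mathcal{N}(0, 1),$$

that is, the standardized sample mean converges in distribution to a standard normal, whatever the shape of the underlying distribution of the $X_i$.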

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Dozens of white mouse figurines, arranged as if moving together in a herd on a wooden table top. Image credit: James Wainscot/Unsplash
  • “After 20 years, 58 generations and more than 30,000 cloning attempts, a team of researchers has hit the limit on the number of times a single mouse can be serially re-cloned. The results…suggest that asexual reproduction is ultimately unsustainable for mice, and potentially other mammals, too. The clones looked normal and lived as long as normal mice. But large mutations — including the loss of an entire chromosome — accumulated in the cloned lineage at an unusually high rate.” Nature’s Heidi Ledford reports on recently published research that establishes an upper limit for repeated cloning in mice before the accretion of genetic errors becomes catastrophic.
  • “The Parkinson’s disease study, published in Cell in 2016, used mouse behavioral data of motor function that has identical sets of numbers in two different experimental groups. Markus Englund, a software engineer, found the overlap when using his software to identify duplications in Dryad, an open-source research data repository. The 2016 dataset contains so many identical numbers in a row ‘that you wouldn’t expect to ever see this by chance,’ Englund says.” The Transmitter’s Claudia López Lloreda reports on recently surfaced findings of data irregularities in a pair of influential papers examining gut-brain connections.
  • “The mismatch between rapidly evolving genomic capabilities and static models of care delivery has tangible consequences: diagnostic delays, missed opportunities for disease-modifying treatment, fragmented coordination across specialties, and inequities in access to expertise and advanced therapies. These failures are not primarily technological but structural.” An opinion article published in JAMA by Harry Ostrer calls for a new approach to treating rare genetic diseases in children.
  • “A small stretch of DNA that evolved in humans may help explain why our brains became so large and so powerful…The DNA segment acts like a volume knob, dialing up production of brain cells during early development and helping build the thick, folded cortex that supports human thought, language, and reasoning.” Duke University’s Shantell Kirkendoll reports on a recently published paper that sheds light on the mechanisms that allowed humans to evolve disproportionately large brains.

COMMUNICATIONS & POLICY

A repeating pattern of a photograph of a silicon chip, recoloured so that it is multi-coloured, in the style of pop art. Deborah Lupton / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
  • “In a recent court filing, electronic health records giant Epic Systems proved one of health care providers’ worst fears: Companies are posing as providers to gain access to patient records. …Health care providers holding those records can’t legally refuse the request. But if they share the records with someone who doesn’t have rights to them, they violate HIPAA, the federal law that protects patients’ health information.” In an article for STAT News, Brittany Trang details an ongoing lawsuit wherein EHR company Epic has accused other companies of posing as healthcare entities in order to access health information.
  • “With many physicians dissatisfied with their health system’s AI adoption speed, it is not surprising that most use occurs outside institutional oversight. AI governance within health systems is generally designed to assess AI tools oriented around specific use cases, which allows systems to quantify the potential harms of such tools so that they can be mitigated. This approach to governance fails for general-purpose generative AI platforms because the range of harms and benefits varies depending on how physicians use them.” A perspective article published in NEJM AI by Ötleş and colleagues examines the complications that can ensue when physicians eager to use AI tools get ahead of institutional governance policies, which are themselves sometimes insufficient for the actual need.
  • “When expertise is drawn from too narrow a base, its holders mistake the limits of their own knowledge for the limits of knowledge itself. They treat communication as delivery rather than exchange. They dismiss the understanding of those closest to the problem as lay ignorance rather than what it often is: a different and necessary form of expertise…AI is now repeating this dynamic on a compressed timeline.” At his Slow AI blog, Sam Illingworth critiques institutions’ reliance on outmoded models of communication in their AI training and pedagogy efforts.