AI Health Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
March 7, 2025
In this week’s Duke AI Health Friday Roundup: “tiny” ML poised to play a big role; new resource makes continuous-monitoring data from the ED available; Francis Collins departs NIH; patterns of uptake of large language model technologies contain some surprises; AI model predicts mental health risk for adolescents; remembering a prolific blood donor whose rare plasma was a life-saving boon to thousands; is AI capable of clinical reasoning?; much more:
AI, STATISTICS & DATA SCIENCE

- “TinyML (the ML stands for machine learning) is a low-cost, low-power implementation of AI that is being increasingly adopted in resource-poor regions, especially in the Global South. In contrast to the large language models (LLMs) that have dominated the news with their versatility and uncanny knack for humanlike expression, tinyML devices currently have modest, specialized capabilities. Yet they can be transformative.” A fascinating feature at Science by Sandeep Ravindran describes the emergence of a kind of AI adapted for the needs of the Global South – relatively inexpensive, lightweight, and energy-sipping.
- “Some of the results for testing LLMs in this space might be a reflection of materials in the models’ pretraining and to date there have been no large-scale, prospective, clinical trials investigating patient outcomes. Further, to study these models, researchers have adapted or developed new measures of human reasoning from cognitive psychology and medical education, the two fields that have traditionally been most interested in understanding how clinicians think. But these benchmarks have become saturated as LLMs improve; there seems to be little reason to doubt that the models will eventually conquer new benchmarks as well. Does this mean that LLMs can perform clinical reasoning?” In a perspective article published in the Lancet, Rodman and Topol explore whether generative AI models are capable of the complex and subtle task of “clinical reasoning.”
- “We present Multimodal Clinical Monitoring in the Emergency Department (MC-MED), a first-of-its-kind dataset containing multimodal clinical and physiological data from 118,385 adult ED visits to monitored beds of the Stanford adult ED between September 2020 and September 2022. The dataset includes: patient demographics, medical histories, and home medications; continuously monitored vital signs and ECG, PPG, and respiratory waveforms; orders placed and medications administered during the visit; laboratory and imaging results; diagnoses, visit disposition, and length of stay.” In a paper available from PhysioNet, Kansal and colleagues describe a new dataset (available to researchers via a data use agreement) built from patient data, including continuous physiological monitoring, collected during more than 100,000 emergency department visits.
- “This study demonstrates that AI models can accurately predict adolescent mental health prospectively using readily available questionnaires. Social environment and behavioral measures (particularly sleep disturbances) strongly influenced model-predicted psychiatric illness risk, whereas the impact of neurobiological measures on predicted risk was limited.” A research article published in Nature Medicine by Hill and colleagues describes the development of an AI model that achieved 84% accuracy in predicting which patients were likely to enter the highest-risk categories within a year.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

- “…with the demand for precision and fidelity of computational models continuing to grow, HPC [high-performance computing] faces bottlenecks in data handling, algorithm efficiency, and the scalability of new architectures, especially in fields such as chemistry and biology, where molecular simulations increasingly strain hardware and software limits. Governments worldwide are heavily investing in HPC infrastructure to support research, industrial innovation, and national security, each adopting distinct approaches…. Conversely, in the US, there is no long-term plan or comprehensive vision for the next era of HPC advancements…” An article in Science by Deelman and colleagues addresses the uncertain future of high-performance computing in the United States.
- “Harrison’s plasma contained a rare and precious antibody called anti-D, which was discovered in the mid-1960s. It is used in medications to prevent haemolytic disease of the fetus and newborn (HDFN) — also known as rhesus disease — a potentially fatal disease that occurs when a pregnant person’s blood is incompatible with that of their unborn baby, prompting their immune system to attack it.” National Public Radio reporter Rachel Treisman offers a remembrance of the late James Harrison, a remarkably prolific blood donor whose plasma contained a rare and life-saving antibody.
- “Against a backdrop of historical neglect and a future of political uncertainty, Science Advances devoted a special issue to new research and perspectives on women’s health. The collection, published Wednesday, highlights studies on the interplay between hormones, chromosomes and dementia, including findings with implications for men as well as women. It also features essays arguing for the importance of studying sex differences throughout biomedical research.” STAT News’ Elizabeth Cooney reports on the publication of a collection of research and perspectives in Science Advances related to persistent gaps in knowledge about sex-based differences in health outcomes and risk of illness.
COMMUNICATION, HEALTH EQUITY & POLICY

- “More than three-quarters of respondents said that enlarging current AI systems ― an approach that has been hugely successful in enhancing their performance over the past few years ― is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence. And the very pursuit of these capabilities also provokes scepticism: less than one-quarter of respondents said that achieving AGI should be the core mission of the AI research community.” Nature’s Nicola Jones reports on a survey of artificial intelligence experts that finds broad skepticism that current AI systems based on transformer and neural network technologies will be able to achieve human-like intelligence.
- “Collins writes wistfully in his statement that NIH was long ‘seen as a high priority and a non-political bipartisan effort.’ He adds that he has ‘loved being employed by this extraordinary, life-giving institution for 32 years’ and calls his NIH colleagues ‘individuals of extraordinary intellect and integrity, selfless and hard-working, generous and compassionate. They personify excellence in every way, and they deserve the utmost respect and support of all Americans.’” At Science, Jocelyn Kaiser reports on the abruptly announced retirement of former Human Genome Project director and NIH chief Francis Collins.
- “Contrary to typical technology diffusion patterns, areas with lower educational attainment showed higher AI writing tool usage. Comparing regions above and below state median levels of bachelor’s degree attainment, areas with fewer college graduates stabilized at 19.9 percent adoption rates compared to 17.4 percent in more educated regions. This pattern held even within urban areas, where less-educated communities showed 21.4 percent adoption versus 17.8 percent in more educated urban areas.” Ars Technica’s Benj Edwards reports on a recent analysis that maps the uptake of genAI tools and finds a few surprises.