AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
January 16, 2025
In this week’s Duke AI Health Friday Roundup: AI may be imposing tunnel vision on science; new report shows stubborn problems in cardiovascular health; taking Health GPT for a test drive; the genetics influencing a dog’s ears; housing, food insecurity among healthcare workers; a perspective on AI and clinician deskilling; LLMs appear to have “memorized” large swathes of training data; much more:
AI, STATISTICS & DATA SCIENCE
- “…the team found that junior scientists who used AI were less likely to drop out of academia and more likely to become established research leaders, doing so nearly 1.5 years earlier than their peers who hadn’t. …But what was good for individuals wasn’t good for science. When the researchers looked at the overall spread of topics covered by AI-driven research, they found that AI papers covered 4.6% less territory than conventional scientific studies.” At Science, Celine Zhao reports on a recent analysis published in Nature by Hao and colleagues that suggests the use of AI may be benefiting individual scientists and teams to the detriment of the larger scientific enterprise.
- “This study underscores the potential of sleep-based foundation models for risk stratification and longitudinal health monitoring. By integrating several physiological signals and leveraging large-scale pretraining, SleepFM performs consistently well across diverse disease categories and outperforms supervised baselines.” A research article published in Nature Medicine by Thapa and colleagues presents a machine learning model designed to predict risk for a host of diseases from sleep data.
- “As someone who has spent nearly two decades experimenting with digital health tools (personal health records, tethered portals, wearables, symptom trackers, oncology apps, research dashboards) I was excited to see what this new tool could actually do for me. I approached it the same way I approach nearly everything: with curiosity and a long list of test scenarios.” At her blog, patient advocate Liz Salmi provides a candid assessment of her initial experience using OpenAI’s Health GPT.
- “A growing body of research has found that different AI models can develop similar representations, even if they’re trained using different datasets or entirely different data types. What’s more, a few studies have suggested that those representations are growing more similar as models grow more capable.” Quanta’s Ben Brubaker reports on recent research that suggests artificial intelligence systems – regardless of differences in model – tend to converge on similar representations of concepts.
- “…the researchers also showed that Llama had losslessly compressed large portions of other works, such as Ta-Nehisi Coates’s famous Atlantic essay “The Case for Reparations.” By prompting with the essay’s first sentence, more than 10,000 words, or two-thirds of the essay, came out of the model verbatim. Large extractions also appear to be possible from Llama 3.1-70B for George R. R. Martin’s A Game of Thrones, Toni Morrison’s Beloved, and others.” The Atlantic’s Alex Reisner reports on recent research suggesting that, contrary to previous assurances, many large language models do appear to have stored copies – in some cases, full and complete ones – of elements of their training data, including books and other creative (and copyrighted) works.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “A gene that is important for human hearing could determine whether a dog’s ears are pendulous like a basset hound’s or stubby like a rottweiler’s, according to a genetic analysis of more than 3,000 dogs, wolves and coyotes…. Rudolph and her colleagues analysed the genomes of thousands of canines, looking for differences in sequence that correlate with ear length.” Nature’s Heidi Ledford reports on recent research that sheds light on the genetics behind the enormous variation in dog ears.
- “If we want a healthier future, we must understand how far we have come, how far we have yet to go, and what stands in our way. By putting data at the center of our collective awareness, we aim to help the cardiovascular community, including clinicians, researchers, policymakers, and the public, see the landscape clearly and chart a more effective path forward.” The Journal of the American College of Cardiology has just published its inaugural issue of Cardiovascular Statistics in the United States, an annual look at the state of cardiovascular health, and the data show stubbornly high prevalence of cardiovascular disease and climbing rates of death related to hypertension.
- “Iterative application of shorter NPIs [non-pharmaceutical interventions], interspersed with temporary relaxation, helped reset adherence, mitigating fatigue and sometimes even improving compliance. Psychological relief and a sense of regained autonomy during relaxation periods may renew public willingness to comply when restrictions are reintroduced. These findings emphasize the dual benefits of short, strategic NPIs for epidemic control and public resilience, offering actionable insights for designing more sustainable pandemic interventions.” A preprint article by Rikani and colleagues, available at medRxiv, presents findings from a study that examined strategies for improving compliance with non-pharmaceutical interventions (e.g., shutdowns, stay-at-home orders) during pandemics.
COMMUNICATIONS & POLICY
- “Significant rates of financial hardship exist among US health care workers. At least 1 measure of food insecurity was reported by 1 in 4 direct care/support workers and 1 in 10 health technologists/technicians; these rates exceed prior reports…Certain health care workers’ wages may not be sufficient to meet basic needs.” A troubling report published by Zhong and colleagues in JAMA documents day-to-day challenges to the U.S. clinical workforce including housing and food insecurity.
- “When older people struggle with daily activities because they have grown frail, because their chronic illnesses have mounted, or because they have lost a spouse or companion, most don’t want to move….That means they need home care, either from family and friends, paid caregivers, or both. But paid home care represents an especially strained sector of the long-term care system, which is experiencing an intensifying labor shortage even as an aging population creates surging demand.” At KFF Health News, Paula Span reports on the looming crisis in home health care amid an aging population – as well as some possible solutions.
- “The recent exceptional growth in the number of special issues has led to the largest delegation of editorial power in the history of scientific publishing. Has this power been used responsibly? In this article we provide the first systematic analysis of a particular form of abuse of power by guest editors: endogeny, the practice of publishing articles in one’s own special issue. While moderate levels of endogeny are common in special issues, excessive endogeny is a blatant case of scientific misconduct.” An analysis by Crosetto and colleagues, available as a tartly worded preprint at arXiv, reveals patterns of excessive “endogeny” in journal special issues – a form of publication that has proliferated in recent years.
- “The gradual adoption of digital tools over the ensuing decades, however, prompted some to speculate about what clinical skills mattered most in a world of ubiquitous point-of-care digital aids. The esteemed Duke University internist Eugene Stead, echoing informatics pioneer Octo Barnett, ventured that these tools would liberate physicians from memory-intensive practices of having to prepare ‘just in case’ for myriad clinical possibilities, instead allowing them to operate in a more process-oriented just-in-time mode.” A perspective published in NEJM AI last month by Andrew S. Lea examines the perennial issue of deskilling in the context of medical AI.
