AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
October 11, 2024
In this week’s Duke AI Health Friday Roundup: evaluation framework for LLMs in healthcare; chemistry, physics Nobels go to AI researchers; large proportions of scientists leaving at 5, 10-year marks; CAR T therapy piloted for autoimmune diseases; AI deployed for sentiment analysis of discussion about GLP-1 agonists; pragmatic clinical trial garners much more data from EHRs, claims than PROs; worries about buggy code being produced with AI-assisted processes; much more:
AI, STATISTICS & DATA SCIENCE
- “Despite recent enthusiasm on the potential of LLMs and GenAI in many healthcare systems, the inner workings of these models remain opaque, in other words, they are still ‘black boxes’. The articles we reviewed reveal that evaluations of these ‘black box’ models typically involve manual testing through human evaluation, which underscores a significant issue: the lack of traceability, reliability, and trust. Critical details such as the origins of the text sources, the reasoning processes within the generated text, and the reliability of the evidence for medical use are often not transparent.” A research article published by Tam and colleagues in npj Digital Medicine describes an evaluation framework for large language models used in healthcare applications.
- “These imbalances and poor reporting of representation raise concerns regarding potential embedded biases in the algorithms that rely on these datasets. They also underscore the need for universal and comprehensive reporting practices to ensure equitable development and deployment of artificial intelligence and machine learning tools in medicine.” A viewpoint article by Jiang and colleagues appearing in Lancet Digital Health raises concerns about possible biases lurking in the data collected in open-access biosignal repositories.
- “This study demonstrates the potential for AI to help elucidate patterns and themes among social media posts about medications, in this case, GLP-1RAs. Common themes included success stories of improving diabetes and obesity management, struggles with insurance coverage, and questions regarding diet, side effects, and medication administration. Several potential applications include identification of common barriers to medication use, questions and misconceptions that can be addressed in patient-clinician discussions and public health messaging, and the identification of groups who are interested in using these medications for off-label indications to improve their health that may warrant further study.” In a research article published in JACC Advances, Javaid and colleagues used a BERT-based AI approach to analyze sentiment in online discussions of GLP-1 receptor agonists over a ten-year period (a brief illustrative sketch of this kind of sentiment pipeline appears after this list).
- “She [AI researcher Melanie Mitchell] does not deny that AI might someday reach a similar level of intelligent understanding. But machine understanding may turn out to be different from human understanding. Nobody knows what sort of technology might achieve that understanding and what the nature of such understanding might be…If it does turn out to be anything like human understanding, it will probably not be based on LLMs.” An article in Science News by Tom Siegfried lays out the skeptical case that large language models are unlikely to be the path to genuine machine intelligence.
- “It is human nature to be tempted by an easier shortcut, particularly when under pressure by a manager or launch schedule, but putting full trust in AI can have an impact on the quality of code reviews and understanding how the code interacts with an application….But, despite the risk, engineering departments have not been deterred from AI coding tools, largely due to the efficiency benefits. A survey from Outsystems found that over 75% of software executives reduced their development time by up to a half thanks to AI-driven automation.” An article by TechRepublic’s Fiona Jackson examines whether AI-generated code is actually performing adequately in real-world settings (H/T @IrisVanRooj).
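For readers curious what the kind of sentiment analysis described in the Javaid item above might look like in practice, here is a minimal sketch. It is not the study’s code: it uses an off-the-shelf Hugging Face DistilBERT sentiment model as a stand-in for the paper’s BERT-based classifier, and the example posts are invented.

```python
# Minimal illustrative sketch only -- not the JACC Advances study's code.
# An off-the-shelf DistilBERT sentiment model stands in for the BERT-based
# classifier the authors describe; the posts below are invented examples.
from transformers import pipeline

posts = [
    "Three months in and my A1c is finally in range. This drug changed my life.",
    "Insurance denied coverage for the third time. So frustrating.",
    "Has anyone else had constant nausea during the first few weeks?",
]

# Load a general-purpose sentiment classifier from the Hugging Face Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Classify each post and print its predicted label with a confidence score.
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```

A real analysis of the kind the paper describes would also involve collecting and de-identifying posts, fine-tuning or validating the model against human-labeled examples, and aggregating sentiment by theme over time.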
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- As this year’s Nobel prizes are announced, the prize in physiology or medicine goes to the discoverers of microRNA’s role in gene regulation, but the physics prize unexpectedly goes to AI researchers Geoffrey Hinton and John Hopfield. Nature’s Elizabeth Gibney and Davide Castelvecchi explain: “Both used tools from physics to come up with methods that power artificial neural networks, which exploit brain-inspired, layered structures to learn abstract concepts. Their discoveries ‘form the building blocks of machine learning, that can aid humans in making faster and more-reliable decisions’, said Nobel committee chair Ellen Moons, a physicist at Karlstad University, Sweden, during the announcement.” And in a further nod to AI’s influence in basic science, the prize in chemistry was awarded to developers of AlphaFold and Rosetta, which use AI to predict protein folding and model new proteins, respectively.
- “The number of adults in England who vape but have never regularly smoked rose rapidly between 2021 and 2024, particularly in younger age groups and most of these individuals reported vaping regularly over a sustained period. The public health impacts of this finding will depend on what these people would otherwise be doing: it is likely that some might have smoked if vaping were not an available option (exposing them to more harm), whereas others might not have smoked or vaped.” A research article published by Jackson and colleagues in Lancet Public Health examines patterns of vaping uptake among English adults who have never smoked.
- “Researchers have produced the first wiring diagram for the whole brain of a fruit fly, a feat that promises to revolutionise the field of neuroscience and pave the way for unprecedented insights into how the brain produces behaviour…Rarely in science has so much effort been directed toward so little material, with scientists taking years to map the meanderings of all 139,255 neurons and the 50m connections bundled up inside the fly’s poppy seed-sized brain.” An article by the Guardian’s Ian Sample explains why the complete mapping of the fruit fly connectome – an undertaking assisted by AI technology – is such a potentially big deal.
- “One woman and two men with severe autoimmune conditions have gone into remission after being treated with bioengineered and CRISPR-modified immune cells. The three individuals from China are the first people with autoimmune disorders to be treated with engineered immune cells created from donor cells, rather than ones collected from their own bodies. This advance is the first step towards mass production of such therapies.” Nature’s Smriti Mallapaty reports on pioneering applications of CAR T therapy to address autoimmune diseases.
- “…we examined the relative data source contribution for various clinical end points in a large pragmatic trial. We demonstrated that (1) claims and EHR data contributed 92% to 100% of the composite end point and secondary end point events among participants with EHR and claims data; (2) this trend was consistent among older participants (≥65 years); and (3) for participants with available EHR and claims, patient-reported data contributed relatively little in addition to the other event sources among patients with available EHR and claims data. Lastly, EHR data add little to capturing all-cause death when there are available claims data.” A research article published in JAMA Cardiology by Rymer and colleagues examines the proportional contributions of different data sources – including claims data, EHRs, and patient-reported data – to endpoint ascertainment in the ADAPTABLE pragmatic trial of aspirin dosing (a small illustrative sketch of this kind of source tally appears below).
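As a companion to the ADAPTABLE item above, here is a small sketch of the kind of tally involved in attributing endpoint events to data sources. The event records and column names are invented for illustration; this is not the study’s analysis code.

```python
# Illustrative sketch only -- not the ADAPTABLE analysis. Each row is a
# hypothetical endpoint event, flagged by which source(s) captured it.
import pandas as pd

events = pd.DataFrame(
    {
        "event_id": [1, 2, 3, 4, 5],
        "claims": [True, True, False, True, True],
        "ehr": [True, False, True, True, False],
        "patient_reported": [False, False, False, True, False],
    }
)

# Share of all events captured by each source (sources can overlap).
for source in ["claims", "ehr", "patient_reported"]:
    pct = 100 * events[source].mean()
    print(f"{source}: captured {pct:.0f}% of events")

# Marginal contribution of patient report: events captured *only* by
# patient report and missed by both claims and EHR.
only_pro = events["patient_reported"] & ~(events["claims"] | events["ehr"])
print(f"patient-reported only: {100 * only_pro.mean():.0f}% of events")
```

The marginal-contribution line is the key idea behind the study’s finding: once claims and EHR data are in hand, the share of events that only patient reports would have caught turns out to be small.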
COMMUNICATION, HEALTH EQUITY & POLICY
- “Two decades earlier, when I was just starting out as a geophysical sciences major in college, an encounter with a curious shark marked the beginning of a lifelong respect and love for open-water swimming, and perhaps hazardous activities in general. I was treading water in the deep ocean with a friend when a large shark materialized out of the blue nothingness, heading straight toward us. As it approached, I watched helplessly as my friend splashed the water in front of it….I felt small and afraid, like a tourist who didn’t belong….The same feeling was familiar from my early-career experience as a scientist.” An essay in Science by Virginia Manner draws some career development lessons from an alarming encounter with one of the ocean’s toothier residents.
- “The study found that, within five years, one-third of all scientists in the 2000 group had stopped publishing. This rose to about half within ten years and to nearly two-thirds by 2019 (see ‘Academic exodus’). Women were around 12% more likely than men to have left science after five or ten years. By 2019, only 29% of women in the group were still publishing, compared with nearly 34% of men.” Nature’s Miryam Naddaf reports on a recent study that suggests steep rates of career attrition among academic scientists.
- “With this new project, PLOS has set itself the ambitious goal of seeding transformational (my highlight) change in scholarly publishing: overcoming two of the big barriers that currently exclude many researchers from participating in Open Science — the lack of recognition for Open Science contributions, and the lack of affordability — by thinking beyond the article and beyond the Article Processing Charge (APC).” At Scholarly Kitchen, MoreBrains Cooperative co-founder Alice Meadows scrutinizes open-access pioneer PLOS’ next move in academic publishing.