AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

August 18, 2023

In this week’s Duke AI Health Friday Roundup: transparency for AI-generated content; a critical appraisal of large language models; reconsidering radiation therapy; the future of governance for health AI; sport supplements whiff on truth in labeling; electronic payment charges siphon money from healthcare; focusing on AI’s real dangers; investigation reveals trouble with ethical oversight at French institute; much more:

AI, STATISTICS & DATA SCIENCE

Picture of window with a view of distant city skyline, taken at some distance back in a darkened room.
Image credit: Ed Vázquez/Unsplash
  • “There’s no question that we need more transparency if we’re going to be able to differentiate between what is real and what is synthetic. Last month, the White House weighed in on how to do this, announcing that seven of the most prominent AI companies have committed to ‘develop robust technical measures to ensure that users know when content is AI-generated, such as watermarking.’ …Disclosure methods like watermarks are a good start. However, they’re complicated to put into practice, and they aren’t a quick fix.” MIT Technology Review’s Claire Leibowicz delves into the complications of identifying AI-generated content as a potential deluge of computer-generated fictions and deception threatens to erode trust in online information sources (a toy sketch of the statistical idea behind text watermarking appears after this list).
  • “We discuss 4 common claims about LLMs: that LLMs are robust…, that they systematically achieve state-of-the-art results…, that their performance is predominantly due to their scale…, and that they exhibit emergent properties…. By collecting existing evidence and counter-arguments, we aim to highlight some of the gaps and inconsistencies in our current knowledge and to help orient future work so as to address these gaps.” A review paper by Luccioni and Rogers, available as a preprint from arXiv, offers a deep dive into large language models, from definitions to benchmarking to capabilities (or lack thereof).
  • “Given technological advances and quality and transparency improvements prompted by recent U.S. federal regulation, CDS algorithms will increasingly be integrated into routine clinical care. We expect that, willingly or not, the current generation of trainees will use CDS algorithms regularly in their practice. This shift will bring powerful opportunities to improve care — but also drawbacks, if algorithms are relied on inappropriately. For CDS to do its job effectively, we need to train medical students in its use.” A perspective article published in the New England Journal of Medicine by Goodman and colleagues surveys the path ahead as algorithmic decision support makes increasing inroads into clinical care.
  • “Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test, in that they can fool a lot of people, at least for short conversations. In May, researchers at the company AI21 Labs in Tel Aviv, Israel, reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time…It’s the kind of game that researchers familiar with LLMs could probably still win, however.” In a feature article for Nature, Celeste Biever examines ongoing attempts to update the venerable Turing Test – a conceptual framework for assessing machine intelligence – for the modern AI age (H/T @AI_4_Healthcare). Meanwhile, at Science, Jack Stilgoe makes the case for a different conceptual approach to assessing AI capabilities.
  • “We conduct an experiment with professional radiologists that varies the availability of AI assistance and contextual information to study the effectiveness of human-AI collaboration and to investigate how to optimize it. Our findings reveal that (i) providing AI predictions does not uniformly increase diagnostic quality, and (ii) providing contextual information does increase quality. Radiologists do not fully capitalize on the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating.” A National Bureau of Economic Research working paper by Agarwal and colleagues investigates the merits of combining AI image recognition in radiology with human clinical expertise (a short worked example of the Bayesian benchmark follows the sketch below).
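
The Leibowicz piece above mentions watermarking as one of the disclosure methods AI companies have committed to. As a rough illustration of why detecting such marks is statistical rather than certain, here is a minimal Python sketch of a hypothetical “green list” text watermark detector, loosely modeled on schemes described in the research literature; the function names, parameters, and use of integer token IDs are all illustrative assumptions, not any company’s actual implementation:

```python
import hashlib
import math

def green_fraction(token_ids, vocab_size=50_000, green_ratio=0.5):
    """Score a sequence of integer token IDs against a hypothetical
    'green list' watermark: each previous token seeds a hash that
    partitions the vocabulary, and a watermarked generator would have
    preferentially sampled from the 'green' partition."""
    hits = 0
    for prev, cur in zip(token_ids, token_ids[1:]):
        seed = int(hashlib.sha256(str(prev).encode()).hexdigest(), 16)
        if (cur + seed) % vocab_size < green_ratio * vocab_size:
            hits += 1
    return hits / max(len(token_ids) - 1, 1)

def detection_z(frac, n, green_ratio=0.5):
    """How many standard deviations the observed green fraction (over
    n scored positions) sits above what unwatermarked text would
    produce by chance."""
    return (frac - green_ratio) * math.sqrt(n) / math.sqrt(green_ratio * (1 - green_ratio))

# Ordinary text scores near green_ratio (z near 0); text from a generator
# biased toward the green list scores well above it. Paraphrasing or
# editing the text erodes the signal, which is one reason watermarks
# "aren't a quick fix."
```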
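
And for the Agarwal working paper’s “benchmark Bayesian model,” a worked example (with illustrative numbers, not figures from the paper) shows what correct belief updating looks like, and why it is unintuitive when a condition is rare:

```python
def posterior(prior, sensitivity, specificity):
    """Bayes' rule: P(condition | positive finding)."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: a 1% pretest probability combined with a
# 90%-sensitive, 90%-specific signal yields a posterior of about 8.3%,
# far below the 90% a naive reading of "accuracy" might suggest.
print(posterior(prior=0.01, sensitivity=0.90, specificity=0.90))  # ~0.083
```

Deviations from this benchmark (for instance, over- or under-weighting an AI prediction relative to the prior) are the kind of departure the authors measure in radiologists’ actual behavior.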

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Photograph showing lower body and arms of a person crouching to lift heavy weights in a gym.
Image credit: Victor Freitas/Unsplash
  • “Seven of 57 products (12%) were found to contain at least 1 FDA-prohibited ingredient….Five different FDA-prohibited compounds were found, including 4 synthetic stimulants, 1,4-dimethylamylamine, deterenol, octodrine, oxilofrine, and omberacetam. Six products contained 1 of these prohibited ingredients, and 1 product contained 4 different prohibited ingredients….Eighty-nine percent of dietary supplement labels did not accurately declare the ingredients found in the products, and 12% of products contained FDA-prohibited ingredients.” A study published in JAMA Network Open by Cohen and colleagues suggests that some manufacturers of ostensibly “performance-enhancing” sports supplements could stand to seriously up their game.
  • “Radiation has also gone through a century of advancements, with early cancer radiotherapy beginning shortly after Wilhelm Röntgen discovered X-rays in 1895 and Marie and Pierre Curie discovered radium in 1898. Since then, radiation has evolved into treatments that include modern brachytherapy, inserting a radioactive source into a tumor, and focused beams of ionizing radiation that kill off cells while incurring as little off-target damage as possible. The new thinking about easing back on radiation goes beyond just easing back on the treatment, though.” STAT News’ Angus Chen reports on the evolving role of radiation therapy in cancer treatment.
  • “This First Nations led approach resulted in vaccine uptake of 90.3% for the first two doses in First Nations communities by 1 December 2022, an uptake higher than many other jurisdictions or countries…Critical success factors for this excellent vaccine uptake were trust building through clinical responsiveness in earlier phases of the pandemic and consistent culture and science based communications with First Nations people through multiple media platforms, partnerships with community leaders, and support from traditional healers and knowledge keepers for the vaccine…” In an opinion article published in BMJ, Anderson and MacKinnon examine lessons from Canadian First Nations communities’ response to the COVID pandemic.
  • “Researchers at the University of Illinois Urbana-Champaign are helping shift the focus to include mitigation of the chemicals, which the investigators say are just as persistent as their long-chain counterparts, more mobile and harder to remove from the environment….A study directed by chemical and biomolecular engineer Xiao Su uses electrosorption rather than filters and solvents and combines synthesis, separations testing and computer simulations to help design an electrode that can attract and capture a range of short-chain PFAS from environmental waters.” A news post at the National Science Foundation website has some potential good news for curbing pollution from PFAS compounds (often referred to as “forever chemicals”).

COMMUNICATION, HEALTH EQUITY & POLICY

Small, off-kilter robot sculpture made out of wood, junk, and found objects.
Image credit: Nina Mercado/Unsplash
  • “AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity (the property that a test measures what it purports to measure).” An opinion article by Emily Bender and Alex Hanna published in Scientific American implores regulators to focus on actual harms stemming from AI deployment, rather than fanciful apocalyptic scenarios touted by some in the industry.
  • “…policymakers should support standardized implementation of irreducible local governance. AI systems are not simply plug-and-play and are intricately intertwined with context-specific workflows and data streams. Their integration into care pathways requires careful attention and ongoing monitoring by local health entities, including input from local patients and other community stakeholders to ensure that altered care patterns reflect locally nuanced values…” A commentary published in Nature Machine Intelligence by Price and colleagues examines approaches to governance of medical AI and makes recommendations for future improvements.
  • “Insurers now routinely require doctors to kick back as much as 5% if they want to be paid electronically. Even when physicians ask to be paid by check, doctors say, insurers often resume the electronic payments — and the fees — against their wishes. Despite protests from doctors and hospitals, the insurers and their middlemen refuse to back down.” An investigation reported by Cezary Podkul, co-published via ProPublica and NPR, shines a light on electronic transaction fees applied to healthcare payments – and how they managed to become embedded in routine billing practices.
  • “In summary, among the studies we have investigated, 248 were conducted with the same ethics approval number, even though the subjects, samples, and countries of investigation were different. Thirty-nine (39) of the manuscripts we considered did not even contain a reference to the ethics approval number although they contained research on human beings.” A commentary by Frank and colleagues published in the BMC journal Research Integrity and Peer Review highlights concerning patterns related to research ethics approvals and oversight at France’s Institut Hospitalo-Universitaire Méditerranée Infection, whose then-director, Didier Raoult, was embroiled in controversies stemming from his touting of hydroxychloroquine as an effective COVID remedy.