AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

September 13, 2024

In this week’s Duke AI Health Friday Roundup: real-time AI decision support for surgeons; how patient self-advocacy affects medical billing; chatbots for cancer genetic counseling; common dye renders living mice transparent; longitudinal modeling of blood flow to enable digital twins; impact of vaping on exercise tolerance; the importance of clinical validation for AI tools; reexamining race-based clinical algorithms; frontier model LLMs to aid human forecasting; much more:

AI, STATISTICS & DATA SCIENCE

This picture is made up of 9 images in rows of 3. Each row shows a different image of a pill bottle spilling out pills onto a plain surface, on yellow or white backgrounds. On one side, the image is an original photograph. The next two iterations show it getting represented in progressively larger blocks of colour.
Image credit: Rens Dimmendaal & Banjong Raksaphakdee / Better Images of AI / Medicines / CC-BY 4.0
  • “…we presented a framework and methodology for bringing surgical AI models to end-users via a web platform….The web platform is compatible with, and can be accessed from, a wide range of pervasive devices, such as laptops, mobile devices and tablets. By using lightweight model architectures and a highly optimized data and network pipeline, we were able to achieve a high frame-rate prediction stream with low round-trip delay even for low internet speeds…” A study by Protserov and colleagues, published in NPJ Digital Medicine, describes the development and testing of AI-assisted real-time decision support for surgical procedures – one that doesn’t depend on access to high-performance computing and internet resources.
  • “Our preregistered analyses show that interacting with each of our frontier LLM assistants significantly enhances prediction accuracy by between 24 percent and 28 percent compared to the control group. Exploratory analyses showed a pronounced outlier effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 41 percent, compared with 29 percent for the noisy assistant. We further examine whether LLM forecasting augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our data do not consistently support these hypotheses.” A research paper by Schoenegger and colleagues, available as a preprint from arXiv, presents findings from a study that evaluated the use of frontier model LLMs as an assistive resource for improving accuracy of human forecasting (H/T @emollick).
  • “…trial findings suggested equivalence between these genetic services delivery models for the primary outcomes of uptake of pretest cancer genetic services and genetic testing and for the secondary outcome of beginning pretest genetic services, although statistically significantly more patients in the SOC [standard of care] group ordered genetic testing. The equivalence findings have important implications for clinical practice because chatbot approaches are supported to offer pretest cancer genetic services and genetic testing after outreach to unaffected patients eligible for genetic evaluation, providing a way to meet the rapidly increasing demand for these services.” In an article published this week in JAMA Network Open, Kaphingst and colleagues compare the performance of a chatbot with standard approaches for conveying information to patients about cancer genetics counseling services in a randomized trial.
  • “…a purely technological solution to a sociotechnical issue is not the right solution in healthcare. Although it is not possible to remove all biases or errors from the AI models we train, models we develop should collect diverse data on important problems, train robust models, validate their performance and establish appropriate audits in deployed settings so that errors are detected before a model causes harm.” In a commentary article for Nature Reviews Cancer, MIT’s Marzyeh Ghassemi lays out several key steps for avoiding or mitigating bias in AI technologies applied to cancer therapeutics.
  • “Machine learning enhanced the accuracy of electronic triggers to identify MODs. This ML enhancement could advance an organization’s ability to monitor diagnostic errors for research, learning, and quality improvement. Moreover, it substantially reduces the burden of clinician-dependent manual medical record review…Next steps include incorporating clinical note text as a source of missed opportunity prediction to leverage the rich clinical data needed to determine MODs, increasing the number of expert-labeled records on which the approach is tested, and validation in an external, independent population.” A research letter by Zimolzak and colleagues, appearing this week in JAMA Network Open, evaluates the application of machine learning to improve detection of errors in medical diagnosis.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Photograph showing a tabletop rack of vape pens with barcodes affixed as part of a laboratory testing process. Image credit: Centers for Disease Control and Prevention
Image credit: Centers for Disease Control and Prevention
  • “Both vapers and smokers showed signs that their blood vessels were not working as well as the non-smoking and non-vaping group, according to the blood tests and ultrasound scans. The smokers and the vapers were more out of breath, experienced intense leg fatigue and had higher levels of lactate in their blood, a sign of muscle fatigue, even before they reached their maximum level of exercise.” Research presented by Faisal and colleagues at this year’s European Respiratory Society conference suggests that vaping diminishes exercise capacity in young people to a similar degree to that seen in smokers.
  • Talk about your glass menagerie: a research article published in Science by Ou and colleagues describes the use of common optical dyes (including one widely used in food coloring) that preferentially absorb shorter wavelengths of light to render living mice transparent when applied to their skin: “The authors show that the addition of common dye molecules that absorb in the near ultraviolet and blue regions improve optical transparency in nearby longer wavelengths. In essence, by causing sharp absorption in the blue region, the refractive index in the red part of the spectrum is increased without increasing absorption. The addition of tartrazine was able to make the skin of a live rodent temporarily transparent.”
  • “…our findings indicate that adults with overweight or obesity who exercised regularly for at least a few years exhibit structural and proteomic remodelling in aSAT [abdominal subcutaneous adipose tissue], as evidenced by higher capillarization, altered ECM [extracellular matrix] contents, fewer ATM [adipose tissue macrophage markers], upregulated proteins and phosphoproteins involved in metabolism (lipogenesis, fat storage and release, and oxidative phosphorylation), protein translation and post-transcriptional modifications. Moreover, our ex vivo experiments suggest an enhanced capacity for angiogenesis and lipid storage in exercisers.” A research article published in Nature Metabolism by Ahn and colleagues suggests that long-term endurance exercise produces physiological differences in the adipose tissue of overweight or obese adults, which may in turn have implications for overall cardiometabolic health (H/T @EricTopol).
  • “This study presented the first LHM [longitudinal hemodynamic mapping] of WSS [wall shear stress] that spans six weeks of activity. Compared to single-heartbeat WSS maps in varying activity states, we demonstrated in a patient-specific case that LHMs provide additional hemodynamic information that could not be captured by established methods.” In an article published last week in NPJ Digital Medicine, Tanade and colleagues describe a study that evaluated an approach to using data collected from wearable devices to continuously model the flow of blood in a patient’s cardiovascular system.

COMMUNICATION, HEALTH EQUITY & POLICY

Lighted candle lantern with a heart shaped window showing the candle flame. Image credit: Cathal Mac an Bheatha/Unsplash
Image credit: Cathal Mac an Bheatha/Unsplash
  • “Wright knew well that Black patients are at higher risk for heart disease and stroke, and about 30% more likely to die from heart disease than white patients. That’s why the calculator had included his race — along with his age, cholesterol, and blood pressure among other traits — to predict his risk. But he also knew — better than most — that there was nothing inherent to his physiology as a Black man that easily explained that higher risk.” An article by STAT News’ Katie Palmer (log-in required) examines how cardiologists began to question whether race-based prediction was accomplishing what it was meant to – and how those questions sparked a re-examination of ways to account for and overcome bias in clinical algorithms.
  • “As AI devices rapidly enter patient care, patients and the public need to know which devices are safe and effective. It is often thought that AI devices supported by human oversight have generally low risk, but this cannot be guaranteed without clinical testing. Even a minor change to an algorithm or clinical workflow could disrupt implementation in patient care, and AI devices can come with documented risks, including user errors.” A commentary in Nature Medicine by El Fassi and colleagues suggests that clinical validation of AI health tools (and access to the underlying data demonstrating it) is a necessary component for trust in AI-based applications approved for use in healthcare.
  • “This cross-sectional survey of a representative sample of patients in the US found that most respondents who self-advocated achieved bill corrections and payment relief. Differences in self-advocacy may be exacerbating socioeconomic inequalities in medical debt burden, as those with less education, lower financial literacy, and the uninsured were less likely to self-advocate.” A research article published in JAMA Health Forum by Duffy, Frasco, and Trish examines the effects of patient self-advocacy in the face of burdensome or erroneous medical bills.