AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

May 10, 2024

In this week’s Duke AI Health Friday Roundup: lessons for health AI from self-driving vehicles; monoclonal antibody for malaria prevention; intense pressures on foreign residency applicants; large language models offer second opinions; poor quality dogs some patient-facing cancer materials; PCAST releases report on AI for science and research; mapping patterns of research misconduct in the literature; new improvements to AlphaFold debut; much more:

AI, STATISTICS & DATA SCIENCE

Closeup photograph of a car’s backlit dashboard tachometer, with the indicator needle sitting just above the idle mark between 0 and 1 x1000 RPM. Image credit: Chris Liverani/Unsplash
  • “When most people think of AI, whether vehicles or health care, they think of fully replacing the driver or fully bypassing the doctor. While there are many good reasons to completely replace the driver for transportation, this thinking is counterproductive in health care. We must learn from the challenges and many steps in deploying ‘fully autonomous’ AI in other fields. Moreover, we must recognize that in health care there are distinct advantages of augmentation over complete automation.” In an article published in NEJM Catalyst, Norden and Shah explore how lessons emerging from efforts to build self-driving vehicles can be applied to healthcare AI.
  • “The latest version of AlphaFold, described on 8 May in Nature, aims to do just that — by giving scientists the ability to predict the structures of proteins during interactions with other molecules. But whereas DeepMind made the 2021 version of the tool freely available to researchers without restriction, AlphaFold3 is limited to non-commercial use through a DeepMind website.” Nature’s Ewen Callaway covers a recent announcement of improvements to DeepMind’s AlphaFold tool for modeling and predicting protein structures.
  • “Building a second opinion system powered by a large language model is no longer in the realm of science fiction. As a physician treating patients (A.R.) and a medical AI researcher (A.M.), we envision a system that allows a treating physician, using the electronic medical record, to place an ‘order.’ But instead of selecting a diagnostic test, the physician would summarize the clinical question about a patient the same way they would talk to a colleague.” In an opinion article published in STAT News, Rodman and Manrai envision a place for large language models as a resource for second opinions in the clinic.
  • “Our findings suggest that the incorporation of LLMs into the ED clinical workflow could offer a significant opportunity to provide triage acuity assignments that are on par with existing practices. Overall, the LLM’s only significant performance weakness was in distinguishing patients assigned a less urgent vs nonurgent acuity, which is unlikely to have significant clinical consequences. In addition, this performance was achieved despite providing only patients’ clinical history to the LLM, omitting the vital signs and other physical examination findings that may be available to triage clinicians on initial evaluation.” A research article published in JAMA Network Open by Williams and colleagues describes the use of a large language model for triaging patients in an emergency department.
  • “What we found should alarm anyone who cares about a trustworthy and ethical media industry. Basically, AdVon engages in what Google calls ‘site reputation abuse’: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like ‘best ab roller.’” A recently published exposé of the use of AI-written text (with AI-generated personae for the “authors”) in Sports Illustrated was apparently just the tip of the iceberg, as Futurism’s Maggie Harrison Dupré reports.
  • “AI will fundamentally transform the way we do science. Researchers in many fields are already employing AI to identify new solutions to a wide array of long-standing problems. Today, scientists and engineers are using AI to envision, predictively design, and create novel materials and therapeutic drugs. In the near future, AI will enable unprecedented advances in the social sciences, both through new methods of analyzing existing data and the development and analysis of new kinds of anonymized and validated data. Such advances will allow government to better understand how policies affect the American people, and improve those policies to better meet societal needs and challenges.” The President’s Council of Advisors on Science and Technology (PCAST) has recently released a report on the potential for integrating AI into science and research.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Early 20th-century morphological drawing of a female Anopheles mosquito, with labeled body parts, by D.A. Turkhud. Image via Wikimedia Commons.
  • “We found that a single subcutaneous dose of L9LS provided protective efficacy of up to 70% against falciparum infection and of up to 77% against clinical malaria in children 6 to 10 years of age over a 6-month malaria season, during which 81% of the participants in the placebo group became infected with P. falciparum and 59% had clinical malaria.” In a research article published in the New England Journal of Medicine, Kayentao and colleagues present results from a phase 2 study of a monoclonal antibody for malaria prevention.
  • “We found that liquid biopsy videos and web pages had extensive shortcomings as sources of patient health information. Consistent with previous research assessing cancer-related online materials, most patients need help understanding online content about liquid biopsies and may be more confused after searching online. Health literacy plays an important role in patient experiences and outcomes. Patients with cancer often report knowledge gaps about their care and may be unable to clarify their concerns with clinicians, necessitating accessible resources to make informed decisions.” A study published in JAMA Network Open by Litt and colleagues presents findings from an evaluation of the quality of public-facing materials about liquid biopsies in oncology.
  • “Substantial geographic disparities in cancer clinical trials availability exist throughout the United States, with the most socially vulnerable counties being far less likely to have any trial and having only a fraction of trials available, a disparity that has worsened over time. This study contributes new perspectives to the role of SDOH in disparities in clinical trial participation by exploring community-level measures of SDOH via SVI, providing a national-level analysis, and demonstrating trends over the past 15 years.” Also in JAMA Network Open: a new study by Sekar and colleagues confirms previous patterns linking the availability (or lack of availability) of cancer clinical trials with social determinants of health (SDOH).

COMMUNICATION, HEALTH EQUITY & POLICY

Reams of white paper in chaotic heaps and folds, viewed from edge-on. Image credit: JJ Ying/Unsplash
  • “The Express Research Workshop belongs to a growing cottage industry of businesses, consultants, and nonprofits dangling seemingly easy publications for the more than 12,000 international medical graduates who apply for U.S. residency positions every year. A joint investigation by Retraction Watch and Science identified 24 such organizations across the U.S. and abroad. The programs likely have spawned thousands of publications—most of them full-length, peer-reviewed papers.” An investigative article appearing in Science by Frederik Joelving and Retraction Watch explores the murky territory of services that promise to pad the CVs of foreign applicants for medical residency slots with publication credits.
  • For additional context on the story above, see this recent essay in STAT News by Anmol Shrestha: “…the increased emphasis on publication has led to a push for medical students to produce junk studies — studies that often are never cited again by other researchers. I have had several classmates offer me authorship on papers that would have little scientific impact as long as I contributed a few paragraphs and a share of the money for a ‘pay-to-publish’ journal. Such junk studies were guaranteed to add a line on our resumes, but would do little to add to our understanding of the world of medicine.”
  • “Although retraction is a scientific self-correction mechanism, the recent surge in the number of retractions is a concerning trend. This study reveals the widespread occurrence of academic misconduct across various topics, but the severity of misconduct varies significantly among them. Through the AMR [academic misconduct retraction] index, it was discovered that certain topics face particularly severe issues of academic misconduct, emphasizing the urgent need for increased attention and efforts to address these problems. The emergence of large-scale fraud has further complicated and obscured the issue of academic misconduct.” An editorial published in The Innovation by Li and Shen provides a map revealing patterns of scientific retractions and misconduct, broken down by field.