AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

January 5, 2024

In this week’s Duke AI Health Friday Roundup: chatbots and Borgesian Babel; dogs are good for your health; chatbot errs in diagnosing pediatric conditions; assurance labs for health AI; digital apps for contact tracing; “Coscientist” AI shows research chops; health impacts motivate people to address racial disparities; new class of antibiotics debuts against resistant A. baumannii; wearables for depressive disorders; meeting a new paradigm for data sharing; much more:

AI, STATISTICS & DATA SCIENCE

A baby sitting on an exam-room table is being examined by a female doctor with her back turned to the camera. The doctor is listening to the baby’s heart with a stethoscope; the baby is curious about the stethoscope and is grasping it with one hand. A bright rainbow-and-ocean-themed wallpaper runner is in the background, along with a rack for oto- and laryngoscopes. Image credit: Centers for Disease Control and Prevention
  • “The chatbot had a diagnostic error rate of 83% (83 of 100). Among the incorrect diagnoses, 72% (72 of 100) were incorrect and 11% (11 of 100) were clinically related but too broad to be considered a correct diagnosis…Despite the high error rate of the chatbot, physicians should continue to investigate the applications of LLMs to medicine. LLMs and chatbots have potential as an administrative tool for physicians, demonstrating proficiency in writing research articles and generating patient instructions…However, the underwhelming diagnostic performance of the chatbot observed in this study underscores the invaluable role that clinical experience holds.” A research letter published this week in JAMA Pediatrics by Barile and colleagues examines the diagnostic chops of large language models as applied to pediatric case studies (H/T @EricTopol).
  • “As the technology for building models becomes widely available and community consensus on how to evaluate their performance emerges, the rationale for ‘a lab for testing’ to ensure model credibility as well as accountability is increasing. A public-private partnership to launch a nationwide network of health AI assurance labs could promote transparent, reliable, and credible health AI.” A JAMA Special Communication coauthored by leading AI experts involved with the Coalition for Health AI presents the case for AI “assurance labs.”
  • “Researchers have used ChatGPT, and the broader technology known as generative AI, to brainstorm research ideas, create computer code and even write entire research papers….But not all scientists are embracing the technology. According to a survey carried out by Nature, about 78% of researchers do not regularly use generative AI tools such as ChatGPT. Of those that do, many have used it only for fun activities not related to their research, or as an experiment. Some have chosen to steer clear of chatbots because of the potential pitfalls and limitations. Others fear that they are missing out.” Nature’s Carissa Wong talks with scientists about their feelings about incorporating ChatGPT into the scientific process.
  • “In less time than it will take you to read this article, an artificial intelligence-driven system was able to autonomously learn about certain Nobel Prize-winning chemical reactions and design a successful laboratory procedure to make them. The AI did all that in just a few minutes — and nailed it on the first try.” A post at the National Science Foundation website by Jason Stoughton introduces readers to “Coscientist,” a modular AI comprising elements of several large language models designed specifically for use in scientific research.
  • “Traditional clinical assessments depend on patient recall. Although such recall can include important factors that wearable technology (often termed ‘wearables’) do not detect, such as patients’ reports of distress, the assessments by wearables of longitudinal data from daily life may augment methods of monitoring and treating depression, providing objective complements to subjective information from patients.” A review article published in the New England Journal of Medicine by Fedor and colleagues surveys the landscape of wearable technologies designed to help diagnose, monitor, and/or treat aspects of depressive disorders, and catalogs some of the challenges associated with using the data gathered from such devices.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

A man is sitting on a couch, reading a book with a dog on the dust jacket to his dog, a Sheltie, who is sitting on the couch next to him and appears to be following along on the page. Image credit: Ken Foreman/Unsplash
  • “…this was a real question, right? Why? If it isn’t the physical activity, why is it that you live longer? And in fact, if you look across all of these different studies of the way that our physical health is improved by having dogs, one of the themes that emerges is something that we actually already knew from psychology, which is if you have a really robust system of social support, all of your health markers tend to be better like that.” Scientific American’s Andrea Thompson interviews computer scientist Jen Golbeck about the “surprising” [Ed: we are not surprised] benefits of having a dog.
  • “Here we report the identification and optimization of a structurally novel antibiotic class, tethered MCPs, culminating in the selection of a clinical candidate, zosurabalpin. We further identify the lipopolysaccharide (LPS) transport machinery as an unprecedented antibiotic target for MCPs in Acinetobacter. The in vitro antibacterial and pharmacokinetic properties of zosurabalpin translated into potent in vivo efficacy in animal models of infection, including infections caused by pan-drug resistant strains of A. baumannii.” A research article published in Nature by Zampaloni and colleagues describes the discovery and testing of a new class of antibiotics for the treatment of resistant “superbug” strains of the bacterium Acinetobacter baumannii.
  • “Digital contact tracing thus has its place in the toolkit of non-pharmaceutical interventions for future pandemics, and should be part of pandemic preparedness plans. Proximity estimation will probably be improved as smartphones move to using other types of radio technology for signalling, such as ultra-wideband, which enables the distances between devices to be measured more accurately than does Bluetooth. Future smartphones might also be able to take into account other factors that affect the probability of disease transmission, such as being indoors or outdoors.” A Nature News and Views article by Justus Benzler examines the use of a smartphone contact-tracing app to predict the risk of COVID transmission.
  • “…domains of social inequality are clearly highly interrelated; disparities in one domain may coincide with or even generate disparities in other domains. Consider the issue of underfunded public schools in majority-Black US neighborhoods: An economic issue—lack of public funding—contributes to schools providing nutrient-deficient lunches disproportionately to Black children. Despite their interconnected nature, our experiments demonstrate that highlighting the health consequences, in particular, will likely garner more support to address the issue than if the economic precursor is made salient.” A study published in Science by Brown and colleagues finds that underscoring the health impact of racial disparities is particularly salient when trying to gather support for efforts to correct those disparities.

COMMUNICATION, HEALTH EQUITY & POLICY

A photograph taken from above of a person in a yellow jacket wandering through an outdoor labyrinth, leaving tracks in a light dusting of snow. Image credit: Dan Asaki/Unsplash.
  • DATELINE: TLÖN, UQBAR, ORBIS TERTIUS: “The tragedy suffered by the librarians of Babel is that they are so tantalized by the certainty that anything they could possibly hope to read exists somewhere in the Library that they have oriented their society around attempting to find the right books, not realizing that there is so much text with the aesthetic appearance of truth that the books themselves will not serve their goals. As social, cultural, and individual relationships with generative AI tools continue changing rapidly, we should be careful to avoid the same tragedy.” At the Scholarly Kitchen, Isaac Wink compares the sometimes-hallucinated output of LLMs to Borges’ infinite (and inscrutable) “Library of Babel.”
  • “Health insurance giant UnitedHealth Group used secret rules to restrict access to rehabilitation care requested by specific groups of seriously ill patients, including those who lived in nursing homes or suffered from cognitive impairment, according to internal documents obtained by STAT…The documents, which outline parameters for the clinicians who initially review referrals for rehab care, reveal that many patients enrolled in Medicare Advantage plans were routed for a quick denial based on criteria neither they, nor their doctors, were aware of.” STAT News’ Bob Herman and Casey Ross report (log-in required) on efforts by an insurer to block access to rehabilitative services by some groups of patients.
  • A conversation between JAMA editor Kirsten Bibbins-Domingo and professor and AI expert Alondra Nelson, available from JAMA’s YouTube channel, explores the implications of the recent Executive Order on AI, particularly with regard to privacy and equity issues.
  • “Errors happen because law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails. Often the data sets that drive the technologies are infected with errors and racial bias. Typically, the officers or agencies face no consequences for false arrests, increasing the likelihood they will continue.” A perspective by Joy Buolamwini and Barry Friedman, published today in the New York Times, addresses potential problems arising from the use of AI in police surveillance and recommends approaches to federal governance of such tools in light of a recent White House OMB proposed guidance.
  • “There are clear opportunities ahead, but there is a need for a path forward to guide researchers. If these efforts are successful, every publicly funded project will have two equally important goals: first, to accomplish its research aims of collecting and analyzing data and reporting results to advance science, and second, to produce data that other investigators can use to replicate findings and produce new insights…” In a perspective article published last week in the New England Journal of Medicine, Ross and colleagues hail the potential benefits – as well as accompanying challenges – likely to ensue from recent changes to federal data-sharing mandates.