AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

July 19, 2024

In this week’s Duke AI Health Friday Roundup: language completeness and LLM capabilities; open foundation models, risk, and the value chain; CAR-T for autoimmune diseases; editing the gut microbiome in vivo; international medical graduates face roadblocks in US; what should publishers do when a paper is retracted?; nurses “collaborate” with LLMs; the role of academic medical centers in health AI adoption and oversight; Human Genome Project under the microscope; much more:

AI, STATISTICS & DATA SCIENCE

Selective focus photograph of an open dictionary, with some of the definitions legible. Image credit: Joshua Hoehne/Unsplash
  • “Language completeness assumes that a distinct and complete thing such as ‘a natural language’ exists, the essential characteristics of which can be effectively and comprehensively modelled by an LLM. The assumption of data completeness relies on the belief that a language can be quantified and wholly captured by data. Work within the enactive approach to cognitive science makes clear that, rather than a distinct and complete thing, language is a means or way of acting. Languaging is not the kind of thing that can admit of a complete or comprehensive modelling. From an enactive perspective we identify three key characteristics of enacted language; embodiment, participation, and precariousness, that are absent in LLMs, and likely incompatible in principle with current architectures.” A preprint by Birhane and McGann, available from arXiv, critically examines claims and suppositions surrounding the linguistic capabilities of large language model AIs.
  • “Understanding the generative AI value chain is crucial for developing effective strategies to govern open models. Mapping out the various stages of the AI value chain helps us pinpoint where interventions can make a difference, and what risk mitigation strategies various actors can enact. This understanding can also help us explore new approaches for releasing future cutting-edge models, such as staged and component releases, to balance the benefits of openness with the need for responsible use and monitoring.” A report developed by the Partnership on AI in collaboration with GitHub addresses how organizations that adopt open foundation models (i.e., multi-purpose generative AIs whose model weights are available to anyone) can manage the risks that accompany the use of these technologies (some additional commentary here by GitHub’s Peter Cihon).
  • “…current communication systems rely mainly on human efforts, which are both labor and knowledge intensive. A promising alternative is to leverage the capabilities of large language models (LLMs) to assist the communication in medical center reception sites. Here we curated a unique dataset comprising 35,418 cases of real-world conversation audio corpus between outpatients and receptionist nurses from 10 reception sites across two medical centers, to develop a site-specific prompt engineering chatbot (SSPEC).” A study by Wan and colleagues, published in Nature Medicine, reports findings from a patient-satisfaction study that evaluated nurses’ use of large language models when communicating with patients.
  • “…as the pace of AI development and uptake accelerates, these technologies—many of them relatively untested—present challenges for the national health care enterprise. Academic medical centers (AMCs) are struggling to keep up with the breakneck pace of change and navigate an ever-increasing resource mismatch between academia and industry. At the same time, AI tools have the potential to enhance the success of our core missions while mitigating this resource mismatch. AMCs have an unprecedented opportunity to forge thoughtful partnerships and assume a leadership role in the responsible implementation of these new, powerful tools.” An invited commentary by Duke School of Medicine Dean Mary Klotman and Duke AI Health Director Michael Pencina, appearing in a theme issue of the North Carolina Medical Journal, addresses the pivotal role that academic medical centers are poised to play in the adoption of health AI technologies.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

DNA Genotyping and Sequencing. Vials containing DNA samples from studies of the genetic risk for cancer at the Cancer Genomics Research Laboratory, part of the National Cancer Institute's Division of Cancer Epidemiology and Genetics (DCEG). Image credit: National Cancer Institute
  • “…the volunteers were also informed that measures had been put in place to protect them: They would remain anonymous, and to minimize the chances that any one of them could be identified based on their unique genetic sequence, the published genome would be a patchwork, derived not from one person but stitched together from the DNA of a large number of volunteers…Soon, however, those assurances began to wither. When a much-celebrated working draft of the human genome was published in 2001, the vast majority of it — nearly 75 percent — came from just one Roswell Park volunteer, an anonymous male donor known as RP11.” A remarkable report by Ashley Smart, co-produced by STAT News and Undark, pieces together the history of the Human Genome Project to illuminate how, despite explicit plans to the contrary, the project’s output was dominated by the genetic material of a single donor.
  • “CAR-T therapies are living drugs. To make them, T cells are generally removed from a person and genetically engineered to produce chimeric antigen receptors (CARs) that recognize a specific target. Once reinfused, they seek and destroy their target. So far, CAR-T cells have proved successful in treating blood cancers by destroying pathogenic B cells causing leukemias. Because B cells also drive autoimmune conditions, wiping out B cells has potential for treating these diseases too.” A news article in Nature Biotechnology by Charlotte Harrison describes recent forays into the use of CAR-T therapy for autoimmune diseases.
  • “This base-editing system represents a ‘critical leap forward’ in developing tools that can modify bacteria directly inside the gut, says Chase Beisel, a chemical engineer at the Helmholtz Institute for RNA-based Infection Research in Würzburg, Germany. The study ‘opens the possibility of editing microbes to combat disease, all while preventing the engineered DNA from spreading’, he adds.” Nature’s Gemma Conroy reports on a successful attempt at using CRISPR-Cas editing techniques to modify genes in gut bacteria inside a living mouse.
  • “In 2022, my investigation in Science showed evidence that the famous 2006 experiment that helped push forward the amyloid hypothesis used falsified data. On June 24, after most of its authors conceded technical images were doctored, the paper was finally retracted. Days later, a City University of New York scientist behind a well-financed, controversial Alzheimer’s drug was indicted on charges alleging research fraud….Such cases are extreme. Yet few of the multitude of honest Alzheimer’s papers offer much hope to patients.” A guest essay in the New York Times by Science writer Charles Piller takes a critical perspective on the dominance of the “amyloid hypothesis” in Alzheimer disease research – a dominance that has come under heightened scrutiny in the wake of allegations of scientific misconduct in key publications.

COMMUNICATION, HEALTH EQUITY & POLICY

Stop sign with “all way” sign hanging crooked underneath, against a dark background and fluorescent lighting. Image credit: Jake Allen/Unsplash
  • “Barriers to entry and restricted opportunities for career development risk forcing IMGs to reconsider the US as a viable destination; these candidates are increasingly being hired in countries with comparable health care systems that recognize their value. The persistence of these barriers is artifactual, historical, and no longer serves the interest of the US. Bottlenecks at each stage of IMG integration require different levels of intervention. The hesitations of program directors and employers to recruit IMGs could be assuaged by increasing awareness of ECFMG certification and global medical school accreditation, IMG performance in clinical practice, and the added value they can bring in terms of research and prior training.” A special communication published in JAMA by McElvaney and McMahon addresses the difficulties faced by international medical graduates (IMGs) attempting to navigate employment in the US health system.
  • “Regarding the identification of misinformation, individuals with high critical thinking attitudes (subjective literacy) are less likely to recognize misinformation, while other objective literacies do not have a significant relationship. Regarding dissemination behavior, individuals with high information literacy, media literacy, and critical thinking scores tend not to disseminate misinformation, whereas those with high critical thinking attitudes are more likely to disseminate such information.” A preprint article by Yamaguchi and colleagues, available from SSRN, examines the relationships between different kinds of information literacy, the ability to identify misinformation, and the propensity to share it.
  • “[T]he new NISO Recommended Practice begins by setting forth a consistent set of terminology that is based on existing work by COPE and other industry groups. Building upon this terminology, the working group developed a set of consistent display and naming protocols for how retracted works should be presented to readers. The set of recommendations provides guidance on mechanisms for distributing retraction-related metadata and outlines the publisher’s responsibilities for metadata and associated actions.” A guest post by Todd Carpenter at Scholarly Kitchen describes a recently released set of best practices from the National Information Standards Organization (NISO) for managing publication retractions.