AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

September 1, 2023

In this week’s Duke AI Health Friday Roundup: reinforcement learning to align LLMs with human preferences; modeling T cell exhaustion; examining clearance lineages of AI medical devices; writing as medicine for docs; healthcare needs more than current foundation models; watermarking images to spot AI influence; semaglutide tested in heart failure; NCSU researchers automate dragnet for fraudulent robocalls; and much more:

AI, STATISTICS & DATA SCIENCE

Image: nine small schematic representations of differently shaped neural networks, with a human hand making a different gesture placed behind each network. Image credit: Alexa Steinbrück / Better Images of AI / CC-BY 4.0
  • “We propose a simple algorithm for aligning LLMs with human preferences inspired by growing batch reinforcement learning (RL), which we call Reinforced Self-Training (ReST). Given an initial LLM policy, ReST produces a dataset by generating samples from the policy, which are then used to improve the LLM policy using offline RL algorithms. ReST is more efficient than typical online RLHF methods because the training dataset is produced offline, which allows data reuse…. Our results show that ReST can substantially improve translation quality, as measured by automated metrics and human evaluation on machine translation benchmarks in a compute and sample-efficient manner.” A preprint article by Gulcehre and colleagues, available from arXiv, presents Reinforced Self-Training (ReST), an offline reinforcement learning approach for aligning large language models with human preferences (a schematic sketch of the alternating Grow and Improve steps appears after this list).
  • “The predicate networks of cleared AI/ML-based medical devices varied in complexity and between medical specialties, with more than a third cleared on the basis of non-AI/ML-based medical devices in the first generation, and approximately only a quarter originating in a de novo cleared device across all generations. Especially for devices in radiology, the AI/ML tasks changed frequently along the device’s predicate network, raising safety concerns.” A research article published in Lancet Digital Health by Muehlematter and colleagues traces the evolutionary history of AI- and ML-based medical devices cleared for use under the FDA’s 510(k) pathway.
  • “From the client perspective, the offer is hard to resist. Online labor platforms present them with a cheap, often skilled, workforce available 24/7…. But for the workers on the other side, the situation can be dire. They can spend large parts of their day searching and applying for jobs, time that is unpaid. For most platforms, there is no guarantee that the tasks offered will not fall below their minimum wage. …And if they have any problems with the client, there is not always a clear appeal process, putting them at risk of not getting paid at all.” At TechCrunch, Jonas CL Valente explores the role of “cloudworkers” whose hidden contributions are critical to supporting the current surge in AI offerings.
  • “To realize health care-specific foundation models, we are going to need a lot of data. At our academic medical center, there are records associated with more than 4 million patients. Yet even if each patient generated a book’s worth of text (a gross overestimate), this is far less data than what is currently used to train existing foundation models. Moreover, there would be entire “chapters” of health experiences missing as individuals moved across health systems.” An opinion article by Jenna Wiens, Rada Mihalcea and Brahmajee K. Nallamothu appearing in STAT News turns a critical eye on the potential role of current large language model AIs in healthcare settings and examines ways to tailor the technology to meet the demands of the environment.
  • “SynthID is created using two neural networks. One takes the original image and produces another image that looks almost identical to it, but with some pixels subtly modified. This creates an embedded pattern that is invisible to the human eye. The second neural network can spot the pattern and will tell users whether it detects a watermark, suspects the image has a watermark, or finds that it doesn’t have a watermark.” MIT Technology Review’s Melissa Heikkilä reports on the debut of Google DeepMind’s tool for “watermarking” images created by an AI (an illustrative embedder/detector sketch appears after this list).
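To make the ReST recipe quoted in the first item above a little more concrete, here is a minimal, schematic Python sketch of its alternating Grow and Improve steps. It is not the authors’ implementation: the helpers sample_from_policy, reward_model, and finetune_offline are hypothetical placeholders standing in for an LLM sampler, a learned reward model, and an offline fine-tuning routine, and simple reward-threshold filtering stands in for the offline RL objectives discussed in the preprint.

```python
def rest(policy, prompts, reward_model, sample_from_policy, finetune_offline,
         grow_steps=2, improve_steps=4):
    """Schematic ReST loop: alternate Grow (sample + score) and Improve (filter + fine-tune)."""
    for _ in range(grow_steps):
        # Grow: build an offline dataset by sampling completions from the
        # current policy and scoring each one with the reward model.
        dataset = [
            (prompt, completion, reward_model(prompt, completion))
            for prompt in prompts
            for completion in sample_from_policy(policy, prompt, n_samples=8)
        ]
        rewards = sorted(reward for _, _, reward in dataset)

        # Improve: raise the reward threshold at each step and fine-tune the
        # policy offline on the surviving samples, reusing the same dataset
        # rather than generating new ones (the source of ReST's efficiency).
        for step in range(improve_steps):
            threshold = rewards[int(len(rewards) * (0.5 + 0.1 * step))]
            filtered = [(p, c) for p, c, r in dataset if r >= threshold]
            policy = finetune_offline(policy, filtered)
    return policy
```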
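The SynthID item describes the tool’s two-network design only at a high level. Purely to illustrate that pattern, the PyTorch sketch below pairs a network that adds an imperceptible residual to an image with a second network that scores whether the pattern is present. The architecture, the strength parameter, and everything else here are assumptions for illustration, not DeepMind’s (unpublished) design, and the two networks would need to be trained jointly before the detector’s scores meant anything.

```python
import torch
import torch.nn as nn

class WatermarkEmbedder(nn.Module):
    """Predicts a small residual to add to the image (the hidden pattern)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, strength=0.01):
        # Keep the perturbation tiny so the watermarked image looks unchanged.
        return torch.clamp(image + strength * self.net(image), 0.0, 1.0)

class WatermarkDetector(nn.Module):
    """Scores whether an image carries the embedded pattern."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, image):
        return torch.sigmoid(self.net(image))  # probability of a watermark

# Usage: embed a pattern in a random test image, then score both versions.
embedder, detector = WatermarkEmbedder(), WatermarkDetector()
clean = torch.rand(1, 3, 256, 256)
marked = embedder(clean)
print(detector(marked), detector(clean))  # untrained, but shows the interface
```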

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image: superresolution micrograph of a group of killer T cells (green and red) surrounding a cancer cell (blue, center). When a killer T cell contacts a target cell, it attaches, spreads over the target, and uses chemicals housed in vesicles (red) to deliver the killing blow, an event nicknamed “the kiss of death”; the killer T cell then moves on to its next victim. Image credit: Alex Ritter, Jennifer Lippincott Schwartz & Gillian Griffiths/National Institutes of Health
  • “The inability to durably reverse CD8 T cell exhaustion remains a major barrier to treatment of cancer and chronic infection. A better understanding of the molecular cues that induce and maintain exhaustion will enable development of more effective therapeutics to prevent or reverse this state. In vivo models have engendered foundational knowledge about [T cell exhaustion] but generate low cellular yields and are time consuming, costly, and difficult to scale.” A research article published in Science Immunology by Wu and colleagues describes a method for in vitro modeling of the condition known as “T cell exhaustion” that occurs during cancer and some viral diseases.
  • “Advances in biomedical science, data science, engineering, and technology are leading to high-pace innovation with potential to transform health and medicine. These innovations simultaneously raise important ethical and social issues, including how to fairly distribute their benefits and risks.” A new report from the National Academies of Sciences, Engineering, and Medicine presents a framework for ensuring that ethical principles and equity are baked into the development and application of new technologies.
  • “In this randomized, placebo-controlled trial involving patients with heart failure with preserved ejection fraction and obesity, once weekly semaglutide at a dose of 2.4 mg led to larger reductions in heart failure–related symptoms and physical limitations (as measured with the KCCQ-CSS) and a greater degree of weight loss than placebo at 52 weeks.” A research article published in the New England Journal of Medicine by Kosiborod and colleagues presents findings from a randomized trial evaluating the effects of the weight loss drug semaglutide on symptoms of heart failure with preserved ejection fraction.
  • “When the American Academy of Pediatrics reaffirmed its support for gender-affirming care earlier this month, and called for a systematic review of the evidence, some swaths of the public saw the move as casting doubt on the benefits of such care…. But the AAP and other experts say the systematic review only indicates their confidence in the current standards of care, and their awareness of a need to stay on top of the evidence amid a changing political landscape in which anti-trans legislation — particularly targeting youth access to health care — has proliferated.” An article by STAT News’ Theresa Gaffney illuminates recent controversy surrounding the American Academy of Pediatrics’ call for a systematic review of evidence for gender-affirming care.

COMMUNICATION, HEALTH EQUITY & POLICY

Image: photograph of an open, unruled notebook with a blank page; a sharpened pencil rests on the lower part of the page. Image credit: Kelly Sikkema/Unsplash
  • “As Braitman sees it, her work is straightforward: it’s helping people who work in medicine communicate more clearly and honestly. If a play or poem — or book, as in Voskanian’s case — emerges from her classes, that’s great. ‘But my aim really is to support their vulnerability in a field in which vulnerability is often punished,’ she said. ‘Because I think it’s a great engine of empathy for themselves and for others.’ The writing is the medicine.” STAT News’ Isabella Cueto profiles Laurel Braitman, whose classes in writing and storytelling are helping physicians to cope with the grief and trauma that sometimes accompany their jobs.
  • “Even with four months to go, we predict the 2023 Phrase of the Year will be ‘artificial intelligence.’ But like a discount utensil packaged in one of those pale blue gift boxes, some marketers are using the term to conceal that what they’re peddling is nothing more than old-school deception.” In a post at the Federal Trade Commission Business Blog, Lesley Fair addresses a recent and salutary example of the rampant application of “AI” to all manner of business products – whether there’s AI in there or not.
  • “The new tool, SnorCall, essentially records all robocalls received on the monitored phone lines. It bundles together robocalls that use the same audio, reducing the number of robocalls whose content needs to be analyzed by around an order of magnitude. These recorded robocalls are then transcribed and analyzed by a machine learning framework called Snorkel that can be used to characterize each call.” A news article by Matt Shipman, available on the North Carolina State University website, describes the efforts of NCSU researchers to develop an automated system capable of capturing and classifying fraudulent come-ons from robocalls (a brief Snorkel-style labeling sketch appears after this list).
  • “For weeks, the Times and the maker of ChatGPT have been locked in tense negotiations over reaching a licensing deal in which OpenAI would pay the Times for incorporating its stories in the tech company’s AI tools, but the discussions have become so contentious that the paper is now considering legal action.” NPR’s Bobby Allyn breaks down the implications of a looming lawsuit by the New York Times over the use of its content to train OpenAI’s chatbot.
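As noted in the SnorCall item above, the NCSU system uses the Snorkel weak-supervision framework to characterize transcribed robocalls. The sketch below is a hedged illustration of that general pattern, not the researchers’ pipeline: the labeling functions, keywords, label scheme, and toy transcripts are invented for the example; only the Snorkel API calls (labeling_function, PandasLFApplier, LabelModel) come from the real library.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, BENIGN, SCAM = -1, 0, 1

@labeling_function()
def lf_gift_cards(x):
    # Requests for gift-card payment are a common scam tell.
    return SCAM if "gift card" in x.transcript.lower() else ABSTAIN

@labeling_function()
def lf_ssn_threat(x):
    # Threats about a "suspended" Social Security number suggest fraud.
    return SCAM if "social security" in x.transcript.lower() else ABSTAIN

@labeling_function()
def lf_appointment_reminder(x):
    return BENIGN if "appointment" in x.transcript.lower() else ABSTAIN

# Toy transcripts standing in for the transcribed, deduplicated robocalls.
df = pd.DataFrame({"transcript": [
    "Your social security number has been suspended, call back immediately.",
    "This is a reminder about your appointment tomorrow at 10 am.",
]})

# Apply the labeling functions, then let the label model combine their
# noisy, conflicting votes into one predicted class per transcript.
applier = PandasLFApplier(lfs=[lf_gift_cards, lf_ssn_threat, lf_appointment_reminder])
L = applier.apply(df=df)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L, n_epochs=100, seed=0)
print(label_model.predict(L))
```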