AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

September 23, 2022

In today’s Duke AI Health Friday Roundup: probing reading comprehension for machines; CAR-T for lupus; reflections on loss of public trust in science (and how to fix it); reverberations of racism in digital image collections; FDA eyes pulse oximeter performance with darker skin; USPSTF recommends widespread anxiety screening; wearable sensors for measuring tumor regression; much more:

AI, STATISTICS & DATA SCIENCE

A random jumble of old-fashioned moveable type letters and words. Image credit: Amador Loureiro/Unsplash
  • “Two of the most fundamental challenges in Natural Language Understanding (NLU) at present are: (a) how to establish whether deep learning-based models score highly on NLU benchmarks for the ‘right’ reasons; and (b) to understand what those reasons would even be. We investigate the behavior of reading comprehension models with respect to two linguistic ‘skills’: coreference resolution and comparison.” A preprint available from arXiv by Choudhury and colleagues interrogates what it means to question a machine learning model’s “comprehension” of natural language.
  • “The feature will let users edit images in a number of different ways. They can upload a photograph of someone and generate variations of the picture, for example, or they can edit specific features, like changing someone’s clothing or hairstyle. The feature will no doubt be useful to many users in creative industries, from photographers to filmmakers.” At The Verge, James Vincent reports that OpenAI has lifted its earlier prohibition on using the popular AI art-generating program DALL-E to edit images of (presumably) real human faces.
  • “Films produced in the mid-twentieth century, such as Jean Renoir’s The Golden Coach from 1952, show how the history of color calibration in photography—with its chemistry of inherent racial bias—was carried over into analog images. Renoir used a color film process that was standard at the time but rendered invisible the darker facial features of Black people.” A field review by Nettrice R. Gaskins at the Social Science Research Council’s Just Tech traces the lineage of racism in photography and film through to modern-day image classifiers and facial recognition technology.
  • “Most efforts are focused on tools that can help to make original proteins, shaped unlike anything in nature, without much focus on what these molecules can do. But researchers — and a growing number of companies that are applying AI to protein design — would like to design proteins that can do useful things, from cleaning up toxic waste to treating diseases. Among the companies that are working towards this goal are DeepMind in London and Meta (formerly Facebook) in Menlo Park, California.” Nature’s Ewen Callaway describes new AI tools that are able to blueprint radically novel proteins.
  • “In virtue of the thesis of extended intelligence, we contend that intelligence is context-bound, task-particular and incommensurable among agents. Our thesis carries strong implications for how intelligence is analyzed in the context of both psychology and artificial intelligence.” A preprint by Barack and colleagues available from arXiv argues that the “intelligence” of an artificial agent is inseparable from the context in which it operates.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Close-up photograph of a person using a wireless pulse oximeter placed over their index finger. Image credit: Towfiqu barbhuiya/Unsplash
  • “The committee will discuss ongoing concerns that pulse oximeters may be less accurate in individuals with darker skin pigmentations. The committee will also discuss factors that may affect pulse oximeter accuracy and performance, the available evidence about the accuracy of pulse oximeters, recommendations for patients and health care providers, and the amount, and type of data that should be provided by manufacturers to assess pulse oximeter accuracy and to guide other regulatory actions as needed.” In response to concerns about the accuracy of pulse oximeters when used on patients with darker skin, the FDA has announced that a committee will be convening in early November to examine data on the performance of the ubiquitous medical technology.
  • “One lupus patient, who got CAR-T about 18 months ago and hasn’t received any therapies since, has remained free of disease symptoms, Schett said. A second patient has been in drug-free remission for a year. It’s too early to say the patients are cured, but results are encouraging, even to Schett.” STAT News’ Isabella Cueto reports on findings from a study, published in Nature Medicine, of CAR-T cancer therapy for the autoimmune disorder lupus erythematosus.
  • “…we present a commercially scalable wearable electronic strain sensor that automates the in vivo testing of cancer therapeutics by continuously monitoring the micrometer-scale progression or regression of subcutaneously implanted tumors at the minute time scale.” In a mouse-model study published in Science Advances, Abramson and colleagues evaluated the use of wearable sensors to monitor tumor growth or regression in real time.
  • “Many are troubled by the possibilities that PGT-P presents: bioethicists have long been wary of attempting to select disease and disability out of the human gene pool, and the high cost of testing could further entrench health inequities….Researchers are also concerned that, in most cases, the genomic models behind these tests are too weak to predict disease risk in a meaningful way for a developing embryo.” In a news feature for Nature, Max Kozlov examines the growing ethical and practical concerns about the use of polygenic risk scores to help prospective parents select embryos created by in-vitro fertilization.

COMMUNICATION, HEALTH EQUITY & POLICY

Photograph of a brick building with fire escapes on the side. On the facing side of the building is a painted sign that reads “How are you, really?” Image credit: Finn/Unsplash
  • “The recommendations are based on a review that began before the COVID-19 pandemic, evaluating studies showing potential benefits and risks from screening. Given reports of a surge in mental health problems linked with pandemic isolation and stress, the guidance is ‘very timely,’ said Lori Pbert, a task force member and co-author.” The Associated Press’ Lindsey Tanner reports on new draft recommendations for widespread adult anxiety screening from the US Preventive Services Task Force.
  • “…the authors showed that the predictability of belief change improves when one takes into account the network structure of beliefs, which can help to elucidate when certain beliefs are more likely to change than others. As science denialism is becoming increasingly dangerous to our society, the proposed model could be used to assist with designing effective educational interventions for combating harmful disbeliefs.” At Nature Computational Science, Fernando Chirigati highlights recent work by Dalege and van der Does, published in Science Advances, that may shed new light on how personal beliefs and social networks interact to reinforce science denialism.
  • “The Kaiser Family Foundation estimates well over 300,000 Americans are in graveyards today because of the misinformation, the doubt, the suspicion, the distrust that caused them to say, that vaccine is not safe for me. How did that happen? I never saw that coming. And the consequences of that are all around us now. And it continues. We’re still losing 400 people a day, many of them still unvaccinated.” STAT News’ Elizabeth Cooney summarizes a wide-ranging conversation with former NIH Director Francis Collins that includes reflections on the recent crisis in public trust in science and public health.
  • “We found that manuscripts with both initial low or high levels of statistical content increased their statistical content during peer review….We found that when reports were more concentrated on statistical content, there was a higher probability that these manuscripts were eventually rejected by editors.” A recent paper by Garcia-Costa and colleagues, published in Royal Society Open Science, evaluates the effects of peer review on the statistical rigor of scientific manuscripts at a selection of journals.