AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

August 25, 2023

In this week’s Duke AI Health Friday Roundup: how bias emerges in healthcare algorithms; COVID vaccination and reduced maternal-fetal risk; research institutions need to beware of predatory publishers; AI enables speech and expression by avatar for paralyzed woman; the protein “unknome” gets a closer look; figuring out what open AI really means; a testing schema for AI consciousness; sharing code helps and encourages citations, but most authors still don’t share; much more:

AI, STATISTICS & DATA SCIENCE

Image: a person illustrated in a warm, cartoon-like style in green looks up at a bright orange hazard symbol whose exclamation mark is formed from binary 1s and 0s; a small node-and-edge character stands at the right. Image credit: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / CC-BY 4.0
  • “…the researchers focused on ‘subpopulation shifts’ — differences in the way machine learning models perform for one subgroup as compared to another. ‘We want the models to be fair and work equally well for all groups, but instead we consistently observe the presence of shifts among different groups that can lead to inferior medical diagnosis and treatment,’ says Yang…The main point of their inquiry is to determine the kinds of subpopulation shifts that can occur and to uncover the mechanisms behind them so that, ultimately, more equitable models can be developed.” An article at MIT News by Steve Nadis explains some of the far-reaching findings of a paper, recently presented at the 40th International Conference on Machine Learning, that delves into ways that bias can emerge in machine-learning applications in healthcare (H/T Matthew Elmore).
  • “Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness….Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.” A research article by Butlin and colleagues, available as a preprint from arXiv, tackles the question of how to rigorously assess whether a given AI system has achieved actual consciousness.
  • “This paper conducts fairness testing on automated pedestrian detection, a crucial but under-explored issue in autonomous driving systems. We evaluate eight widely-studied pedestrian detectors across demographic groups on large-scale real-world datasets….Our findings reveal significant fairness issues related to age and skin tone. The detection accuracy for adults is 19.67% higher compared to children, and there is a 7.52% accuracy disparity between light-skin and dark-skin individuals.” A research paper by Li and colleagues, available as a preprint from arXiv, reports findings that suggest that some autonomous driving systems may pose a greater threat to children and to persons with darker skin tone.
  • “Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.” A research article published in Nature by Metzger and colleagues describes the successful application of AI in restoring the ability of a severely paralyzed person to communicate. An article by the New York Times’ Pam Belluck helps break down some of the paper’s dense technical information.
  • While AI, including generative AI, continues to enjoy substantial enthusiasm and commercial interest, a recent spate of articles reveals some emerging skepticism now that these models are encountering the real world. First comes speculation from Ted Gioia and Gary Marcus that the economic argument for (at least some applications of) LLMs may warrant a closer look. Next comes reporting from the Atlantic on the use of copyrighted books to train multiple LLMs (and the potential legal complications), as well as concern (via a Fast Company article) about Google’s deployment of generative-AI-enabled browsing, and a lapse by either an AI author or its ostensibly human quality control that led a web article to direct tourists seeking destination dining in Ottawa to a food bank.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image: photograph showing the midsection of a pregnant woman, her hands resting on her belly with fingers forming a heart shape. Image credit: Alicia Petresc/Unsplash
  • “Vaccinated people had lower COVID-19 rates than unvaccinated people, with a booster further reducing infection rates. Vaccinated people had decreased rates of preterm birth, stillbirth, low birthweight, and very low birthweight with similar small-for-gestational-age rates compared with unvaccinated people. Furthermore, boosted people had decreased rates of COVID-19 and stillbirth as well as similar rates of preterm birth, small-for-gestational-age, low birthweight, and very low birthweight compared with people who were vaccinated unboosted. Altogether, these findings indicate that COVID-19 mRNA vaccination reduces the risk of adverse maternal–fetal outcomes with booster doses conveying additional protection.” Results from a retrospective analysis, published this week in Lancet Digital Health by Piekos and colleagues, shed light on associations between maternal-fetal outcomes and vaccination and boosting for COVID-19 in pregnant persons.
  • “Munro and colleagues used the database to study 260 genes that are shared between fruit flies and humans and that have low knownness scores. After dialing down the activity of each of the protein-coding genes in the flies, the researchers found that about 60 were essential for life. Others were important for reproduction, growth, movement and resilience against stress.” An article by Science News’ Skyler Ware ventures into the “unknome” – a compendium of poorly understood proteins in the human biological system.
  • “With billions of people already using traditional medicines, the organization needs to explore how to integrate them into conventional health care and collaborate scientifically to understand their use more thoroughly, says Shyama Kuruvilla, WHO lead for the Global Centre for Traditional Medicine and the summit, who is based in Geneva, Switzerland. Many researchers who study traditional medicines agree — but some are not sure whether the summit will deliver.” Nature’s Gayathri Vaidyanathan reports on the World Health Organization’s first-ever summit on traditional medicine – and the professional controversy that has ensued.
  • “Among nonhospitalized individuals, although the risks of most sequelae became nonstatistically significant at 2 years, substantial risk remains, impacting several major organ systems. The risk horizon for those hospitalized during the acute phase is even longer with persistently increased risk of most sequelae at 2 years.” A paper recently published in Nature Medicine by Bowe and colleagues examines data from VA healthcare databases for their analysis of the lingering effects of COVID infection out to the 2-year mark.

COMMUNICATION, HEALTH EQUITY & POLICY

Image: photograph of a shark in profile, dappled by shadows, taken at an aquarium. Image credit: Laura College/Unsplash.
  • “I was shocked by how adrift many authors seemed when faced with the workings of scholarly publishing. Many were not aware that they had fallen prey to a predatory publisher. Several researchers mistook me for a journal and replied to my survey e-mail by attaching articles, with comments thanking me in advance for their ‘next quick publication’ or asking me ‘how much it will cost them in dollars’….Research institutions, too, are falling down on the job of providing basic education in scholarly publishing norms, especially to scientists in LMICs [lower/middle-income countries].” A Nature essay by Chérifa Boukacem-Zeghmouri visits a perennial but rapidly worsening problem for academia: the onslaught of predatory publishing and conferences, many of them aimed at early-career researchers.
  • “When science curricula underrepresent or do not include such social and institutional dimensions of science, which play a key role in the validation and communication of scientific processes, it is as if a fundamental element of science has been dismantled, projecting an image of science that is idealized, reconstructed, and distorted.” An essay by Sibel Erduran, published in Science, argues that cultivating a broad, informed understanding of science and how it works requires attention to its social, cultural, and institutional elements.
  • “Taken together, we find that ‘open’ AI can, in its more maximal instantiations, provide transparency, reusability, and extensibility that can enable third parties to deploy and build on top of powerful off-the-shelf AI models. These maximalist forms of ‘open’ AI can also allow some forms of auditing and oversight. But even the most open of ‘open’ AI systems do not, on their own, ensure democratic access to or meaningful competition in AI, nor does openness alone solve the problem of oversight and scrutiny.” In a paper available from SSRN, Widder, West, and Whittaker weigh the shifting signifier of “open AI” and its implications for understanding the nature of emerging AI systems.
  • “We find that scientists are overwhelmingly (95%) failing to publish their code and that there has been no significant improvement over time, but we also find evidence that code sharing can considerably improve citations, particularly when combined with open access publication.” A preprint paper by Maitner and colleagues, available from Research Square, explores code-sharing practices in the recent biology literature.