AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

November 17, 2023

In this week’s Duke AI Health Friday Roundup: testing GPT-4’s diagnostic chops; yeast with 50% synthetic genome survives, replicates; roles for AI in clinical trials; role of pets in zoonotic spillover; vaccine status, bias, and perceptions of risk; potential for bias in radiological deep learning models; what rats remember; developing standards for health-related chatbots; how publishing professionals perceive recent changes in social media; much more:

AI, STATISTICS & DATA SCIENCE

Image of a computer screen showing a ChatGPT interface with example prompts, capabilities, and limitations arrayed in a series of graphic tiles. Image credit: Levart_Photographer/Unsplash
  • “In this pilot assessment, we compared the diagnostic accuracy of GPT-4 in complex challenge cases to that of journal readers who answered the same questions on the Internet. GPT-4 performed surprisingly well in solving the complex case challenges and even better than the medical-journal readers. However, performance did appear to change between different versions of GPT-4… Although it demonstrated promising results in our study, GPT-4 missed almost every second diagnosis.” A study by Eriksen and colleagues, published in NEJM AI, reports findings from a comparison of the diagnostic acuity of GPT-4 with that of a group of medical-journal readers.
  • “…our study demonstrates that biases in the chest radiography foundation model related to race and biologic sex led to substantial performance disparities across protected subgroups. To minimize the risk of bias associated with use of foundation models in critical applications such as clinical decision-making, we argue that these models need to be fully accessible and transparent.” An article published this past September in Radiology: Artificial Intelligence by Glocker and colleagues examines the potential for bias in foundation models applied to chest radiography.
  • “AI offers enormous potential, yet rigorous validation and regulatory oversight are essential to ensure that deployment is safe, effective, and ethical in the clinical trials ecosystem. Not only do model outputs need to provide accurate assessment of health state used for evaluating treatment benefit and risk, but the framework must also address risks related to data privacy, security, and bias.” An editorial published this week in JAMA by Hernandez and Lindsell examines the potential roles AI could play in the clinical trials enterprise.
  • “To develop a reporting guideline for chatbot assessment studies, we have gathered a diverse group of international stakeholders including statisticians, research methodologists, reporting guideline developers, natural language processing researchers, journal editors, chatbot researchers and patient partners. The development of the chatbot assessment reporting tool (CHART) is registered with the EQUATOR (enhancing the quality and transparency of health research) international network. This guideline will generate a reporting checklist and flow diagram by adhering to robust methodology, as well as the evidence-based EQUATOR toolkit on developing reporting guidelines.” A brief report published in Nature Medicine by Huo and colleagues describes the creation of a partnership for developing standards for reporting on evaluations of AI chatbots designed to provide health-related advice.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

A grey and white rat pokes its nose out from between the flaps of a cardboard box. Image credit: slyfox photography/Unsplash.
  • “Rats also encode spatial information in the hippocampus. But it’s been impossible to establish whether they have a similar capacity for voluntary mental navigation because of the practical challenges of getting a rodent to think about a particular place on cue, says study author Chongxi Lai…. In their new study, Lai, along with Janelia neuroscientist Albert Lee and colleagues, found a way around this problem by developing a brain-machine interface that rewarded rats for navigating their surroundings using only their thoughts.” Science’s Catherine Offord reports on recent research that suggests rats are able to use their imaginations to navigate through environments they’ve previously visited.
  • “Although the process of making the cells was time-consuming, what really slowed things down is debugging, Boeke says. Researchers first had to test whether each yeast cell with a new synthetic chromosome in it was viable — meaning it could survive and function normally — then fix any problems by tweaking the genetic code. When two or more synthetic chromosomes are inside the same cell, this can lead to new bugs that must be fixed, so the debugging problem becomes more complex as the process proceeds.” Nature’s Katherine Bourzac reports on a genetic engineering milestone: the successful survival and reproduction of a strain of yeast with a 50% synthetic genome.
  • “This analysis finds that COVID-19 and the drug-overdose epidemic were major contributors to the widening gender gap in life expectancy in recent years. Men experienced higher COVID-19 death rates for likely multifactorial reasons, including higher burden of comorbidities and differences in health behaviors and socioeconomic factors, such as labor force participation, incarceration, and homelessness. Differentially worsening mortality from diabetes, heart disease, homicide, and suicide suggest that chronic metabolic disease and mental illness may also contribute.” A research letter published this week in JAMA Internal Medicine by Yan and colleagues has more bad news about US life expectancy, as an existing gender-based gap continues to widen.
  • “Because of their close proximity with people, companion animals and peri-domestic wildlife occupy a key position in the epidemiological networks of many zoonotic pathogens. Surveillance of those zoonotic pathogens should include companion animals and peri-domestic wildlife, and research should combine ecological approaches with molecular approaches to understand their roles as epidemiological reservoirs (e.g., dogs for rabies viruses) or bridges (e.g., horses for bat-borne paramyxoviruses).” A review article published in Science Translational Medicine by Gamble and colleagues examines the potential for pets and other domestic or peri-domestic animals to serve as reservoirs or bridges for zoonotic infections.

COMMUNICATION, HEALTH EQUITY & POLICY

Photograph of a neon sign in the shape of a social media “like” icon (speech-bubble enclosing a heart shape and the numeral zero). Image credit: Prateek Katyal/Unsplash
  • “One year into Elon Musk’s acquisition of X (formerly known as Twitter), engagement metrics are down across the board, with app downloads down 38%, web traffic down 7% globally and down 11.6% in the U.S. ‘What is UP with Twitter?’ ‘What about Threads?’ ‘Is Academic Twitter disappearing?’ ‘Could BlueSky be the replacement?’ Are your metrics changing? These are questions we continue to hear in personal conversations, organizational meetings, industry articles, and in gatherings of the Society for Scholarly Publishing (SSP) MarComm committee.” An article at Scholarly Kitchen plumbs the publishing community’s response to recent tumult in the social media landscape.
  • “The authors’ analysis revealed that unvaccinated individuals who identified strongly with their unvaccinated status were more likely to remember their earlier estimation of the risk as lower than it actually was. Conversely, and more markedly, those who had been vaccinated overestimated their earlier perception of their risk of catching the disease.” An editorial published this week in Nature describes a recent investigation finding an association between COVID vaccination status and bias in how people recall their earlier perceptions of risk from the disease.
  • “Drawing from political science orthodoxy, deliberative democracy theory, and the concept of infrastructure, Wong shows how Big Tech platforms affect our rights, interests, and values in multiple and often hidden ways. With this framing, she convincingly argues that we should hold these corporations accountable to the ethos and full spectrum of human rights, not just to singular issues such as freedom of expression or privacy, and advocates for more transparency, participation, and oversight in their decision-making processes.” A book review published in Science by Duke’s Nita Farahany examines Wendy Wong’s We, the Data: Human Rights in a Digital Age.
  • “We find that freelancers in highly affected occupations suffer from the introduction of generative AI, experiencing reductions in both employment and earnings. We find similar effects studying the release of other image-based, generative AI models. Exploring the heterogeneity by freelancers’ employment history, we do not find evidence that high-quality service, measured by their past performance and employment, moderates the adverse effects on employment. In fact, we find suggestive evidence that top freelancers are disproportionately affected by AI.” A paper by Hui and colleagues, available as a preprint from SSRN, examines associations between the emergence of publicly available generative AI tools and employment prospects for freelance writers and editors.