AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

May 19, 2023

In this week’s Duke AI Health Friday Roundup: neural nets help robots find their place in the world; two studies underscore the toll of inequity on health, wealth; Google presents results from Med-PaLM 2; study puts price tag on manuscript formatting; GPT-4 task: explain GPT-2; CRISPR screening helps identify potential antidote for deadly mushrooms; how small can a language model go (and still make sense?); ChatGPT goes to college; much more:

AI, STATISTICS & DATA SCIENCE

A small yellow toy robot with wheels, seen in profile against a dark background. Image credit: Jochen van Wylick/Unsplash
  • “Here, we report a brain-inspired general place recognition system, dubbed NeuroGPR, that enables robots to recognize places by mimicking the neural mechanism of multimodal sensing, encoding, and computing through a continuum of space and time.” A research article published in Science Robotics by Yu and colleagues reports findings from an attempt to use a neural net AI to mimic the ability of humans (or other animals) to know where they are in space – a vexing problem in robotics.
  • “We believe our methods could begin contributing to understanding the high-level picture of what is going on inside transformer language models. User interfaces with access to databases of explanations could enable a more macro-focused approach that could help researchers visualize thousands or millions of neurons to see high-level patterns across them.” A paper by Bills and colleagues at OpenAI presents the results of an experiment in which the large language model AI GPT-4 was used in an attempt to analyze an earlier, smaller transformer (GPT-2) and understand what was going on under the hood when the smaller model was prompted to produce text outputs (H/T @ChristophMolnar); a rough sketch of the general approach appears after this list.
  • “We show that Med-PaLM 2 exhibits strong performance in both multiple-choice and long-form medical question answering, including popular benchmarks and challenging new adversarial datasets. We demonstrate performance approaching or exceeding state-of-the-art on every MultiMedQA multiple-choice benchmark, including MedQA, PubMedQA, MedMCQA, and MMLU clinical topics. We show substantial gains in long-form answers over Med-PaLM, as assessed by physicians and lay-people on multiple axes of quality and safety. Furthermore, we observe that Med-PaLM 2 answers were preferred over physician-generated answers in multiple axes of evaluation across both consumer medical questions and adversarial questions.” A new paper by Singhal and colleagues, available as a preprint from arXiv, debuts Google’s Med-PaLM 2, an improved version of the earlier Med-PaLM large language model AI specifically trained to answer medical questions.
  • “In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-olds usually understand, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities.” A preprint by Ronen Eldan and Yuanzhi Li, available from arXiv, explores the lower end of size limits for language models.
  • “Already, Twitter threads and viral YouTube videos promise that AI-assisted search can speed up systematic reviews or facilitate brainstorming and knowledge summarization. If researchers are not aware of the limitations and biases of such systems, then research outcomes will deteriorate.” A viewpoint article by Michael Gusenbauer, published in Nature, advocates a high-priority push to audit powerful and popular large language model AIs before their uptake and use have undesirable side effects for research.
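For readers curious about the mechanics behind the Bills and colleagues experiment, the sketch below illustrates the general pattern of using a larger language model to explain a single neuron in a smaller one: collect the neuron’s activations, ask the larger model to describe what the top-activating text has in common, then score the explanation by simulating activations from it. This is only a minimal illustration of the idea; the function names (query_llm, get_activations) are hypothetical placeholders rather than any real API, and the details differ from the actual OpenAI pipeline.

```python
# Minimal sketch of the "explain a neuron with a bigger model" idea described
# by Bills et al. The callables passed in (get_activations, query_llm) are
# hypothetical placeholders, not real OpenAI API calls.
from typing import Callable, List, Tuple
import numpy as np

def explain_neuron(
    snippets: List[str],
    get_activations: Callable[[str], List[float]],  # per-token activations of ONE GPT-2 neuron
    query_llm: Callable[[str], str],                # wrapper around a larger model such as GPT-4
) -> Tuple[str, float]:
    """Propose a natural-language explanation for a neuron, then score it."""
    # Step 1: find the snippets where the neuron fires most strongly.
    ranked = sorted(snippets, key=lambda s: max(get_activations(s)), reverse=True)
    top_examples = ranked[:5]

    # Step 2: ask the larger model to summarize what the top examples share.
    prompt = (
        "These text excerpts most strongly activate a single neuron in a "
        "language model. In one sentence, what pattern does the neuron detect?\n\n"
        + "\n---\n".join(top_examples)
    )
    explanation = query_llm(prompt)

    # Step 3: score the explanation by asking the model to *simulate* the
    # neuron's activations from the explanation alone, then correlating the
    # simulated values with the real ones.
    real, simulated = [], []
    for s in snippets[:20]:
        real.extend(get_activations(s))
        sim_prompt = (
            f"A neuron is described as: '{explanation}'. For each token in the "
            f"following text, guess its activation from 0 to 10:\n{s}"
        )
        # Assumes the reply is a whitespace-separated list of numbers (sketch only).
        simulated.extend(float(x) for x in query_llm(sim_prompt).split())

    n = min(len(real), len(simulated))
    score = float(np.corrcoef(real[:n], simulated[:n])[0, 1])
    return explanation, score
```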

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Underwater photograph of scalloped hammerhead shark, taken from below, showing the shark’s ventral side. Image credit: Kris Mikael Krister/Wikipedia (CC-BY 3.0)
  • “Some deep-sea fish, such as tuna and lamnid sharks, a family of large and speedy sharks, are partially warm blooded; they can divert body heat to specific organs even in icy temps. But the scalloped hammerhead—one of the larger hammerhead species, named for its rippled crown—has no such plumbing. Every time it dives deep, its environment chills more than 20°C, like a human jumping into a glacial stream on a summer day.” At Science, Kate Hull reports on recent research by Royer and colleagues that suggests scalloped hammerhead sharks have adopted a sort of “breath-holding” during deep dives to conserve heat, allowing them to hunt more efficiently.
  • “By combining a genome-wide CRISPR screen with an in silico drug screening and in vivo functional validation, we discover that N-glycan biosynthesis pathway and its key component, STT3B, play a crucial role in α-amanitin toxicity and that ICG is a STT3B inhibitor. Furthermore, we demonstrate that ICG is effective in blocking the toxic effect of α-amanitin in cells, liver organoids, and male mice, resulting in an overall increase in animal survival.” Good news for mushroom hunters: in an article published in Nature Communications by Wang and colleagues, researchers describe the identification of a compound that may counteract the otherwise deadly toxin found in so-called “Death Cap” mushrooms.
  • “We identified distinct symptom profiles (or clusters), suggesting the existence of heterogeneous profiles of post-COVID-19 condition caused by these different SARS-CoV-2 strains. However, across variants, three groups of symptoms clustered consistently and were reproduced in a test dataset with additional outcome data: a primary cluster dominated by central neurological symptoms, a second cluster dominated by cardiorespiratory symptoms, and a third more heterogeneous cluster showing systemic inflammatory symptoms.” A research article published in Lancet Digital Health by Canas and colleagues presents findings from a longitudinal study of post-COVID symptoms (“long COVID”) across vaccinated and unvaccinated populations infected with different variants of the SARS-CoV-2 virus.
  • “The reasons for the excess deaths and resulting economic toll are many, including mass incarceration, but the root is the same, according to the reports published Tuesday in the influential medical journal JAMA: the unequal nature of how American society is structured…That includes access to quality schools, jobs with a living wage, housing in safe neighborhoods, health insurance and medical care — all of which affect health and well-being.” The Washington Post’s Akilah Johnson reports on two research articles published in JAMA this week – one by LaVeist and colleagues, the other by Caraballo and colleagues – that plumb the disproportionate health and economic burdens imposed by racial and ethnic inequities in the United States.

COMMUNICATION, HEALTH EQUITY & POLICY

Rows of wooden-backed chairs in what appears to be an empty school auditorium. Image credit: Nathan Dumlao/Unsplash
  • “This is college life at the close of ChatGPT’s first academic year: a moil of incrimination and confusion. In the past few weeks, I’ve talked with dozens of educators and students who are now confronting, for the very first time, a spate of AI ‘cheating.’ Their stories left me reeling. Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.” An essay by Ian Bogost at The Atlantic presents a somewhat grim view of the impact of facile chatbot AIs on teaching in the university setting.
  • “The present study contributes new perspectives on the legal status of ARDs and the thorny issue of proxy leeway. In general, our participants put priority on respecting their ARD during potential future periods of incapacity, and nearly half were comfortable with a legally binding status. Giving weight to an ARD acknowledged the effort people put into thinking about and documenting their wishes in anticipation of a time when they would not be able to make such decisions.” A research article by Bries and Johnston, published in the journal Ethics and Human Research, investigates the role of advance research directives (ARDs) in enabling research participation by older adults with dementia.
  • “Among the analyzed journals, we found a huge diversity in submission requirements. By calculating average researcher salaries in the European Union and the USA, and the time spent on reformatting articles, we estimated that ~ 230 million USD were lost in 2021 alone due to reformatting articles. Should the current practice remain unchanged within this decade, we estimate ~ 2.5 billion USD could be lost between 2022 and 2030—solely due to reformatting articles after a first editorial desk rejection.” An analysis published in BMC Medicine by Clotworthy and colleagues weighs the costs of journal formatting requirements and finds them formidable (and unnecessary); a back-of-envelope version of the arithmetic appears after this list.
  • “While generative AI such as ChatGPT gets all the buzz, AI in other, typically more targeted forms, has been used for years to solve business and research challenges. The utility of AI and the value to businesses is directly related to both the quality of inputs and information about what inputs are used. Even when an AI service is trained on high quality content, without proper documentation and audit trail, users cannot be sure what they are using and may stay away.” At Scholarly Kitchen, Roy Kaufman offers an opinion on why data provenance will prove essential to being able to trust AI-based tools.
  • “The current and former workers, all employed by third party outsourcing companies, have provided content moderation services for AI tools used by Meta, Bytedance, and OpenAI—the respective owners of Facebook, TikTok and the breakout AI chatbot ChatGPT. Despite the mental toll of the work, which has left many content moderators suffering from PTSD, their jobs are some of the lowest-paid in the global tech industry, with some workers earning as little as $1.50 per hour.” Time Magazine’s Billy Perrigo reports on nascent efforts by low-paid African AI workers to unionize.
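To show how an estimate like the Clotworthy and colleagues figure is assembled (researcher time, multiplied by salary, multiplied by the number of reformatted manuscripts), the snippet below uses illustrative assumptions chosen to land near the totals quoted above; the hourly rate, hours per reformat, and article count are not the study’s actual inputs.

```python
# Back-of-envelope version of the reformatting-cost estimate. The three input
# values are illustrative assumptions, not the study's inputs; only the
# ~$230 million and ~$2.5 billion figures come from the article summarized above.
hourly_salary_usd = 50                   # assumed blended EU/US researcher rate
hours_per_reformat = 10                  # assumed time to reformat after a desk rejection
reformatted_articles_per_year = 460_000  # assumed volume of resubmitted manuscripts

annual_cost = hourly_salary_usd * hours_per_reformat * reformatted_articles_per_year
print(f"Estimated annual cost: ${annual_cost / 1e6:.0f} million")              # ~$230 million
print(f"Flat extrapolation, 2022-2030: ${annual_cost * 9 / 1e9:.1f} billion")  # ~$2.1 billion
# The study's cumulative ~$2.5 billion estimate exceeds this flat nine-year
# extrapolation, presumably reflecting expected growth in submission volumes.
```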