AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

April 26, 2024

In this week’s Duke AI Health Friday Roundup: WHO debuts healthcare chatbot; eyeing US preparations to counter avian flu; video games for the phylogenetic win; the importance of evidence-based approaches to smoking cessation; AI-assisted email associated with some benefits for docs, but saving time is not one of them; AI sets its sights on modern battlefields (and haunts some old ones); surprising results from studies of medical debt relief; much more:


Pixelated text declaring “GAME OVER,” red on a black background, from the final screen of an arcade video game. Image credit: Rivage/Unsplash
  • “Since its initial release on 7 April 2020, over 4 million players have solved more than 135 million science puzzles, a task unsolvable by a single individual. Leveraging these results, we show that our multiple sequence alignment simultaneously improves microbial phylogeny estimations and UniFrac effect sizes compared to state-of-the-art computational methods. This achievement demonstrates that hyper-gamified scientific tasks attract massive crowds of contributors and offers invaluable resources to the scientific community.” Chalk one up for the (human) gamers: A research article published in Nature Biotechnology by Sarrazin-Gendron and colleagues describes the creation of a “mini-game” that allowed casual users to contribute to the mapping of microbial phylogenies, outperforming cutting-edge computational methods in the process.
  • “SARAH was trained on OpenAI’s ChatGPT 3.5, which used data through September 2021, so the bot doesn’t have up-to-date information on medical advisories or news events. When asked whether the US Food & Drug Administration has approved the Alzheimer’s drug Lecanemab, for example, SARAH said the drug is still in clinical trials when in fact it was approved for early disease treatment in January 2023.” Bloomberg’s Jessica Nix reports on SARAH, the World Health Organization’s medical chatbot, whose debut has raised concerns about accuracy and data privacy, despite a number of built-in safeguards.
  • “Improvements in task load and emotional exhaustion scores suggest that generated draft replies have the potential to impact cognitive burden and burnout. Similarly, users expressed high expectations about utility, quality, and time that were either met or exceeded at the end of the pilot. Given the evidence that burnout is associated with turnover, reductions in clinical activity, and quality, even a modest improvement may have a substantial impact… Despite improvements in burden and burnout, no changes in overall reply time, read time, or write time were found when comparing prepilot and pilot periods.” A research article published in JAMA Network Open by Garcia and colleagues presents some intriguing findings from a survey study of physicians who used generative AI to help draft responses to patient inbox messages.
  • “Personalization is a new frontier in LLM development, whereby models are tailored to individuals. In principle, this could minimize cultural hegemony, enhance usefulness and broaden access. However, unbounded personalization poses risks such as large-scale profiling, privacy infringement, bias reinforcement and exploitation of the vulnerable. Defining the bounds of responsible and socially acceptable personalization is a non-trivial task beset with normative challenges.” A perspective article by Kirk and colleagues published in Nature Machine Intelligence explores the implications of personalizing large language models for individuals.
  • “We found high levels of performance across all models using conventional metrics for tissue and subtyping search. Upon testing the models on real patient cases, we found that the results were still less than ideal for clinical use. On the basis of our findings, we propose a minimal set of requirements to further advance the development of accurate and reliable histopathology image search engines for successful clinical adoption.” An article published in NEJM AI by Shang and colleagues takes stock of the performance of AI utilities for indexing and searching histopathology slides.


Artist’s rendering of the Voyager 1 spacecraft, showing large central communication dish, camera boom, and antennae. Image credit: NASA
  • “The team discovered that a single chip responsible for storing a portion of the FDS memory — including some of the FDS computer’s software code — isn’t working. The loss of that code rendered the science and engineering data unusable. Unable to repair the chip, the team decided to place the affected code elsewhere in the FDS memory. But no single location is large enough to hold the section of code in its entirety….So they devised a plan to divide the affected code into sections and store those sections in different places in the FDS.” Whew! Not only has NASA been able to restore communication with the nearly 50-year-old Voyager 1 space probe, currently nearly a full light-day distant from Earth in interstellar space, but it also accomplished a nifty feat of programming to overcome the ailing hardware in doing so.
  • “Somehow this dodgy Soviet-era science became the backbone for the microwave weapon theory. Its flimsy rationale defies both common sense and critical scrutiny. And yet, reputable journalism outlets contort themselves to make the argument.” In a pointed opinion article for Scientific American, Keith Kloor examines a legacy of junk science and excessive credulity regarding theoretical weapons such as those suggested to potentially be responsible for “Havana Syndrome.”
  • “The good news: The world makes a lot of flu vaccine and has been doing it for decades. Regulatory agencies have well-oiled systems to allow manufacturers to update the viruses the vaccines target without having to seek new licenses….The bad news: The current global production capacity isn’t close to adequate to vaccinate a large portion of the world’s population in the first year of a pandemic.” As concerns grow about bird flu making the jump from its usual avian hosts to herds of cattle in the US, STAT News’ Helen Branswell investigates our readiness to counter a pandemic of H5N1, should one emerge.
  • “…opportunities exist to educate adult smokers and healthcare providers about the relative risks of tobacco products….it is important for such messaging to be evidence based, including assessment of the messaging with both intended populations and unintended populations. Clinicians should provide evidence-based assistance to patients for smoking cessation, including behavioral counseling and FDA-approved medications as a first-line treatment. Healthcare providers should know that e-cigarettes generally have lower health risks than combustible cigarettes.” An article published in Nature Medicine makes the case for an evidence-based approach to counseling smokers about risks related to the use of tobacco products, including e-cigarettes or vapes.

Communication, Health Equity & Policy

Picture of a ceramic piggy bank, positioned face-on to the camera, against a white background. Image credit: Fabian Bank/Unsplash
  • “…we find no impact of debt relief on credit access, utilization, and financial distress on average. Second, we estimate that debt relief causes a moderate but statistically significant reduction in payment of existing medical bills. Third, we find no effect of medical debt relief on mental health on average, with detrimental effects for some groups in pre-registered heterogeneity analysis.” A paper by Kluender and colleagues, available from the National Bureau of Economic Research, presents some counterintuitive findings from a pair of randomized experiments regarding the effects of efforts to relieve medical debt.
  • “The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm.” A news feature by Nature’s David Adam examines the chilling prospects of AI-guided weapons on the modern battlefield.
  • …and even off the modern battlefield: meet the AI-generated “virtual veteran” that acts as a docent for memorial gallery collections honoring Australia and New Zealand’s WWI Anzac soldiers: “Virtual Veterans is an AI-driven chatbot that, when interacted with, assumes the persona of a World War I soldier, named ‘Charlie’. It uses AI techniques and algorithms to provide a guide to rich collections of resources from State Library of Queensland, Trove (Queensland digitised newspapers) and the Australian War Memorial.”
  • “…if you were writing a news article about apples, you wouldn’t put a photo of a pear at the top. But if you’re reading a story about large language models, you have a photo of a robot at the top, even though there are no robots anywhere near large language models. I think that it reinforces the opacity and difficulty accessing and understanding the technology even for people in the media and for researchers. This is creating a gap in terms of how well we understand the technology.” In an article posted at the Reuters Institute website, Marina Adami interviews Oxford Internet Institute’s Maggie Mustaklem about the problematic nature of many of the common ways we depict AI in the news and popular media.