AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

February 10, 2023

In today’s Duke AI Health Friday Roundup: examining future prospects for large language model chatbots; Scandinavian study evaluates myocarditis outcomes; Black and Hispanic dialysis patients at greater risk for infections; FDA issues guidance for external controls; “jailbreak” prompting technique overrides chatbot’s ethical brakes; global agricultural use of antibiotics much higher than previously thought; closing the gap on building a culture of open research; and much more:


A painting-like image of an android with machine “neck” and “shoulders” but a human-shaped head composed of smoke and flame. Image created with Stable Diffusion Online.
  • “Besides directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures, says Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan in Ann Arbor. Because the firms that are creating big LLMs are mostly in, and from, these cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.” A Nature news feature by Chris Stokel-Walker and Richard Van Noorden plumbs the implications of generative AI applications for scientific research.
  • Two recent publications of possible interest to clinical trialists: the FDA’s new guidance on the use of external controls for studies of drugs and biologics, and a new TRIPOD checklist for prediction models that make use of cluster data.
  • “These significant shortcomings, alongside the lack of information to ensure reproducibility and transparency, are indicative of the challenges that AI in mental health needs to face before contributing to a solid base for knowledge generation and for being a support tool in mental health management.” A systematic review by Tornero-Costa and colleagues, published this week in JMIR Mental Health, finds that published reports of mental health research involving AI applications largely fall short of the quality and transparency needed to guide their interpretation and use.
  • “So far that day, Sergeant Watson had fielded seven referrals from 911, four of which he forwarded to the ski patrol. He turned to Ms. Dummer: How many crash-detection calls had come in overall? Eleven, she said, out of 30 calls total.” The New York Times’ Matt Richtel reports that some Apple Watches are creating headaches for EMS call centers, as the watches’ accelerometers mistake a relatively harmless tumble on the ski slopes for something more serious, such as a car crash, and auto-dial 911 for help.
  • [Content warning: the following article reproduces verbatim profanity and other potentially offensive language.] “It’ll be interesting to see whether there’s a protracted game of cat and mouse between companies like OpenAI, which are working to sanitize the outputs of their systems, and devious tinkerers who are trying to figure out ways to get around those controls. Will OpenAI eventually be able to lock ChatGPT down for good, or will it be a back-and-forth between clever pranksters and the company’s morality police?” At Futurism, Jon Christian details how a carefully designed prompt can accomplish a “jailbreak” – that is, bypassing safeguards meant to keep the ChatGPT chatbot from dispensing offensive commentary and exhorting users to engage in illegal behavior.


Photograph of leucistic axolotl salamander in an aquarium. The salamander is white and has a broad head with widely separated eyes and purple fringed gill-like structures around its head. Image credit: LeDameBucolique/Pixabay
  • As Valentine’s Day approaches, the National Science Foundation would like to remind everybody that nothing says “I love you” quite like a romantic tardigrade. Or vole. Or axolotl.
  • “This cohort study found that the QOL [quality of life] of TGD [transgender and gender-diverse] children and adolescents was worse than that of not only age-matched peers from the general population but also adolescents with serious mental health conditions, such as anxiety and depression…Our results also suggest multiple areas that can be targeted to improve QOL among TGD young people, including increased efforts to address bullying, coexisting mental and physical health problems, and gender dysphoria.” A study by Engel and colleagues published in JAMA Network Open examines quality of life among transgender and gender-diverse youth in Melbourne, Australia.
  • “The use of antibiotics in animal farming — a major contributor to antimicrobial resistance — is expected to grow by 8% between 2020 and 2030 despite ongoing efforts to curtail their use, according to an analysis…Overuse of antibiotics in agriculture is thought to be a major driver of the rise in humans of bacterial infections that cannot be treated with antibiotics.” A Nature news article by Sara Reardon reports on a recent analysis that finds that the use of antibiotics in agriculture worldwide is substantially greater than previously known, raising worries about increased potential for the development of antimicrobial-resistant strains of pathogens.
  • “…myocarditis after SARS-CoV-2 mRNA vaccination was associated with a lower risk of heart failure within 90 days of admission to hospital compared with myocarditis associated with covid-19 disease and conventional myocarditis…These findings suggest that the clinical outcomes of myocarditis associated with SARS-CoV-2 mRNA vaccination are less severe than the outcomes of other types of myocarditis…” A research article by Husby and colleagues published in BMJ reports results from a cohort study in four Nordic countries that examined myocarditis in persons who had received an mRNA-based COVID vaccine.
  • “One way to avert insect extinctions is to set aside the land they need to survive. But scientists know the ranges for only about 100,000 of the estimated 5.5 million insect species. To determine how well existing protected areas may be aiding insect conservation, Chowdhury and colleagues mapped the known habitats of about 89,000 of those species and compared the ranges with the boundaries of preserves from the World Database on Protected Areas.” An article in Science News by Freda Kreier reports on recent research that suggests attempts to preserve pockets of unspoiled nature are not providing adequate shelter for many of the world’s insect species, many of which are undergoing precipitous population declines.

Communication, Health Equity & Policy

Red and blue neon “open” sign in a window. Image credit: Shark Ovski/Unsplash
  • “Why — despite live examples of seeing the impact of open research practices and the indication from researchers and the academic community that they want open research practices to be the norm — is there such a disparity between awareness, behavior, and action? How can we close this gap so that behaviors align with aspirations around open science?” A guest post at Scholarly Kitchen by Erika Pastrana and Simon Adar explores what remains to be done in building a mature culture of open research among both academic authors and publishers.
  • “…Hispanic and non-Hispanic Black hemodialysis patients had the highest rates of S. aureus bloodstream infections. Hispanic and Latino dialysis patients had a 40% higher risk of S. aureus infections than white patients in the same time period, the report found. Even though a bigger proportion of white patients on hemodialysis had a central venous catheter (23%, versus 21% of Black people and 14% of Hispanic or Latino people), those from minority groups had higher rates of infection.” An article by STAT News’ Isabella Cueto unpacks findings from a CDC report that reveals stark inequities in rates of Staph infection among patients undergoing dialysis for end-stage renal disease.
  • “Retractions warn users against relying on problematic evidence. Until recently, it has not been possible to systematically examine the influence of retracted research on policy literature. Here, we use three databases to measure the extent of the phenomenon, and explore what it might tell us about the users of such evidence.” A research article by Malkov and colleagues published in Quantitative Science Studies finds a higher-than-expected proportion of retracted research articles being cited in policy literature.
  • “The big take-home lesson was that all these studies were generally considered acceptable by the expert reviewers. This is rather astounding: a chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas — questions to which nobody yet has solid answers.” In an article for The Conversation, Brian Lucey and Michael Dowling describe their investigation into the capabilities of ChatGPT when applied to scholarly research.