AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

March 3, 2023

In today’s Duke AI Health Friday Roundup: clinical vs general large language models for medical NLP; Lilly announces cost caps for insulin products; dangers of ‘algorithmic paternalism’; parental social support and mental health of LGBTQ kids; scoring system for housing help may be adding to inequity; rural hospitals see loss of obstetric/maternity services; working to improve health literacy and fighting misinformation; much more:

AI, STATISTICS & DATA SCIENCE

Image credit: Amador Loureiro/Unsplash
  • “We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B.” In a paper posted at the Meta Research website (also available as a preprint from arXiv), Touvron and colleagues describe the creation of LLaMA, a new family of large language models that the authors report can match or outperform comparable, and in some cases much larger, models. Notably, LLaMA was trained only on publicly available data.
  • “With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records…. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data…” A preprint article by Lehman and colleagues, available from arXiv, explores the question of whether specialized clinical large language models outperform general-purpose LLMs on natural-language processing tasks in clinical settings.
  • “…we demonstrated the ability to serially and comprehensively predict neonatal outcomes from various maternal conditions extracted from EHRs. Using advanced machine learning methodologies, we have found previously unreported associations between maternal conditions (anemia, certain medication exposures, and social determinants of health) and neonatal outcomes such as NEC, BPD, IVH, PDA, and CP that have clinical plausibility.” A paper published in Science Translational Medicine by De Francesco and colleagues presents results from a study of a deep learning model, trained on EHR data, that was used to evaluate risks for a number of potential adverse outcomes in neonates.
  • “‘I don’t think we are at a place where we can just let algorithms run and make the decisions,’ said Michael Pencina, director of Duke AI Health, an initiative at Duke University School of Medicine that works on AI and machine-learning research. Generally, medical AI programs use an algorithm or set of algorithms that learn and get better over time with input.” The Wall Street Journal’s Sumathi Reddy reports on the cautious approach some health systems are following as they adapt AI and machine learning tools for healthcare applications.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image credit: Yannick Menard/Unsplash
  • “If you’ve ever heard the term ‘alpha wolf,’ you might imagine snapping fangs and fights to the death for dominance….But it turns out that this is a myth, and in recent years wildlife biologists have largely dropped the term ‘alpha.’ In the wild, researchers have found that most wolf packs are simply families, led by a breeding pair, and bloody duels for supremacy are rare.” At Scientific American, Stephanie Pappas explores the science that has helped dismantle the notion of the “alpha male,” in both literal and figurative senses.
  • “As they continued to do their jobs during the pandemic, essential workers — who were already managing multiple layers of social, economic, and professional precarity — faced new forms of surveillance around their health. While this health data collection was introduced at a time of increasing public visibility for essential workers, it wasn’t matched by meaningful increases in dignity, benefits, or information sharing. Instead, workers faced major regulation-induced gaps in critical information about who was sick in the workplace — information they needed to assess their risks and the risks to their families.” A new report by Garofalo and colleagues from Data & Society delves into equity issues surrounding the collection of health data from essential workers during the earlier stages of the COVID pandemic.
  • “Results indicated that perceived parental social support and psychological control were each uniquely and independently linked with youth depressive symptoms. Consistent with study hypotheses, perceived parental social support was linked with fewer depressive symptoms whereas perceived parental psychological control was linked with greater depressive symptoms, each controlling for the influence of the other.” A research article by McCurdy and Russell, published this week in the journal Child Development, investigates associations between parental support and mental well-being in LGBTQ children.
  • “Over her decades in government, academia and hospital medicine, she’s seen what happens when people don’t understand or trust their health care provider. The problem can be particularly striking, she says, among Black Americans, who report higher levels of mistrust in the medical system than whites and suffer worse outcomes in everything from maternal mortality to mental health to life expectancy.” NPR’s Ryan Levi and Dan Gorenstein profile physician Lisa Fitzpatrick, founder of Grapevine Health, whose work centers on improving health literacy among the public.
  • “From 2015 to 2019, there were at least 89 obstetric unit closures in rural hospitals across the country. By 2020, about half of rural community hospitals did not provide obstetrics care, according to the American Hospital Association…In the past year, the closures appear to have accelerated, as hospitals from Maine to California have jettisoned maternity units, mostly in rural areas where the population has dwindled and the number of births has declined.” The New York Times’ Roni Caryn Rabin reports on the phenomenon of rapidly dwindling maternity services in rural hospitals.

COMMUNICATION, HEALTH EQUITY & POLICY

Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0
  • “Professional guidance for clinicians regarding the use of AI should consider how to maintain a commitment to their codes of ethics and values, not only concerns about medicolegal liability. AI can be a catalyst for a renewed humanism in medicine, but this vision will only be achieved by strengthening the values-based commitments of clinicians—with a focus not on achieving perfect accuracy, but on furthering the best interests of patients.” In correspondence published in Nature Medicine, McCradden and Kirsch warn against the dangers of unthinkingly incorporating “algorithmic paternalism” into health AI tools.
  • “An analysis of more than 130,000 VI-SPDAT surveys taken in the Los Angeles area as far back as 2016 found that White people received scores considered ‘high acuity’—or most in need—more often than Black people, and that gap persisted year over year…The disparity is particularly stark among those who took a variation of the survey designed for people under 25.” At The Markup, Colin Lecher and Maddy Varner report findings from an investigation of a scoring algorithm for subsidized housing used by the city of Los Angeles – one that may be amplifying systemic inequities by giving unhoused Black and Latino applicants lower “acuity” scores.
  • “Despite these challenges, I don’t regret our decision to retract the paper. It may have been embarrassing and humbling, but it was the right thing to do. And the experience helped me grow as a scientist. I had made my data and code for the Nature paper openly accessible so others could review and verify my findings, and I have a new appreciation of the value of doing so.” In a perspective article for Science, Jaivime Evaristo describes the difficult but salutary experience of retracting a first-author publication after discovering a major analytical mistake.
  • “Eli Lilly will cut prices for some older insulins later this year and immediately expand a cap on costs insured patients pay to fill prescriptions….The moves announced Wednesday promise critical relief to some people with diabetes who can face annual costs of more than $1,000 for insulin they need in order to live. Lilly’s changes also come as lawmakers and patient advocates pressure drugmakers to do something about soaring prices.” The Associated Press reports that insulin manufacturer Eli Lilly will be capping out-of-pocket costs for several varieties of insulin products.