AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

June 30, 2023

In this week’s Duke AI Health Friday Roundup: reflecting on the future of AI; successful creation of model “human embryoid” raises challenging questions; transparency reporting for generative AI; health imperiled by excessive heat; internet already feeling the strain of AI-generated junk content; chemo shortage becoming acute; a health AI code of conduct; disappointing report card for internet privacy protections for kids; leveraging social networks for contact tracing; much more:

AI, STATISTICS & DATA SCIENCE

A selective focus photograph of a crystal ball, perched on a ledge or precipice, shows the landscape below refracted and inverted.
Image credit: March Schulte/Unsplash
  • “To nurture a future where AI is leveraged to the benefit of people and society, it is crucial to bring together a wide array of voices and viewpoints…With this goal in mind, we invited 20 experts, with specialties encompassing a broad spectrum—spanning the fields of business, economics, education, engineering, health, history, law, mathematics, medicine, psychology, and the sciences—to explore the capabilities of GPT-4 and provide their insightful reflections in the form of essays.” Microsoft Chief Scientific Officer Eric Horvitz curates a set of essays exploring the future possibilities of AI technologies in an online anthology series.
  • “The Artificial Intelligence Code of Conduct (AICC) project is a pivotal initiative of the National Academy of Medicine (NAM), aimed at providing a guiding framework to ensure that AI algorithms and their application in health, medical care, and health research perform accurately, safely, reliably, and ethically in the service of better health for all.” The National Academy of Medicine announces its Artificial Intelligence Code of Conduct project, aimed at distilling a broadly shared framework for trustworthy AI in healthcare and medical research.
  • “Careful investigation of LLMs might reveal more about their capacities, but it might also lead to insights about the nature of learning more broadly. As strong statistical learners, LLMs provide a valuable proof of concept of how abstractions can – or cannot – emerge purely from data-driven learning.” A commentary in Nature Reviews Psychology by Michael C. Frank proposes the application of techniques from developmental psychology to unravel what’s going on with otherwise inscrutable large language model AIs.
  • “Knowing which types of harms most significantly affect real people will help researchers understand which harm-mitigation interventions are most in need of development. It will help educators better teach people how to use generative AI responsibly. And most importantly, it will help regulators and civil society hold companies accountable for enforcing their policies.” A viewpoint article by Arvind Narayanan and Sayash Kapoor, published at the Columbia University Knight Institute blog, makes a detailed argument for transparency reporting from companies that offer generative AI products.
  • “The worst outcome would be to have policymaking without general societal awareness of what these things can do and what their limitations are. I don’t know how else to have that legislation…I do think that making it available for use allows the public and others to be part of a very important conversation: How do you make it safer? And there are some fundamental questions that I think are uncomfortable for some of the companies producing these models, such as, which data were used to train this model?” At Medscape, Eric Topol hosts a wide-ranging conversation with NEJM AI editor Isaac S. Kohane about the implications of recent developments in generative AI in medicine (video with transcript available).

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Photograph showing sun, blurred by clouds, setting behind ranges of hills.
Image credit: Scott Goodwill/Unsplash
  • “Extreme heat is the number-one weather-related cause of death in the U.S., and it kills more people most years than hurricanes, floods and tornadoes combined. Yet research shows that compared with their thinking about dramatic events such as storm surges and wildfires, people tend to feel more uncertain about what to do under the threat of extreme heat and don’t perceive as much personal risk.” A timely article at Scientific American by Terri Adams-Fuller examines the oft-underestimated perils of extreme heat and some possible strategies for mitigating the harm from a phenomenon that is only likely to increase in frequency.
  • “Here, we establish a model of the human post-implantation embryo, a human embryoid, comprised of embryonic and extraembryonic tissues. We combine two types of extraembryonic-like cells generated by transcription factor overexpression with wildtype embryonic stem cells and promote their self-organization into structures that mimic several aspects of the post-implantation human embryo.” A research article by Weatherbee and colleagues published in Nature (accepted but unedited version available online ahead of print) describes the successful creation of a model human “embryoid.” An article by Philip Ball published (with serendipitous timing) in Quanta this month examines some of the complexities presented by efforts to create model human embryos.
  • “Intas provided America with a lot of frontline chemotherapy drugs—half of the country’s supply in some cases—that are used to treat more than a dozen types of cancer. When the disastrous inspection led the company to halt production, other manufacturers couldn’t make up the difference. Hospitals are now reeling: In a recent survey, 93 percent of U.S. cancer centers said they were experiencing a shortage of the drug carboplatin, while 70 percent were low on another, cisplatin.” The Atlantic’s Ed Yong reports on the slow-motion medical crisis unfolding as the pipeline of vital generic cancer drugs slows to a trickle.
  • “With the goal of more efficiently identifying and contacting community members at risk of infection, we designed a protocol to identify new cases of SARS-CoV-2 in the community. We used social network analysis to streamline potential contacts of known cases based on descriptions of each case’s contacts, community locations visited, and recent activities, with the aim of minimizing efforts needed to identify new cases in the community.” A research article published in the Journal of Public Health Management and Practice by Pasquale and colleagues describes a scalable approach to contact tracing that leveraged the social networks of study participants in a Durham-area COVID contact tracing study.
  • “Anti-trans legislation and rhetoric frequently suggest that taking gender-affirming hormones is a big health risk. But at the annual meeting of the Endocrine Society last week, researchers presented swaths of evidence on the benefits of gender-affirming hormones and the harms of trans discrimination, with the hopes of combating unscientific opposition.” An article by STAT News’ Theresa Gaffney highlights new research presented at a recent Endocrine Society conference underscoring the potential benefits of gender-affirming hormone therapy for transgender persons – as well as the harms of anti-trans discrimination.

COMMUNICATION, HEALTH EQUITY & POLICY

Small toy robot, shaped roughly like an MP3 player with arms and legs, with an angry, frowning expression on its face, apparently lifting a dumbbell.
Image credit: Alejandro Mendoza/Unsplash
  • “Given money and compute, AI systems — particularly the generative models currently in vogue — scale effortlessly. They produce text and images in abundance, and soon, music and video, too. Their output can potentially overrun or outcompete the platforms we rely on for news, information, and entertainment. But the quality of these systems is often poor, and they’re built in a way that is parasitical on the web today.” A series of articles from The Verge, MIT Technology Review, and Tom’s Hardware documents how AI is clogging the discourse on the web and corrupting search, surreptitiously hijacking e-commerce, and engaging in plagiarism at industrial scale, respectively. And for a chaser: this Nature editorial noting that recent warnings from tech figures about apocalyptic but low-probability AI scenarios are diverting attention from more substantive risks.
  • “It’s one of the most crucial questions people have when deciding which health plan to choose: If my doctor orders a test or treatment, will my insurer refuse to pay for it?… Yet, how often insurance companies say no is a closely held secret. There’s nowhere that a consumer or an employer can go to look up all insurers’ denial rates — let alone whether a particular company is likely to decline to pay for procedures or drugs that its plans appear to cover.” A ProPublica report by Robin Fields explores the rates at which insurers deny customer claims – and investigates the reasons behind this lack of transparency.
  • “…many women find that their challenges with misogyny have not even remotely abated as they progressed to senior positions in their fields. A lot of men are happy to support and mentor a woman who is in a junior, less-powerful position. Yet, in our experience, willingness to support and advocate generally diminishes for women as they achieve parity or seniority, and it is not replaced with a respectful working relationship of equals.” An article at Nature by Alison Bentley and Rachael Garrett puts a spotlight on more subtle manifestations of misogyny that still plague the scientific workplace.
  • “…our research found that most businesses in the industry are either not in compliance with the new CPRA definition of selling or sharing data, or are not being transparent about how they are really monetizing our data. As a result, companies are misleading kids and families about privacy….If companies can say one thing but do another, then consumers don’t have a meaningful choice if their data is sold, and that’s especially concerning when it comes to kids.” A report recently released by Common Sense Media offers a survey of the current state of internet and social media privacy protections for kids.