AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

March 31, 2023

In this Friday’s Duke AI Health Roundup: guiding patients through the digital care maze; human-centered design for AI; dissecting the Internet Archive ruling; demanding safeguards for biometric data; new organ donation rules create winners and losers; chatbots and a theory of mind; global equity in research collaborations; FDA issues guidances on AI, cybersecurity; US stillbirth rate remains high; trying to understand AI risks without being able to see under the hood; much more:


  • “This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders, using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities.” A perspective article by Chen and colleagues, published in the Journal of Medical Internet Research, offers the concept of “human-centered design” as a potential countermeasure for bias in artificial intelligence applications in healthcare.
  • “Microsoft and OpenAI are rolling out extraordinarily powerful yet unreliable systems with multiple disclosed risks and no clear measure either of their safety or how to constrain them. By excluding the scientific community from any serious insight into the design and function of these models, Microsoft and OpenAI are placing the public in a position in which those two companies alone are in a position to do anything about the risks to which they are exposing us all.” Continuing a series of articles expressing a skeptical countercurrent amid widespread enthusiasm for the perceived capabilities of recently released large language model AIs, Gary Marcus raises the issue of what the lack of transparency around proprietary AI models means for the public.
  • “The start of the book is a futuristic vision of medical chatbots: a second-year medical resident who, in the midst of a crashing patient, turns to the GPT-4 app on her smartphone for help. While it’s all too common these days on medical rounds for students and residents to do Google searches, this is a whole new look.” In an essay at his Ground Truths blog at Substack, Eric Topol reviews the forthcoming book The AI Revolution in Medicine by Lee, Goldberg, and Kohane.
  • “Dr. Gopnik compared the theory of mind of large language models to her own understanding of general relativity. ‘I have read enough to know what the words are,’ she said. ‘But if you asked me to make a new prediction or to say what Einstein’s theory tells us about a new phenomenon, I’d be stumped because I don’t really have the theory in my head.’ By contrast, she said, human theory of mind is linked with other common-sense reasoning mechanisms; it stands strong in the face of scrutiny.” In an article for the New York Times, Oliver Whang addresses the evidence for theories recently put forward that new large language model AIs have developed a theory of mind – the ability to model other beings’ mental and emotional states. (For some additional perspective on this question, see this essay by Brian Gallagher at Nautilus.)


People trying to find their way through a maze, photographed from above. Image credit: Susan Q Yin/Unsplash
  • “Another key part of the work is addressing the specific language needs of the patients. That starts with accurately collecting demographic information on patients, including race, ethnicity, and language spoken, said Bryant: ‘We really feel strongly that we can’t improve things that we can’t measure. So if we’re not measuring things accurately, we’re not doing anyone any justice.’” At STAT News, Ambar Castillo looks at how a Massachusetts health system has worked to provide technical support for patients in ways that ensure that they are able to access needed care and health information.
  • “One of the reasons stillbirth data often is incomplete or inaccurate is that autopsies, placental exams and genetic testing are not uniformly performed. And even if one or more of those exams are carried out and do reveal a cause of death, that critical piece of information is typically not updated in state or federal databases….ProPublica found that in 2020, placental exams were performed or planned in only 65% of stillbirth cases and autopsies were conducted or planned in less than 20% of cases.” ProPublica’s Duaa Eldeib examines the story behind a report recently released by the NIH that shows an alarmingly high rate of stillbirth persisting in the United States.
  • “New rules requiring donated livers to be offered for transplant hundreds of miles away have benefited patients in New York, California, and more than a dozen other states at the expense of patients in mostly poorer states with higher death rates from liver disease…The shift was implemented in 2020 to prioritize the sickest patients on waitlists no matter where they live. While it has succeeded in that goal, it also has borne out the fears of critics who warned the change would reduce the number of surgeries and increase deaths in areas that already lagged behind the nation overall in health care access.” A story by Jess Sutner, jointly published by The Markup and the Washington Post, examines the effects of recent changes to rules governing organ donation.
  • “Forty years of research into how to make an H.I.V. vaccine helped make rapid Covid-19 vaccine development feasible. These tools and others led to breakthroughs that directly informed Covid-19 vaccine development in 2020…Still, I am concerned that our social order and national and global governance systems are not keeping pace. Having next-generation vaccine technology without adequate systems for implementation and distribution to all people is a waste.” In an editorial for the New York Times, vaccinologist Barney Graham points out the gulf between our ability to rapidly develop effective vaccines and the actual ability to get them to patients in time to prevent public health disasters.

Communication, Health Equity & Policy

Photograph of library bookshelves with a series of circular tunnels cutting across them. Image credit: vnwayne fan/Unsplash
  • “The legal theory behind this CDL is that a library can digitize a print book and loan out the digitized version so long as it sequesters the print copy. The Internet Archive (IA) did this on a global scale, enabling anyone with an email address to have a virtual library card, and then lifting what controls it had established in the early days of the pandemic. Publishers by and large do not accept this legal theory, at least as practiced by the Internet Archive, and several filed suit.” A group of regular contributors to Scholarly Kitchen offer their views on the recent court ruling against the Internet Archive regarding their practice of “controlled digital lending”; in particular, the IA’s pandemic-era practice of offering unlimited global lending of copies of books the IA had digitized.
  • “Sharing data, for example, requires having enough institutional infrastructure and resources to first curate, manage, store and (in the case of data relating to people) encrypt the data — and to deal with requests to access them. Also, the pressure placed on researchers of LMICs [lower/middle income countries] by high-income-country funders to share their data as quickly as possible frequently relegates them to the role of data collectors for better-resourced teams.” In a commentary for Nature, Horn and colleagues discuss guidelines for making research collaboration more equitable and globally inclusive.
  • “Admittedly, it can be hard work to find this kind of situation. First, you need to know that you have value. Don’t let anyone talk you into believing you’re worth less than what you know you’re truly worth, given your diverse set of skills, knowledge, and experiences. Then you need to locate employers who will see your value and will give you an environment where you can thrive.” A Science career perspective article by Alaina G. Levine underscores the importance of finding a respectful working environment.
  • The US Food and Drug Administration released a pair of guidances for industry this week: one, a relatively brief notice describing the agency’s current and future approaches to requirements for cybersecurity provisions in premarket submissions of cyber devices; the other, a draft guidance pertaining to the iterative regulation of AI/machine-learning-enabled devices as they are continuously improved or updated.
  • “It is reasonable to not want our every step in a public space tracked and recorded in a database next to our names. It is reasonable to not want our likenesses stolen to scam our family members. We do not expect that our pictures on social media can be used for training private facial recognition algorithms without our permission. It is reasonable to not be coerced into selling access to our biometrics for use in future applications that we may not comprehend now.” In an op-ed for the Raleigh News and Observer, Duke professor Cynthia Rudin and East Chapel Hill High School student Lance Browne argue for robust federal policy to protect individuals’ biometric data.