AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

January 13, 2023

In today’s Duke AI Health Friday Roundup: developers’ roles in building ethical AI; median US prices for new drugs top $200K in 2022; whole-parasite malaria vaccine tested; the “political economy” of misinformation; are AIs breaking copyright laws?; AI, clinical decision-making, and risks to LGBTQ patients; inpatient adverse events still common, many preventable; much more:


Indoor art installation showing people walking amongst a complex webwork of ropes and cables. Image credit: Alina Grubnyak/Unsplash
  • “Inadequacies in healthcare provision for LGBT+ people raise two sets of concerns. First, there is an increased risk of disparate impact and iatrogenic harm from AI-supported clinical decision making across LGBT+ groups….Second, the lack of reliable data and the insufficient levels of engagement with the wider queer community mean that harms caused by AI are hard to identify and even harder to mitigate.” A commentary published in Nature Medicine by Kormilitzin and colleagues describes some of the risks attendant upon using health AI for clinical decision-making for members of the LGBTQ community, and introduces a new initiative to encourage participatory research with the aim of improving understanding of the issues and informing policy recommendations.
  • “Algorithmic systems do not arise out of a morally neutral ether, get used in only morally binary ways, and then return to the neutral ether once the code finishes running. Automated systems are created by humans (who are not morally neutral) for specific ends (which are also not morally neutral) and have ongoing active and passive impacts (which are rarely morally neutral) on human life.” An interview study published in the journal AI and Ethics by Griffin, Green, and Welie explores the extent to which AI developers perceive and act on ethical issues as part of their work.
  • “Artificial intelligence and machine learning is going to be so pervasive in our everyday practice that everyone will need to have some base level of understanding in order to at least evaluate the tools that they’re using. They don’t need to be experts and they don’t need to develop this stuff, but they need to be able to say, ‘I don’t think this works well,’ and then be able to call up the developer and say, ‘I think we have a problem.’” STAT News’ Katie Palmer interviews the University of Michigan’s Erkin Ötleş and Jim Woolliscroft on the pressing need to incorporate AI concepts into clinical learning curricula, starting at the undergraduate level.
  • “Key to this change is acknowledging that ethical and societally responsible AI research must be a collective pursuit, not an individual one, and that individualised notions of harm, entrenched in current solutions, are unsuitable. We must expand our commitments to ethical research and development to being as broad in scope as the potential harms of the work we do.” In a blog post for the Ada Lovelace Institute, Quinn Waeiss describes Stanford University’s Ethics and Society Review, a mechanism integrated into the research process to identify potential societal harms that could ensue from proposed research projects, and how this model departs from standard conceptions of ethics that shape institutional review board oversight.


A single cigarette, burned down nearly to the filter and emitting wisps of smoke, stands on end on a tabletop. A small heap of ash surrounds it at the bottom. Image credit: Mathew MacQuarrie/Unsplash
  • “Our results suggest that viral recruitment may be beneficial not only for spreading the intervention, but also for motivating smokers to quit smoking. This may open new opportunities to design digital interventions for smoking cessation as team efforts and build collaborative tools for people who smoke to not only refer, but also engage with one another throughout the intervention.” In a paper published this week in JAMA Network Open, Faro and colleagues report results from a randomized trial that compared a machine learning recommender system vs a standard motivational system in assisting participants to quit smoking, together with a “viral peer recruitment tool” assigned based on how the participant was recruited to the study.
  • “After setting record-high U.S. prices in the first half of 2022, drugmakers continued to launch medicines at high prices in the second half, a Reuters analysis has found, highlighting their power despite new legislation to lower costs for older prescription products.” At Reuters, Deena Beasley reports that the median yearly price for new drugs in the US is nearly a quarter of a million dollars, a number that likely reflects “double digit year-over-year price growth.”
  • “Even today, many U.S. hospitals rely solely on voluntary reporting of adverse events, which results in substantial undercounting and, in some cases, misleading reports of zero harm. Identification of adverse events in EHRs in the future will probably be performed by means of computerization of triggers and also through leveraging of artificial intelligence.” An analysis published in the New England Journal of Medicine by Bates and colleagues finds that adverse events during hospitalization are common, occurring in roughly one in four admissions, and of those, roughly one in four was deemed preventable.
  • “Two weeks after the last dose, antibodies to falciparum circumsporozoite protein and PfSPZ were higher in protected versus unprotected vaccinees. A three-dose regimen of PfSPZ Vaccine demonstrated safety and efficacy against malaria infection in malaria-experienced adults.” In a study reported in Science Translational Medicine, Sirima and colleagues present findings from a combination nonrandomized dose-escalation/randomized trial of a “whole-parasite” malaria vaccine administered to adult volunteers in Mali and Burkina Faso.

Communication, Health Equity & Policy

Close-up shot of the Manchester Baby, the world’s first stored-program electronic computer, operational in 1948. Image credit: Parrot of Doom - Own work, CC BY-SA 3.0/Wikipedia
  • “The lesson learned by the industrialized nations was that progress in science and technology was crucial to the survival of the state, and the subsequent Cold War just hammered that home with the development of spaceflight, satellite and electronic reconnaissance, ICBMs, computing technology, and more. We’re living in the world, and specifically the scientific world, that was forged in 1939-1945, and measurements of scientific activity that start while that gigantic cymbal crash was still ringing are going to be affected by it.” At his In the Pipeline blog, Derek Lowe critiques a new Nature paper by Park and colleagues that posits that scientific papers and patents are reflecting a diminishing pace of scientific achievement – becoming less “disruptive” over time.
  • “Open licenses have tended to be looked upon by users as a free-for-all, without adequate attention to the very real concerns of the creators. In this case, the sheer scale of the alleged violation in terms of works used may well form the basis of the defense….I don’t find this argument convincing given the ability today to license many content types at scale for TDM, including images, music and yes, journal articles…but it is an argument often offered by infringers.” A blog post by Roy Kaufman at Scholarly Kitchen weighs whether certain Creative Commons licenses would permit the use of works released under those licenses for the development of content-generating AI systems, or whether such use might run afoul of copyright law.
  • “…little is known about the political economy of misinformation, particularly those campaigns spreading misleading or harmful content about public health guidelines and vaccination programs. To understand the political economy of health misinformation, we analyze the content and infrastructure networks of 59 groups involved in communicating misinformation about vaccination programs.” In an article published in the Journal of Communication, Herasimenka and colleagues explore the resources and networks that enable the propagation of medical misinformation at large scales.
  • A Fact Sheet released by the White House Office of Science and Technology Policy announces a number of new open science initiatives, including a 2023 “Year of Open Science” affecting multiple domains of research. For biomedical researchers, the Fact Sheet includes a reminder that the National Institutes of Health’s Final Policy for Data Management and Sharing goes into effect on January 25th of this year.