AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

December 15, 2023

In this week’s Duke AI Health Friday Roundup: applying large language models to robotics; new insights into severe morning sickness; New England Journal reflects on historical medical injustices; clarifying the economics of generative AI; ozone pollution responsible for elevated risk of low birth weight in many LMICs; “productivity paradox” may temper expected benefits of AI in healthcare; prompt injection risks for customized GPTs; surge of article retractions in 2023; much more:


Industrial robot comprising a single large manipulator arm in a brightly lit white space, poised as if preparing to retrieve one of multiple items on numbered shelves arrayed around it.
Image credit: Zhenyu Luo/Unsplash
  • “This collection contains 8 papers which cover an extensive range of topics including robotic applications such as chemistry, robotic control, task planning, anomaly detection and many more. LLMs play a significant role in each one of these articles, crucially contributing to and enabling new approaches that were not possible before….In order to further understand the impact of LLMs on the research process even beyond robotics, we encouraged the authors of the submissions to use LLMs as the authors see fit in the process of producing the paper. This includes ideation, figure and text generation, and proofreading.” The November 17 issue of the journal Autonomous Robots is dedicated to the use of large language models in robotics – as well as the drafting of the papers included in the issue.
  • “While optimism regarding genAI is certainly warranted, so too is skepticism. In 1993, one of us (E.B.) described ‘the productivity paradox of information technology,’ citing repeated examples in various fields in which promising technologies initially failed—sometimes for decades—to deliver on the promise of improving productivity. Health care’s recent experience with digital transformation, largely through the implementation of electronic health records (EHRs), has hewed closely to the ‘productivity paradox’ model.” A Special Communication published in JAMA by Robert Wachter and Erik Brynjolfsson invokes the “productivity paradox” as the authors examine the merits of claims for the transformative potential of generative AI in medicine.
  • “I think that this paradox of impressive intelligence being associated with less impressive money-making ability might be an unavoidable theme for AI, at least for now, simply because the success of contemporary AIs is based almost entirely on the massive size and quality of the data sets used to train them….Call it the supply paradox of AI: the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.” An essay by Erik Hoel at The Intrinsic Perspective takes a step back from the currently swirling LLM hype to ask what, exactly, the current crop of AIs is “disrupting.”
  • “Through comprehensive testing of over 200 user-designed GPT models via adversarial prompts, we demonstrate that these systems are susceptible to prompt injections. Through prompt injection, an adversary can not only extract the customized system prompts but also access the uploaded files. This paper provides a first-hand analysis of the prompt injection, alongside the evaluation of the possible mitigation of such attacks. Our findings underscore the urgent need for robust security frameworks in the design and deployment of customizable GPT models.” A preprint article by a group of researchers from Northwestern University examines the vulnerability of an array of custom builds of GPT large language model AIs to the tactic of “prompt injections” designed to make the AI engage in undesirable behavior.


Aerial photograph of a city skyline and waterfront, with skyscrapers and other buildings enveloped in smog and haze.
Image credit: Alex Gindin/Unsplash
  • “The effect of O3 on birthweight in low- and middle-income countries (LMICs) remains unknown. A multicenter epidemiological study was conducted to evaluate the association between maternal peak-season O3 exposure and birthweight, using 697,148 singleton newborns obtained in 54 LMICs between 2003 and 2019. We estimated the birthweight reduction attributable to peak-season O3 exposure in 123 LMICs based on a nonlinear exposure-response function (ERF)…. The mean reduction in birthweight reduction attributable to O3 across the 123 LMICs was 43.8 g (95% CI: 30.5 to 54.3 g) in 2019.” A research article by Tong and colleagues, published in Science Advances, finds that air pollution (ozone in particular) is affecting the birthweight of infants across large portions of Africa and Asia.
  • “The researchers found that women who had high levels of the hormone GDF15 before they got pregnant had minimal reactions to it while carrying their baby. The findings suggest that giving GDF15 to those at high risk of hyperemesis gravidarum before pregnancy could protect them from the condition. O’Rahilly says that although their study suggests that GDF15 influences the risk of severe sickness, other factors might have a role.” Nature’s Carissa Wong reports on recent research that sheds light on the causes of a significant source of misery – and potential risk – in pregnancy: severe morning sickness, or hyperemesis gravidarum.
  • “The deep penetration of FUS waves allows the volumetric fabrication of opaque (nano)composites and printing through centimeter-thick tissues that are not attainable through state-of-the-art light-based printing techniques (table S9). The self-enhancing sono-ink design can be generalized for different systems, greatly expanding the materials library for acoustic printing techniques.” A research article published in Science by Kuang and colleagues describes the development of a technique for 3-D printing using ultrasound-sensitive “sono-inks” – an engineering feat that could potentially permit highly focused 3-D printing to be done through layers of intervening material.

Communication, Health Equity & Policy

Tracings of anatomical illustrations of bones spread out on a table, with a clipboard showing an anatomical illustration of a human spine placed on top in the center.
Image credit: Joyce Hankins/Unsplash
  • “…we recognize that the Journal and other medical institutions have in the past justified and advocated the mistreatment of groups on the basis of their race, ethnicity, religion, sex or gender, and physical or mental conditions. We have therefore commissioned an independent group of historians to produce a series of articles covering various aspects of the biases and injustice that the Journal has helped to perpetuate in its more than 200 years. Today we publish the first in this series.” The New England Journal of Medicine has embarked on a series of commissioned articles that are part of a historical reckoning for past medical injustices – including those that the Journal may have had a part in.
  • “It was no surprise when the Biden administration recently outlined in a broad executive order the challenges AI poses and what is needed to address them, including within health care. AI has enormous potential to shape health care’s future, but its use comes with serious ethical responsibilities….Those of us who work in health technology need to build on the administration’s initiative and see to it that AI benefits all patients while ensuring more accurate diagnoses.” A “First Opinion” article by Peter Shen (login required) appearing in STAT News makes the case for applying medical ethics to the development of AI intended for healthcare applications.
  • “The bulk of 2023’s retractions were from journals owned by Hindawi, a London-based subsidiary of the publisher…So far this year, Hindawi journals have pulled more than 8,000 articles, citing factors such as ‘concerns that the peer review process has been compromised’ and ‘systematic manipulation of the publication and peer-review process’, after investigations prompted by internal editors and by research-integrity sleuths who raised questions about incoherent text and irrelevant references in thousands of papers.” Nature’s Richard Van Noorden reports on the substantial uptick in the number of article retractions in 2023, and explores the reasons for the increase.
  • “What would an epistemically healthy society look like? Institutions in government, science, and the media would be diverse, inclusive, transparent, and accountable. Citizens would feel that they have a stake in public decision making and have the resources and opportunities to participate within it. Experts and policymakers would engage in honest communication, acknowledging uncertainty and treating the public as rational agents capable of handling hard truths, not panic-prone populations to be managed with strategic messaging.” An essay by Daniel Williams appearing in AIA News attempts to counter misapprehensions about the nature and causes of misinformation and points toward some constructive ways to grapple with the problems it poses.