AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

January 10, 2025

In this week’s Duke AI Health Friday Roundup: TRIPOD releases LLM reporting guidelines; scrutinizing patient-facing genAI; prospects for a cytomegalovirus vaccine; protecting academia from predatory publishing; health implications of proteomic markers for loneliness; FDA releases draft guidance for use of AI in developing drugs and biologics; protein folding contest continues to evolve; the case for letting kids take risks in play; more:

AI, STATISTICS & DATA SCIENCE

Image credit: Yutong Liu & The Bigger Picture/ Better Images of AI / CC-BY 4.0
  • “TRIPOD-LLM provides a comprehensive checklist of 19 main items and 50 subitems, covering key aspects from title to discussion. The guidelines introduce a modular format accommodating various LLM research designs and tasks, with 14 main items and 32 subitems applicable across all categories. Developed through an expedited Delphi process and expert consensus, TRIPOD-LLM emphasizes transparency, human oversight and task-specific performance reporting.” A new TRIPOD-LLM consensus statement, published this week in Nature Medicine by Gallifant and colleagues, offers reporting guidelines for studies involving the use of large language models.
  • “Our experiments revealed critical insights into the limitations of current LLMs in terms of clinical conversational reasoning, history-taking and diagnostic accuracy. These limitations also persisted when analyzing multimodal conversational and visual assessment capabilities of GPT-4V. We propose a comprehensive set of recommendations for future evaluations of clinical LLMs based on our empirical findings. These recommendations emphasize realistic doctor–patient conversations, comprehensive history-taking, open-ended questioning and using a combination of automated and expert evaluations.” An article published in Nature Medicine by Johri and colleagues presents an evaluation framework for the use of large language models in patient-facing clinical tasks.
  • “The Pentagon’s Chief Digital and AI Office recently completed a pilot exercise with tech nonprofit Humane Intelligence that analyzed three well-known large language models in two real-world use cases aimed at improving modern military medicine, officials confirmed Thursday….In its aftermath, the partners revealed they uncovered hundreds of possible vulnerabilities that defense personnel can account for moving forward when considering LLMs for these purposes.” In an article for Defense Scoop, Brandi Vincent reports on a recent Pentagon exercise that revealed potential vulnerabilities and biases that could affect the use of LLMs in military healthcare system applications.
  • “Focusing on patients’ use of large language models (LLMs) for health care purposes, this article explores critical issues in the management of generative PAI [patient AI] in the United States, including constitutional limits on government’s regulatory authority, and identifies opportunities for public and private actors to help patients take advantage of generative PAI safely. With respect to the public sector, there is a critical need for federal action to fund research on the benefits and risks of PAI and on the development of valid metrics and performance standards for PAI.” An article by Blumenthal and colleagues, published in NEJM AI, examines issues arising from patients’ use of generative AI health applications.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image credit: Abhishek Babaria/Unsplash
  • “…leveraging data from 42,062 participants across 2,920 plasma proteins in the UK Biobank, we characterized the proteomic signatures of social isolation and loneliness through proteome-wide association study and protein co-expression network analysis. Proteins linked to these constructs were implicated in inflammation, antiviral responses and complement systems. More than half of these proteins were prospectively linked to cardiovascular disease, type 2 diabetes, stroke and mortality during a 14 year follow-up.” A research article by Shen and colleagues, published in Nature Human Behaviour, examines proteomic markers associated with social isolation and loneliness, and their implications for human health.
  • “HCMV is the leading infectious cause of birth defects, including damage to the brain, and is a common cause of complications in organ transplantation. The complex biology of HCMV has made vaccine development difficult, but a recent meeting sponsored by the National Institute of Allergy and Infectious Diseases in September of 2023 brought together experts from academia, industry, and federal agencies to discuss progress in the field….Discussion in the meeting revealed that, with the numerous candidate vaccines that are under study, it is clear that a safe and effective HCMV vaccine is within reach.” A review article published in the Journal of Clinical Investigation by Permar and colleagues examines prospects for the development of a working vaccine for cytomegalovirus.
  • “It may feel like CASP has added on side quests now that the main quest has been completed, but Moult doesn’t really see it that way. The scientists in the CASP community are in the business of calculating the structure of cellular machinery, he said, and AlphaFold’s solution only addresses the tip of the iceberg.” STAT News’ Brittany Trang reports on the CASP protein-folding competition that launched DeepMind’s AlphaFold to fame, and finds that the work initiated there is still ongoing.

COMMUNICATION, HEALTH EQUITY & POLICY

Image credit: Sebastien Varin/Unsplash
  • “Academic institutions and funders should be invested in helping their constituents avoid predatory journals. They can achieve this by making the resources mentioned herein available via institutional channels such as training materials, especially to those early in their careers, and routinely reviewing where faculty and grantees publish. Institutional librarians are familiar with the journals that people at their institution read and seek to publish in and can play an important role in helping guide authors to legitimate journals. Like authors, librarians who become aware of concerns about a journal’s legitimacy should share that information with their constituents as well as with librarians at other institutions.” An editorial published in JAMA by a group of medical journal editors asks what can be done to prevent researchers and academics from falling victim to predatory publishers.
  • “The goal in training is the creation of a new intellect. One that results from a fusion of disciplines in understanding the laws of nature and how these are executed in living beings. The key here is that this actual fusion occurs within the same cerebrum. This is not simply teaching an engineer to be a physician or vice versa. It is a conceptual blending of engineering and medicine that brings a deeper fundamental understanding of the problem space and greater potential for transformations.” A viewpoint article published in JAMA by Roderic Ivan Pettigrew speculates on the growth of a domain of expertise that combines clinical knowledge and engineering skills (H/T @danbuckland.me).
  • “Over the past two decades, research has emerged showing that opportunities for risky play are crucial for healthy physical, mental and emotional development. Children need these opportunities to develop spatial awareness, coordination, tolerance of uncertainty and confidence…Despite this, in many nations risky play is now more restricted than ever, thanks to misconceptions about risk and a general undervaluing of its benefits.” A Nature news feature by Julian Nowogrodzki examines the science behind the benefits of allowing children to take a degree of risk in play activities.
  • “The brief, 20-page document focuses on AI models used to produce data that supports regulatory decision-making about the safety, effectiveness, or quality of drugs. That could include anything from modeling to cut down on animal-based toxicology studies to developing AI-based clinical trial endpoints to evaluating adverse events after FDA approves a drug.” STAT News’ Katie Palmer has the rundown on the FDA’s recently released draft guidance on the use of AI in therapeutic development.