AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

May 3, 2024

In this week’s Duke AI Health Friday Roundup: foundation models for reading echocardiograms; FDA weighs in on lab-developed tests; telling human from AI in conference abstracts; NIST addresses generative AI; USPSTF revises age recommendations for mammogram screenings; Dana-Farber describes institutional rollout of GPT for staff; how some drugs “hijack” the brain’s reward circuits; ensuring publication integrity in the age of AI; good advice for responding to peer review; much more:

AI, STATISTICS & DATA SCIENCE

Image credit: Catherine Breslin & Team and Adobe Firefly / Better Images of AI / Chipping Silicon / CC-BY 4.0
  • “…downstream evaluation is an engineering question that helps inform a procurement decision. Here, cost is the actual construct of interest. The downsides of cost measurement aren’t downsides at all; they are exactly what’s needed. Inference costs do come down over time, and that greatly matters to downstream developers. It is unnecessary and counterproductive for the evaluation to stay frozen in time.” At the AI Snake Oil blog, Sayash Kapoor, Benedikt Stroebl, and Arvind Narayanan argue that the “AI leaderboard” approach, which ranks models narrowly on measures of model performance, is insufficiently illuminating and should be replaced with a more nuanced, cost-aware approach to evaluation.
  • “Participants’ ability to distinguish human-generated from AI-generated abstracts was limited regardless of their prior experience and training. The ethical use of AI in research and writing is still debated, although over 70% of survey participants believed AI was ethical to use in writing research abstracts… We have no reservations about using AI to generate abstracts or even full articles as long as the final product can be reviewed and edited. All scientific content warrants critical appraisal, regardless of its origin.” A research letter published in JAMA Pediatrics by Ren and colleagues examined the ability of healthcare professionals to distinguish AI-generated abstracts from those written by humans.
  • “The development of robust artificial intelligence models for echocardiography has been limited by the availability of annotated clinical data. Here, to address this challenge and improve the performance of cardiac imaging models, we developed EchoCLIP, a vision–language foundation model for echocardiography, that learns the relationship between cardiac ultrasound images and the interpretations of expert cardiologists across a wide range of patients and indications for imaging.” A research article published in Nature Medicine by Christensen and colleagues describes the creation and testing of a foundation model for interpreting echocardiograms (a minimal sketch of the contrastive image–text training that underlies such models appears after this list).
  • “After engaging in discussions over many months and employing a process framework for ethical implementation of AI in our cancer center, we believed it would be better to tackle these challenges as a community, rather than prohibit the use of LLMs altogether. Here, we detail aspects of sponsorship, governance, technical implementation, program launch, socialization, user feedback, and ongoing support and user training in preparation to make generative AI LLMs broadly available to our 12,500-member workforce in a compliant, auditable, and secure manner.” A case study published in NEJM AI by Umeton and colleagues presents the experience of the Dana-Farber Cancer Institute in deploying GPT-4 for administrative use by all of its staff.
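
For readers curious about the machinery behind models like EchoCLIP, the sketch below illustrates the general CLIP-style contrastive recipe: image and text features are projected into a shared embedding space and trained so that matched image–report pairs score higher than mismatched ones. Everything here (the encoder stand-ins, dimensions, and names) is an illustrative assumption, not the published EchoCLIP implementation.

    # Minimal sketch of CLIP-style contrastive training (illustrative only;
    # not the EchoCLIP authors' code). Real encoders would replace the linear
    # projections, and real echo/report features would replace the random inputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyImageTextModel(nn.Module):
        """Projects image and text features into a shared embedding space."""
        def __init__(self, img_dim=512, txt_dim=768, embed_dim=256):
            super().__init__()
            self.img_proj = nn.Linear(img_dim, embed_dim)  # stand-in for an image encoder head
            self.txt_proj = nn.Linear(txt_dim, embed_dim)  # stand-in for a text encoder head
            self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07), as in CLIP

        def forward(self, img_feats, txt_feats):
            # L2-normalize so the dot product below is cosine similarity.
            img = F.normalize(self.img_proj(img_feats), dim=-1)
            txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
            return self.logit_scale.exp() * img @ txt.t()  # (batch, batch) similarity logits

    def contrastive_loss(logits):
        """Symmetric cross-entropy: matched image/report pairs sit on the diagonal."""
        targets = torch.arange(logits.size(0))
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    # Toy usage with random "features" standing in for encoder outputs.
    model = ToyImageTextModel()
    img_feats = torch.randn(8, 512)  # e.g., pooled features from echo clips
    txt_feats = torch.randn(8, 768)  # e.g., pooled features from report text
    loss = contrastive_loss(model(img_feats, txt_feats))
    loss.backward()
    print(f"contrastive loss: {loss.item():.3f}")

Trained at scale on paired echocardiogram clips and cardiologist interpretations, this kind of objective is what allows a single model to support retrieval and related interpretation tasks without task-specific labels.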

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Public domain image via NOAA/Operation Deep Slope 2007
  • “In deep-sea cold seeps, microbial communities thrive on the geological seepage of hydrocarbons and inorganic compounds, differing from photosynthetically driven ecosystems. However, their biosynthetic capabilities remain largely unexplored. Here, we analyzed 81 metagenomes, 33 metatranscriptomes, and 7 metabolomes derived from nine different cold seep areas to investigate their secondary metabolites… these results demonstrate that cold seep sediments serve as a reservoir of hidden natural products and sheds light on microbial adaptation in chemosynthetically driven ecosystems.” Talk about digging deep! In a paper published in Science Advances, Dong and colleagues survey the biological diversity of microbial populations found in deep-sea “cold seeps.”
  • “…the protein RHEB (Ras homolog enriched in brain), a signaling partner of mammalian target of rapamycin, is a crucial molecular substrate that enables drugs to gain access to neurons that process natural reward. This molecular mechanism is engaged in dissociable ensembles of neurons of a brain region called the nucleus accumbens. These neuronal ensembles are at the center of the addictive effects of these drugs, where conflict between drug-taking and the homeostatic regulation of hunger and thirst takes place.” A research article published in Science by Tan and colleagues sheds new light on the molecular processes by which some commonly abused drugs usurp signaling pathways that regulate basic “reward” circuits in the brain.
  • “The USPSTF recommends biennial screening mammography for women aged 40 to 74 years. (B recommendation) The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening mammography in women 75 years or older. (I statement) The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of supplemental screening for breast cancer using breast ultrasonography or MRI in women identified to have dense breasts on an otherwise negative screening mammogram. (I statement)” Earlier this week, in a statement published in JAMA, the US Preventive Services Task Force (USPSTF) revised its 2016 recommendation on breast cancer screening; the Task Force now recommends that women begin biennial screening mammography at age 40.

COMMUNICATION, HEALTH EQUITY & POLICY

Image credit: Louis Reed/Unsplash
  • “The new rules finalized Monday center on the multibillion-dollar industry of laboratory developed tests, known as LDTs, which are designed, manufactured and analyzed in a single laboratory as a screening and diagnostic tool. Major industry players, including academic medical centers that develop their own tests and large commercial laboratories, opposed the plan to further regulate the medical tests when it was proposed in September. Some analysts have said they expect the opponents to sue the FDA to prevent the new rules from going into effect.” The Washington Post’s Rachel Roubein and Daniel Gilbert report on recent action by the FDA with implications for the class of medical assays known as laboratory-developed tests (LDTs).
  • “As the landscape of image integrity issues continues to change, with papermills developing new techniques to create more authoritative content, proactive image checks will only become more important. We must continuously invest in research and development to counteract future image manipulation methods. The future of trust in scientific research will require collaboration between image integrity experts, research scientists, and reputable institutions in order to safeguard the authenticity of images in academic publications.” A guest post by Dror Kolodkin-Gal at The Scholarly Kitchen examines the vexed problem of assuring the integrity of scientific images in the age of generative AI.
  • “Importantly, some GAI risks are unknown, and are therefore difficult to properly scope or evaluate given the uncertainty about potential GAI scale, complexity, and capabilities. Other risks may be known but difficult to estimate given the wide range of GAI stakeholders, uses, inputs, and outputs. Challenges with risk estimation are aggravated by a lack of visibility into GAI training data, and the generally immature state of the science of AI measurement and safety today.” The National Institute of Standards and Technology (NIST) has released the first public draft of its risk management framework for generative AI for review and comment.
  • “There is a small, admittedly delusional part of my brain that holds out hope that the reviews will all come back saying “This is amazing! Well done! Publish immediately!” That is… not how it works. And, to reiterate something I said above and in the earlier post: it’s generally a good thing that there are comments to respond to! They pretty much always lead to a better paper.” Words of wisdom for responding to peer review, by Meghan Duffy at the Dynamic Ecology blog (H/T @RetractionWatch).