AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

January 26, 2024

In this week’s Duke AI Health Friday Roundup: the importance of sharing imaging data; NSF debuts National AI Research Resource; microbe genomes give up food preferences; groundswell gathers against ‘paper mills’; AMIE boasts high performance as conversational medical AI; how AI may change liability; new model for how error correction works in brains; dodging dataset shifts; NASEM recommends training on social media impacts for healthcare providers; much more:


Black and white artistic x-ray photograph of two flowers with many of the plants’ structures showing as faint traces. Image credit: Mathew Schwartz/Unsplash
  • “We understand the hesitancy to share data and code openly given the personal apprehension of criticism. However, from our experience, we found that sharing data and code has many benefits, such as facilitating collaborations and follow-up studies….Science has always been an iterative process, and in this dynamic field of AI for medical imaging, we must learn to embrace open science to accelerate the translation of our tools into clinical practice.” A special report by Laura C. Bell and Efrat Shimron appearing in the journal Radiology: Artificial Intelligence makes an urgent case for the sharing of imaging datasets to foster the growth of AI-based applications in medical imaging.
  • “Partnering with 10 other federal agencies as well as 25 private sector, nonprofit and philanthropic organizations, the NAIRR pilot will provide access to advanced computing, datasets, models, software, training and user support to U.S.-based researchers and educators. By connecting researchers and educators with the resources needed to support their work, the NAIRR pilot will power innovative AI research and, as it continues to grow, inform the design of the full NAIRR ecosystem. This pilot is a proof of concept to ignite the level of investment needed to realize the full NAIRR vision.”  The National Science Foundation announces the debut of the National AI Research Resource (NAIRR) pilot program.
  • “One way of preventing models from learning these spurious correlations is to feed them the same medical note in many different writing styles. This way, the model learns to focus on the content rather than the writing style, the researchers say….But rather than having each caregiver rewrite other physicians’ notes—which would severely drain the already scarce resource of caregivers’ time—the team used large language models to automate this process and create datasets that are resistant to the learning of faulty correlations based on writing style.” An article at the Johns Hopkins Whiting School of Engineering website by Jaimie Patterson examines recent work by researchers at Hopkins and Columbia to develop methods to avoid problems with AI models that are due to “dataset shifts.”
  • “We compared AMIE’s performance to that of primary care physicians (PCPs) in a randomized, double-blind crossover study of text-based consultations with validated patient actors in the style of an Objective Structured Clinical Examination (OSCE). The study included 149 case scenarios from clinical providers in Canada, the UK, and India, 20 PCPs for comparison with AMIE, and evaluations by specialist physicians and patient actors. AMIE demonstrated greater diagnostic accuracy and superior performance on 28 of 32 axes according to specialist physicians and 24 of 26 axes according to patient actors.” A preprint posted to arXiv by Tu and colleagues from Google Research and DeepMind has some head-turning findings from an evaluation of a conversational medical AI model called AMIE.
  • “Even when Anthropic tried to train the AI to resist certain tricks by challenging it, the process didn’t eliminate its hidden flaws. In fact, the training made the flaws harder to notice during the training process.” At Ars Technica, Benj Edwards explores a recent paper by Anthropic AI researchers that warns of the possibility of AI “sleeper agents” – large language models that can be prompted to write code with hidden vulnerabilities or backdoors under certain conditions.
  • “History provides the tools necessary to richly contextualize datasets and understand the changes over space and time that can produce dataset shift. The etymology of the word data, from the Latin “the given,” belies the complexity and contingency of the making of data. These pieces of information are not “given” but “made”: more facta than data. We believe physicians who are enthusiastic about the promise of big data must remember this lesson.” A perspective article by Andrew S. Lea and David S. Jones, published in the New England Journal of Medicine, invokes the early history of algorithmic medicine to demonstrate why medical data cannot be considered in isolation from their complex contexts.
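The style-rewriting idea from the dataset-shift item above can be sketched as a simple augmentation loop. In the sketch below, `rewrite_in_style` is a hypothetical stand-in for an LLM paraphrasing call, not the Hopkins/Columbia team’s actual pipeline; it merely tags the note so the example runs without any API.

```python
# Hedged sketch of style-based augmentation for clinical notes.
STYLES = ["terse bullet points", "verbose narrative", "telegraphic shorthand"]

def rewrite_in_style(note: str, style: str) -> str:
    """Hypothetical LLM paraphrase: same clinical content, new style."""
    return f"[{style}] {note}"

def augment_dataset(notes_with_labels):
    """Pair every stylistic rewrite with the ORIGINAL label, so a model
    trained on the result must rely on content rather than writing style."""
    augmented = []
    for note, label in notes_with_labels:
        augmented.append((note, label))  # keep the original note
        for style in STYLES:
            augmented.append((rewrite_in_style(note, style), label))
    return augmented

data = [("Pt c/o chest pain x2 days.", "cardiac")]
augmented = augment_dataset(data)
print(len(augmented))  # 1 original + 3 rewrites = 4
```

Because the label is held constant across rewrites, any correlation between a particular writing style and an outcome is broken by construction.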


Neon sign showing a social media-style “like” icon with a white neon heart next to the numeral zero, enclosed by a red neon caption bubble. Image credit: Prateek Katyal/Unsplash
  • “Patients need providers who can counsel them on social media use and spot potential warning signs. Young people who are struggling with underlying psychological problems may use social media to cope and may not necessarily realize that the coping mechanism is itself a stressor. The report recommends that accrediting bodies for the education of doctors, nurses, and social workers should incorporate training on the effects of social media on child and adolescent health into curricula.” A new report from the National Academies of Sciences, Engineering, and Medicine on the health implications of social media for children and adolescents includes a recommendation for healthcare providers to be trained on issues related to social media use.
  • “…we propose that the brain instead solves credit assignment with a fundamentally different principle, which we call ‘prospective configuration’. In prospective configuration, before synaptic weights are modified, neural activity changes across the network so that output neurons better predict the target output; only then are the synaptic weights (hereafter termed ‘weights’) modified to consolidate this change in neural activity. By contrast, in backpropagation, the order is reversed; weight modification takes the lead, and the change in neural activity is the result that follows.” A research article published in Nature Neuroscience by Song and colleagues uses insights from machine learning to posit a new model for understanding how learning and error correction work in organic brains.
  • “Last year in Cordero’s lab, research led by the microbiologist Matti Gralka identified a set of microbial functions that could be predicted without species information. After characterizing the metabolisms of 186 different bacterial strains collected from the Atlantic Ocean, he found that he could predict a given microbe’s basic food preferences based on its genome alone.” Quanta’s Dan Samorodnitsky reports on recent research that uses genomic information to predict microbial function – even without information about the microbe’s species.
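The two-phase order described in the Song et al. item above (activities settle first, weights change second) can be illustrated on a toy two-layer linear network. This energy-based relaxation is our own simplification for illustration, not the authors’ model; the layer sizes, step sizes, and energy are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 3, 4, 2
W1 = rng.normal(scale=0.5, size=(n_h, n_in))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(n_out, n_h))  # hidden -> output weights

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

def feedforward_loss(W1, W2):
    return float(np.sum((target - W2 @ (W1 @ x)) ** 2))

loss_before = feedforward_loss(W1, W2)

lr = 0.05
for _ in range(20):
    # Phase 1: with weights FROZEN, let the hidden activity h settle so
    # the output better predicts the target (the "prospective" phase).
    h = W1 @ x  # start from the feedforward activity
    for _ in range(200):
        e_h = h - W1 @ x               # hidden-layer prediction error
        e_y = target - W2 @ h          # output-layer prediction error
        h -= 0.1 * (e_h - W2.T @ e_y)  # descend the energy over activities
    # Phase 2: only now modify weights, consolidating the settled activity.
    W1 = W1 + lr * np.outer(h - W1 @ x, x)
    W2 = W2 + lr * np.outer(target - W2 @ h, h)

loss_after = feedforward_loss(W1, W2)
print(f"loss before: {loss_before:.4f}  after: {loss_after:.4f}")
```

In backpropagation the weight gradient would be computed first and the activity change would follow; here the inner loop changes activity while the weights are untouched, and the weight update then merely consolidates the settled pattern.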

Communication, Health Equity & Policy

Shredded and torn remnants of posters and handbills stapled to a wall. Image credit: Jan Huber/Unsplash
  • “A high-profile group of funders, academic publishers and research organizations has launched an effort to tackle one of the thorniest problems in scientific integrity: paper mills, businesses that churn out fake or poor-quality journal papers and sell authorships. In a statement released on 19 January, the group outlines how it will address the problem through measures such as closely studying paper mills, including their regional and topic specialties, and improving author-verification methods.” Nature’s Katharine Sanderson reports that problems created by so-called paper mills that churn out shoddy or bogus research articles have finally garnered some serious attention from academia and scholarly publishing.
  • “What keeps me up at night? Thinking about keeping up! — with changes in the ecosystem, the steadily increasing flow of papers into arXiv, and expectations of our community (submitters, readers, moderators, related infrastructures). Getting on a better footing in terms of our technology migration will definitely help on all these fronts, but what is exciting about operating in a dynamic environment can also sometimes give you pause.” The Scholarly Kitchen interviews arXiv program director Stephanie Orphan.
  • “Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient…for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability…Several factors make these determinations difficult with respect to AI and other software, especially for claims against developers.” An article published in the New England Journal of Medicine by Michelle M. Mello and Neel Guha explores the ways AI-based applications for healthcare may change the landscape of legal liability.