AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

March 20, 2026

In this week’s Duke AI Health Friday Roundup: LLM use may subtly influence users’ viewpoints; study critiques attribution methods for explainable AI; PhD students confront complexities of AI use in academia; sleeping sickness on the ropes as new therapy shows curative potential; Microsoft debuts Copilot for health information; concerns grow over RAM shortage as AI gobbles up resources; much more:

AI, STATISTICS & DATA SCIENCE

A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network. Image credit: Alan Warburton / © BBC / Better Images of AI / CC-BY 4.0
  • “Unlike the explicit persuasion attempts studied previously, AI suggestions may shape users’ cognition in ways that bypass conscious awareness, representing a more indirect and covert way in which AI can influence humans….When biased AI autocomplete suggestions inspire people to think of certain viewpoints—and make it easier for people to elaborate on those viewpoints than on others—cowriting with a biased AI writing assistant may powerfully shift people’s attitudes on various issues.” A research article published by Williams-Ceci and colleagues in Science Advances examines how LLM writing assistants can subtly influence users’ attitudes.
  • “The shortage is driven by the rise of artificial-intelligence systems, which has created a voracious demand for high-speed memory chips. Over the course of 2025, some forms of RAM tripled in price, causing problems for resource-constrained laboratories that already faced barriers to accessing powerful computing tools. The shortage is also pushing researchers to develop more efficient algorithms and hardware, to reduce the amount of memory needed.” Nature’s Heidi Ledford reports on the looming shortage of computer memory due to AI consuming much of the available stock of RAM.
  • “…physicians said there might be upsides to chatbot-assisted health care, like helping people gain insight into their health at a time when health care is becoming increasingly unaffordable. But sharing health records with tech companies creates a host of privacy risks. Like past technologies that made people overly anxious about their health, the chatbots could also lead to unnecessary trips to the doctor.” The New York Times’ Brian X. Chen and Teddy Rosenbluth report on the debut of new features in Microsoft’s Copilot AI that will allow users to give the LLM access to their electronic health records and offer summaries of personalized health information.
  • “So how do we get over the wall, scientifically speaking? Most of the experts I asked suspect that it will take a new blend of hardware and software advances. Tactile sensors for better data collection and robot hands that combine high power, compliance, and transparency with low inertia would accomplish a lot…” At Quanta, John Pavlus looks at the progress – and the refractory challenges – in developing humanoid robots capable of performing everyday tasks.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

A 12-lead ECG of a 26-year-old male with an incomplete right bundle branch block (RBBB). Image credit: MoodyGroove via Wikipedia
  • “Attribution methods demonstrated limited reliability, instability across model variants and incomplete dependence on learned parameters, constraining their utility in high-stakes settings such as healthcare. These findings suggest that attribution techniques should be used cautiously and supported by task-specific sanity checks.” A research article published in the European Heart Journal Digital Health by Arends and colleagues critiques the use of attribution methods to explain the output of deep neural nets applied to the analysis of ECG data (H/T @f2harrell.bsky.social).
  • “It’s an antiprotozoal compound, and Trypanosoma is one of the organisms it targets. That is of course the infectious agent for “sleeping sickness” (trypanosomiasis) in humans, infamously spread in tropical regions of Africa by the bite of the tsetse fly. It has historically been a terrible disease….The DNDI [Drugs for Neglected Diseases Initiative] has been partnering with Sanofi to develop the above-mentioned acoziborole, which cures the disease in a single oral dose of three pills. You can’t ask much more than that!” At his In the Pipeline blog, Derek Lowe describes a new therapy for sleeping sickness that appears to be genuinely curative.
  • “…digital, self-guided, single-session interventions (SSIs) deliver structured psychological support within one interaction. Here we crowdsourced 66 diverse 10-min SSIs for depression and… selected 11 for testing in a preregistered online randomized controlled trial…Nearly all SSIs improved psychological outcomes immediately after completion…However, only two SSIs significantly reduced depression at 4-week follow-up…” A study published in Nature Human Behaviour by Kaveladze and colleagues explored the effects of digital single-session interventions for the treatment of depressive symptoms.
  • “…the years I have helped care for my wife, I have learned that Alzheimer disease is not a single loss, but many. The losses arrive gradually: memory, independence, judgment, recognition. Each progressive loss requires adjustment. Each adjustment comes with grief. And every caregiver learns, eventually, that the medical system does not always recognize the emotional cost of those adjustments.” In an opinion article for JAMA, Wesley Burks provides a highly personal view of the complex landscape of care for patients with dementia – particularly when the treatment algorithm falls short.

COMMUNICATIONS & POLICY

A lone person walks a stone labyrinth laid out on a beach promontory with rocks and surf in the immediate background. Image credit: Ashley Batz/Unsplash
  • “Doctoral students are now charting paths through territory their supervisors never had to navigate. Some use AI daily and swear by it; others refuse to touch it, worried about the cost to their development as researchers. Most fall in between, working out their own rules for when AI helps and when it hinders.” At Nature, Linda Nordling probes the ambiguous territory of AI use by graduate students whose feelings about the technology range from enthusiastic acceptance to profound skepticism.
  • “I loaded up one of my own drafts into Grammarly, and once again clicked the ‘expert review’ button. As before, Grammarly seemed to be looking for ‘inspiration’ from the experts who would be certain to hate this feature the most…. you can refine your suggestions by clicking automatically generated topic tags. Once when I did this, Grammarly offered to show me hallucinated experts in ‘media ethics,’ and the force of the irony was sufficient that I had to briefly lay down.” At his Platformer blog, Casey Newton describes his experience with Grammarly’s AI-based expert writing advice feature, which purported to offer suggestions from the perspectives of a myriad of prominent writers, some of them deceased, none of them approached beforehand for permission, and some of them now lawyering up.
  • “This discussion was revealing – not because it changed my position, but because it exposes fundamental patterns in the AI debate…Defensive deflection, TINA rhetoric, resignation, victim mentality, nihilism…These aren’t fringe phenomena, but precisely the arguments we must contend with, again and again, whenever we talk about AI. After all this, I stand by my proposal: Anyone who submits hallucinated references should be sanctioned. Desk reject and one-year ban.” A blog post by business ethics expert Dorothea Baur addresses and dismisses a series of arguments that arise during discussions of responsibility and accountability for errors introduced by the use of AI systems in publication.
  • “AI Surrogates are envisioned as expanding the diversity of populations and contexts that we can feasibly study with the tools of cognitive science. Here, we caution that investing in AI Surrogates risks entrenching research practices that narrow the scope of cognitive science research, perpetuating ‘illusions of generalizability’ where we believe our findings are more generalizable than they actually are.” An opinion article published in Trends in Cognitive Sciences by Crockett and Messeri warns against the potential pitfalls of using AI simulations of human behavior in research.