AI Health Roundup – January 23, 2026

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

January 23, 2026

In this week’s Duke AI Health Friday Roundup: interview with Yann LeCun on going all in on world models for AI; cows discover tool use; challenges to integrity of peer review grow; should people get comfortable with imperfect health AI?; too many elderly patients still getting meds that affect central nervous system; questioning productivity boosts from AI; benefits of nature (real and perceived) for urban dwellers; more:

AI, STATISTICS & DATA SCIENCE

Close up photograph of the northern hemisphere of a colorful modern globe, with other globes out of focus in the background. Image credit: Juliana Kozoski/Unsplash
  • “What’s easy for us, like perception and navigation, is hard for computers, and vice versa. LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions. This is why we don’t have a domestic robot that is as agile as a house cat, or a truly autonomous car….An agentic system that is supposed to take actions in the world cannot work reliably unless it has a world model to predict the consequences of its actions.” In an interview with Caiwei Chen at MIT Technology Review, AI expert Yann LeCun, having recently parted ways with Meta, provides some glimpses into where he sees the field headed next.
  • “…in August, I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data. At that moment, all of my chats were permanently deleted and the project folders were emptied — two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page.” In an article for Nature, scientist Marcel Bucher details what happened when he opted out of ChatGPT’s data consent option.
  • “Mountains of research as well as cases of workplace deployment of AI have suggested that the tech is far from being ready for primetime. One notable MIT study found that 95 percent of companies that integrated AI saw zero meaningful growth in revenue. For coding tasks, one of AI’s most widely hyped applications, another study showed that programmers who used AI coding tools actually became slower at their jobs.” At Futurism, Frank Landymore reports on an analysis that suggests that business productivity boosts expected in the wake of AI adoption have so far not materialized.
  • “An editor from a well-known journal in his field had asked him to review a paper that they were considering for publication. It seemed like a straightforward piece of science. Nothing set off any alarm bells, until Quintana looked at the references and saw his own name. The citation of his work looked correct—it contained a plausible title and included authors whom he’d worked with in the past—but the paper it referred to did not exist.” In an article for The Atlantic, Ross Andersen sounds the alarm for peer-reviewed research as the sheer volume of AI slop threatens to overwhelm the system.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

A low-angle photograph shows a group of brown cows facing the camera’s perspective. Image credit: Grant Brookes/Unsplash
  • “Over the next 2 weeks, Osuna-Mascaró conducted 70 trials with Veronika. He placed a heavy-duty deck brush in front of her in various orientations and recorded what happened. In almost every case, the cow used the sweeper as a tool. She wrapped her long tongue around the handle, flipped it so the brush faced her body, and changed the length of the broom so the rough bristles scratched various hard-to-reach areas of her back.” Science’s David Grimm reports on a remarkable Austrian cow who, in her search for a superior back scratch, joined the ranks of tool-using animals.
  • “Despite decades of guidelines cautioning against their use, many older adults receive potentially inappropriate CNS-active medications. Patients with cognitive impairment were more likely than those with normal cognition to receive such medications….Encouraging declines in potentially inappropriate CNS-active prescriptions were driven by reductions in prescriptions lacking clinical indications and benzodiazepines and nonbenzodiazepine hypnotics. Nevertheless, among beneficiaries receiving potentially inappropriate CNS-active prescriptions, more than two-thirds lacked documented clinical indications…” A research letter published in JAMA by Yang and colleagues presents findings suggesting that contrary to suggestions contained in clinical practice guidelines, many elderly patients are receiving potentially problematic medications that affect the central nervous system.
  • “We found urban dwellers’ perceived exposure to nature had a direct effect on happiness, whereas the indirect effects of objective exposures on happiness were mediated by the perceived exposure…. The different impacts of objective and perceived exposures to nature on happiness, as well as the unequal effects of three types of objective exposures on perceived exposure, call for considerations in sustainable urban planning and management policies.” A research article published in NPJ Urban Sustainability by Li and colleagues examines how exposure to natural environments – real or perceived – affects happiness for city-dwellers.

COMMUNICATIONS & POLICY

Sixteenth-century painting by Titian of Sisyphus struggling up a steep slope with a large boulder on his shoulders. Museo Prado, Madrid
  • “…this is a Sisyphean task. As Retraction Watch has documented, paper mills — shady organisations selling scholarly manuscripts and authorship to researchers who want to get ahead — are rapidly proliferating, overwhelming a system that has never had enough peer reviewers to ensure that everything that is published is reliable.” In an article for the Sunday Times, Retraction Watch’s Ivan Oransky and Alice Dreger trace the expanding dimensions of challenges to the integrity of peer-reviewed publication.
  • “The agreements, nicknamed ‘most favored nation’ deals, were aimed at getting lower prices for American consumers and pushing other wealthy countries to pay higher prices for new drugs….But drug companies, including the 16 that made deals, raised the prices of 872 brand-name drugs in the first two weeks of 2026, according to a new analysis by 46brooklyn, a drug price research firm.” NPR’s Sydney Lupkin reports that despite agreeing to lower prices earlier in 2025, many drug manufacturers have since raised prices on at least some of their offerings.
  • “Some people argue that A.I.’s imperfections mean that we shouldn’t use the technology in high-stakes fields like medicine or that it should be tightly regulated before we do. But the biggest mistake now would be to overly restrict A.I. tools that could improve care by setting an impossibly high bar, one far higher than the one we set for ourselves as doctors. A.I. doesn’t have to be perfect to be better. It just has to be better.” A New York Times opinion essay by UCSF physician Robert Wachter makes the bullish case for AI in hospitals and clinics.