AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

January 19, 2024

In this week’s Duke AI Health Friday Roundup: new ML approach boosts geometry problem-solving; GPU architecture allows LLM eavesdropping; “anthrobots” suggest future therapeutic possibilities; new kind of AI bias identified; biological retinas inspire improvements in computer color vision; paper mills branching out into bribery; UK Post Office software disaster offers AI lessons; how AI tools could reshape organizations; many docs unfamiliar with how FDA evaluates devices; much more:

AI, STATISTICS & DATA SCIENCE

Photograph of colorful mosaic forming repeating diamond shapes with alternating colors. Image credit: Max Williams/Unsplash
  • “On a test set of 30 latest olympiad-level problems, AlphaGeometry solves 25, outperforming the previous best method that only solves ten problems and approaching the performance of an average International Mathematical Olympiad (IMO) gold medallist. Notably, AlphaGeometry produces human-readable proofs, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation and discovers a generalized version of a translated IMO theorem in 2004.” A research article by Trinh and colleagues published this week in Nature describes a machine learning system capable of solving difficult geometry problems, outperforming previous ML approaches.
  • “…with a little creativity, attackers could likely target many GPU applications, including those used within privacy-sensitive domains….as demonstrated by LeftoverLocals, open-source LLMs are particularly susceptible to our vulnerability given our ability to fingerprint these models to obtain remaining weights as needed.” At the Trail of Bits blog, security researchers Tyler Sorensen and Heidy Khlaaf demonstrate how certain kinds of GPUs (the favored processors for AI applications) permit surreptitious surveillance of chats between users and large language model applications.
  • “In this case study, we introduce a new source of bias termed “induced belief revision,” which we have discovered through our experience developing and testing an AI model to predict obstructive hydronephrosis in children based on their renal ultrasounds. After a silent trial of our hydronephrosis AI model, we observed an unintentional but clinically significant change in practice — characterized by a reduction in nuclear scans from 80 to 58% (P=0.005). This phenomenon occurred in the absence of any identifiable changes in clinical workflow, personnel, practice guidelines, or patient characteristics over time.” A research article published in NEJM AI by Kwong and colleagues identifies a new kind of bias that can affect the users of clinical AI systems.
  • “Large language models’ (LLMs) abilities are drawn from their pretraining data, and model development begins with data curation. However, decisions around what data is retained or removed during this initial stage is under-scrutinized. In our work, we ground web text, which is a popular pretraining data source, to its social and geographic contexts. We create a new dataset of 10.3 million self-descriptions of website creators, and extract information about who they are and where they are from: their topical interests, social roles, and geographic affiliations. Then, we conduct the first study investigating how ten ‘quality’ and English language identification (langID) filters affect webpages that vary along these social dimensions.” A preprint by Li Lucy and colleagues, available from arXiv, demonstrates the effect that pretraining ‘quality’ and language-identification filters have on the composition of the resulting machine learning training corpus (a brief illustrative sketch of this kind of filtering appears after this list).
  • “…the latest findings demonstrate how computations needed for complex pattern recognition can be encoded at the molecular level in the biophysical process of self-assembly. The study also illustrates how previous theoretical and experimental work on DNA-tile assembly can support the design of sophisticated new experiments.” At Nature, Andrew Phillips describes recent research by Evans and colleagues that demonstrates how a self-assembling DNA system can successfully recognize complex patterns.
  • In a podcast for the JAMA Network, JAMA editor in chief Kirsten Bibbins-Domingo interviews Google chief clinical officer Michael Howell on the evolution of clinical AI.
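
A minimal sketch of the kind of language-identification filtering examined in the Lucy et al. preprint appears below. The example pages, the choice of the langdetect library, and the 0.9 confidence threshold are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of an English language-ID filter of the general kind
# audited by Lucy and colleagues; not the preprint's actual pipeline.
from langdetect import DetectorFactory, detect_langs  # pip install langdetect

DetectorFactory.seed = 0  # make langdetect's guesses deterministic across runs

# Toy stand-ins for website creators' self-descriptions; real curation
# pipelines apply filters like this to web-scale text.
pages = [
    "We are a family-run bakery in Lagos sharing recipes and neighborhood news.",
    "Blog de fotografía y viajes por América Latina.",
    "Aqui escrevo sobre futebol, música e a vida no interior do Brasil.",
]

def passes_english_filter(text: str, threshold: float = 0.9) -> bool:
    """Keep a page only if the top detected language is English with high confidence."""
    top_guess = detect_langs(text)[0]  # e.g. en:0.99 or es:0.97
    return top_guess.lang == "en" and top_guess.prob >= threshold

kept = [page for page in pages if passes_english_filter(page)]
print(f"kept {len(kept)} of {len(pages)} candidate pages")
```

Even a toy filter like this makes the preprint’s concern concrete: a hard language-confidence threshold silently drops creators who write in other languages or in non-standard varieties of English, shaping whose pages end up in the pretraining corpus.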

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Abstract painting showing vertical streaks of colored paint in a rainbow pattern. Image credit: Steve Johnson/Unsplash
  • “Inspired by the retinal system, we combine the R/G/B perovskite NB chips (mimicking the R/G/B cone cells) with a trilayer algorithm (mimicking the intermediate network of the retinal system) and successfully realized panchromatic imaging….This light-driven panchromatic imaging may trigger further development in applications such as battery-free cameras, light-driven sensing, artificial retina replaceable for dead retina cells in vision damages, etc.” A research article published in Science Advances by Hou and colleagues debuts a new approach to high-fidelity color imaging that mimics organic retinal systems.
  • “A more in-depth understanding of marine microbes could have wide-ranging benefits. ‘Genes and proteins derived from marine microbes have endless potential applications,’ says Duarte. ‘We can probe for new antibiotics, we can find new enzymes for food production,’ he says. ‘If they know what they’re searching for, researchers can use our platform to find the needle in the haystack that can address a specific problem.’” Nature’s Carissa Wong reports on the public release of a new database containing hundreds of millions of genes collected from marine microbes.
  • “Antivenom is the only evidence-based treatment available for Bothrops snakebites…However, the efficacy of antivenom in reducing local tissue damage has been shown to be limited…Bothrops venom acts immediately at the bite site and has long-lasting local effects due to its activation of endogenous pathways that promote tissue damage…even after adequate antivenom treatment (AVT)…In this regard, there is an urgent need to find complementary therapeutic approaches that can rapidly be deployed at the envenomation site to prevent severe local complications.” A research article published in JAMA Internal Medicine by da Silva Carvalho reports results from a feasibility study that evaluated the usefulness of laser therapy in conjunction with conventional antivenom to mitigate the effects of snakebite – in this case, the bite of the Amazonian (and Trinidadian) pit viper known as the fer-de-lance (Bothrops atrox).
  • “In the medicine of the future, molecular physicians built from a patient’s own cells might ferret out cancer, repair injured tissue, and even remove plaque from blood vessels. Researchers have now taken a step toward that vision: They’ve coaxed tracheal cells to form coordinated groups called organoids that can propel themselves with tiny appendages. When added to wounded neurons in the lab, these “anthrobots” helped neurons repair themselves.” Think of it as a tiny Roomba for your immune system: At Science, Elizabeth Pennisi reports on recently reported research in which microscopic “anthrobots” showed promise as a potential therapeutic avenue.

COMMUNICATION, HEALTH EQUITY & POLICY

A small toy robot constructed out of boxy shapes stands on the keyboard of an open laptop computer, facing the camera. Image credit: Jem Sahagun/Unsplash
  • “Human attention remains finite, our emotions are still important, and workers still need bathroom breaks. The technology changes, but workers and managers are just people, and the only way to add more intelligence to a project was to add people or make them work more efficiently….But this is no longer true. Anyone can add intelligence, of a sort, to a project by including an AI. And every evidence is that they are already doing so, they just aren’t telling their bosses about it: a new survey found that over half of people using AI at work are doing so without approval, and 64% have passed off AI work as their own.” At his One Useful Thing blog, Ethan Mollick draws on historical precedent to examine some possible ways organizations may need to change as AI affects the way we work.
  • “The most important lesson we should take from these scandals is that the social and organisational processes around software and AI systems should never assume their outputs are completely reliable. It is not simply a question of having humans in the loop but demonstrating humanity. Processes wrapped around automated decision-making should operate under an assumption of ‘innocent until proven guilty’ and a core principle of care for those whose lives could be affected. This is particularly important when the targets of AI and data-based decisions are already in precarious positions.” At Connected by Data, Jeni Tennison unpacks the implications for AI of the recent UK Post Office Horizon debacle, in which faulty software, compounded by unresponsive human systems, resulted in wrongful prosecutions and ruined reputations for hundreds of innocent sub-postmasters.
  • “…cash-rich paper mills have evidently adopted a new tactic: bribing editors and planting their own agents on editorial boards to ensure publication of their manuscripts. An investigation by Science and Retraction Watch, in partnership with Wise and other industry experts, identified several paper mills and more than 30 editors of reputable journals who appear to be involved in this type of activity.” A joint investigation by Science and Retraction Watch reveals a new wrinkle in paper-mill fraud: bribing editors of legitimate journals to run articles.
  • “In her final presentation for health policy class at the University of Chicago, first-year medical student Robin Ji informed her classmates that the Food and Drug Administration does not require randomized controlled trials of most medical devices. Her peers’ immediate reaction was disbelief….’One classmate kept asking me, are you sure?’ Ji said. ‘I think he asked me twice. Then he went on his phone to check to see if it was actually true.’” An article (log-in required) by STAT News’ Lizzy Lawrence examines recent survey findings that suggest physicians may not be fully acquainted with the Food and Drug Administration’s approaches to clearing medical devices for clinical use.