In this week’s Duke AI Health Friday Roundup: digital determinants of health; determining when a pandemic is “over”; despite law, academia and institutions slow to return Native American art and remains; cancer cells siphon mitochondria from T cells; AI deciphers scorched scrolls from Roman ruins; addressing “ecosystem level” bias in AI; who’s legally on the hook when LLMs do harm?; writing grants with ChatGPT; much more:
AI, STATISTICS & DATA SCIENCE
- “Improving technologies to eliminate bias needs to occur at all levels. Researchers and manufacturers must develop unbiased technologies from the initial stages of their system design, whether in the choice of wavelengths and imaging methods for tools involving the patient’s skin, in the calibration of clinical equations or medical devices, or in the selection of their patient cohort.” A review article by Charpignon and colleagues published in PLoS Digital Health examines how “digital determinants of health” can serve as a framework for assessing bias in medical technologies and measurements.
- “Luke Farritor, who is at the University of Nebraska–Lincoln, developed a machine-learning algorithm that has detected Greek letters on several lines of the rolled-up papyrus, including πορϕυρας (porphyras), meaning ‘purple’. Farritor used subtle, small-scale differences in surface texture to train his neural network and highlight the ink.” Nature’s Jo Marchant reports on a 21-year-old computer science student’s successful bid to use machine learning to decipher text contained inside an ancient Roman scroll that was reduced to charcoal in the eruption of Vesuvius that destroyed the town of Herculaneum.
- “…these findings highlight the importance of a patient-centered digital care program as a tool to address health inequities in musculoskeletal pain management. The idea of investigating social deprivation within a digital program provides a foundation for future work in this field to identify areas of improvement.” A research article published last week in NPJ Digital Medicine evaluates the use of digital tools to address inequities in pain management.
- “A clear trend emerged in every context they considered: Commercial ML systems are prone to systemic failure, meaning some people always are misclassified by all the available models — and this is where the greatest harm becomes apparent. If every voice assistant product on the market uses the same underlying algorithm, and that algorithm can’t recognize an individual’s unique way of speaking, then that person becomes effectively excluded from using any speech-recognition technology.” A web article by Nikki Goth Itoi, available from Stanford’s Human-Centered Artificial Intelligence institute website, highlights a recent investigation into “ecosystem-level” bias in AI systems.
- “The new method is part of a broad movement toward bringing molecular precision to diagnosing tumors, potentially allowing scientists to develop targeted treatments that are less damaging to the nervous system. But translating a deeper knowledge of tumors to new therapies has proved difficult.” The New York Times’ Benjamin Mueller describes a new AI tool that can rapidly scan brain tumor DNA to help surgeons decide on the best approach to resecting the tumor.
- “[R]ecent advancements in Large Language Models (LLMs) have garnered widespread acclaim for their remarkable emerging capabilities. However, the issue of hallucination has parallelly emerged as a by-product, posing significant concerns. While some recent endeavors have been made to identify and mitigate different types of hallucination, there has been a limited emphasis on the nuanced categorization of hallucination and associated mitigation methods.” A review article by Rawte and colleagues, available as a preprint from arXiv, attempts to provide a detailed taxonomy of the different kinds of “hallucination” – that is, the fabricated, untrue responses to prompts or queries that LLMs sometimes produce – as well as potential strategies for countering them (H/T @GaryMarcus).
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “These historical precedents make clear that it is neither epidemiology nor any political declaration that determines the end of a pandemic, but the normalization of mortality and morbidity by means of a disease’s routinization and endemicization — what in the context of the Covid-19 pandemic has been called ‘living with the virus.’ What ends a pandemic, too, is governments’ conclusion that the associated public health crisis is no longer a threat to the economic productivity of a society or to the global economy.” A perspective article published in the New England Journal of Medicine by Abi-Rached and Brandt asks what determines when and whether a pandemic is “over.”
- “As elite hunters of the immune system, T cells are constantly prowling our bodies for diseased cells to attack. But when they encounter a tumor, something unexpected can happen. New research shows that some cancer cells can fire a long nanotube projection into the T cell that, like a vampire’s fang, sucks energy-creating mitochondria from the immune cell, turning the predator into prey.” STAT News’ Angus Chen reports on a new study published in Cancer Cell that elucidates how some cancer cells are able to siphon energy from T cell mitochondria.
- “Politics is a natural vehicle for status elevation and ego fulfillment. The pursuit of power, the desire to be in charge and recognized, the sense of entitlement that comes with the prestige of authority and status in the political sphere, and feelings of invincibility that go hand-in-hand with self-delusion, are common traits among political leaders…All politicians should have some healthy level of narcissism. But, in its extreme manifestations, overly narcissistic leadership results in negative outcomes.” An article published in the journal Electoral Studies by Sendinc and Hatemi examines the prevalence of narcissistic personality traits among people who run for political office.
- “Ultimately, this new study adds almost nothing new to our understanding of exercise or running as a treatment for mental health problems. To the extent that the results tell us anything, they seem to really show that medication is more useful than running two or three times a week, but even that result is very questionable….Frankly, every single study we’ve ever run suggests that exercising is good for your health, physical and mental. But the evidence also seems to show that as a practical intervention, exercise has limited applicability to real people in the real world.” In an article for Slate, Gideon Meyerowitz-Katz dissects headlines based on a recent study that examined the effectiveness of running (vs medication) for controlling symptoms of depression.
COMMUNICATION, HEALTH EQUITY & POLICY
- “…what happens when the safeguards fail and models are used for harm? Who is liable? In two new papers, my co-authors and I find that it will not be so easy to impose liability on general foundation model creators and deployers — assuming current large language-style models do not take autonomous actions in the world. Those seeking to impose liability on creators and deployers will generally be constrained by the First Amendment, face other difficult-to-meet statutory requirements, and in some cases face defendants that could retain immunity from liability under a law referred to as ‘Section 230.’” A blog post by Peter Henderson at Stanford’s Human-Centered Artificial Intelligence institute examines the legal complications of assigning responsibility when a chatbot causes harm.
- “The law required institutions to publicly report their holdings and to consult with federally recognized tribes to determine which tribes human remains and objects should be repatriated to. Institutions were meant to consider cultural connections, including oral traditions as well as geographical, biological and archaeological links….Yet many institutions have interpreted the definition of ‘cultural affiliation’ so narrowly that they’ve been able to dismiss tribes’ connections to ancestors and keep remains and funerary objects.” In one of a series of articles, ProPublica reporters examine the widespread failure of museums and academic institutions to return Native American art, cultural objects, and human remains to their respective nations, tribes, or families – despite a decades-old law requiring them to do so.
- “Some people might see the use of ChatGPT in writing grant proposals as cheating, but it actually highlights a much bigger problem: what is the point of asking scientists to write documents that can be easily created with AI? What value are we adding? Perhaps it is time for funding bodies to rethink their application processes.” In a column for Nature, Juan Manuel Parrilla describes tasking ChatGPT to produce a grant application and asserts that the results reveal fundamental problems with the entire process.
- Again with the Mozart. Retraction Watch has the story: “Despite how prolific he was, however, Mozart did not write an album called ‘Bedtime Mozart.’ That has now created a headache for the authors of a study published in Pediatric Research in August that found the particular set of melodies helped soothe babies during a particular blood test…Like many ‘Mozart Effect’ studies before it, the new research prompted a press release referring to Mozart in its headline, and plenty of press coverage. But Hinnerk Feldwisch-Drentrup, a correspondent for Frankfurter Allgemeine, thought something was off-key…”