In this week’s Duke AI Health Friday Roundup: how AI is reshaping society; NLP evaluation experiments reveal flaws; new study illuminates the reason why insects circle streetlights; AI automation of jobs may proceed gradually; cardiologists call for better collection of SOGIE data; survey examines AI governance; sizeable proportion of dementia cases may be due to liver dysfunction; sifting EHR data for diseases transmitted via transfusion; much more:
AI, STATISTICS & DATA SCIENCE
- “While conducting a coordinated set of repeat runs of human evaluation experiments in NLP, we discovered flaws in every single experiment we selected for inclusion via a systematic process. In this paper, we describe the types of flaws we discovered which include coding errors (e.g., loading the wrong system outputs to evaluate), failure to follow standard scientific practice (e.g., ad hoc exclusion of participants and responses), and mistakes in reported numerical results (e.g., reported numbers not matching experimental data). If these problems are widespread, it would have worrying implications for the rigour of NLP evaluation experiments as currently conducted.” A research article published in the journal Computational Linguistics by Thomson and colleagues finds widespread flaws in human evaluation experiments in natural language processing.
- “We focus on computer vision, where cost modeling is more developed. We find that at today’s costs U.S. businesses would choose not to automate most vision tasks that have ‘AI Exposure,’ and that only 23% of worker wages being paid for vision tasks would be attractive to automate… Overall, our findings suggest that AI job displacement will be substantial, but also gradual – and therefore there is room for policy and retraining to mitigate unemployment impacts.” A working paper by Svanberg and colleagues takes a critical look at the cost-effectiveness of automating a number of tasks that are considered to have “exposure” to AI automation.
- “Prediction tools driven by artificial intelligence (AI) and machine learning are becoming increasingly integrated into health care delivery in the United States. However, organizational approaches to the governance of AI tools are highly varied. There is growing recognition of the need for evidence on best governance practices and multilayered oversight that could provide appropriate guardrails at the organizational and federal levels to address the unique dimensions of AI prediction tools. We sought to qualitatively characterize salient dimensions of AI-enabled predictive model governance at U.S. academic medical centers (AMCs).” A new perspective article published in NEJM AI by Nong and colleagues presents findings from a survey of academic medical centers that assessed their approach to the governance of AI tools.
- “In this before-and-after quasi-experimental study, we demonstrated that the implementation of a real-time deep-learning model to predict sepsis in two EDs was associated with a 5.0% absolute increase in sepsis bundle compliance and a 1.9% absolute decrease in in-hospital sepsis-related mortality. This finding represents, to our knowledge, the first instance of prospective use of a deep-learning model demonstrating an association with improved patient-centered outcomes in sepsis.” A research article published in NPJ Digital Medicine by Boussina and colleagues describes results from the deployment of a deep-learning sepsis prediction model.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “Experts and lay observers alike have come up with all sorts of reasons to try to explain this behavior, but none of the descriptions have offered any hard proof. Now research published this week in Nature Communications might have finally solved the mystery: artificial light confuses insects’ ability to orient themselves to the horizon, scrambling their sense of what is up and down and causing them to confusedly fly in circles.” Scientific American’s Rachel Nuwer reports on a new study that upends long-standing theories about why insects are attracted to and circle artificial lights.
- “In this large nationwide cohort study, we performed a phenome-wide search for unknown transfusion-transmitted disease using a combination of large-scale registers analysed using a two-pronged statistical approach. We failed to detect any strong evidence of unknown, widespread transfusion transmission beyond the expected findings of transmission of HIV and viral hepatitis. This reassuring finding indicates that it is unlikely that unknown transmissible agents exist in the Swedish blood donor pool that have a sufficiently high prevalence and clinical penetrance to result in widespread transfusion-transmission.” A research article published in Lancet Digital Health by Dahlén and colleagues describes results from a Swedish cohort study that sifted routinely collected EHR data for unknown diseases transmitted via blood transfusion.
- “…a crucial tie between liver disease and dementia is what occurs in the brains of about 50% of people with cirrhosis: hepatic encephalopathy. When the liver stops removing toxins and waste from the blood, those bits of trash circulate to the brain. There, toxins like ammonia and manganese have a poisonous effect on brain cells. Once encephalopathy moves from covert to overt, patients can experience an array of changes to their cognition, motor skills, sleep and mood — a profile strikingly similar to that seen in dementia, except it’s reversible.” STAT News’ Isabella Cueto reports on a recent study published in JAMA Network Open that suggests an appreciable percentage of dementia cases could actually be due to undiagnosed liver cirrhosis.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Artificial intelligence is reshaping society, but human forces shape AI. Getting governance wrong could mean narrowing cultural narratives, de-incentivizing creativity, and exploiting workers. In these 11 essays, social scientists and humanities experts explore how to harness the interaction between AI and society, revealing urgent avenues for research and policy.” A theme issue of, er, Issues tackles the broader social and legal implications of recent developments in AI technologies.
- “The first step to improving the cardiovascular health of this vulnerable population of individuals is collecting SOGIE data. SOGIE data collection has the opportunity to provide better insight into cardiovascular risk factors unique to the LGBTQ+ population. The availability of SOGIE data will ultimately provide opportunities to conduct research aimed at better understanding of health disparities, identify cardiovascular risks unique to LGBTQ+ people, and implement therapeutic interventions for cardiovascular risk reduction.” A special communication published in JAMA Cardiology by Deb and colleagues calls for a systematic effort to improve collection of patient data regarding sexual orientation and gender identity and expression (SOGIE).
- “…we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently ‘fine-tuned’ on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient’s preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient’s own reasons and values.” A thought-provoking article by Earp and colleagues appearing in the American Journal of Bioethics explores the potential for using large language models to predict personal wishes and preferences for patients who are incapacitated and incapable of expressing their own wishes.