AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
January 20, 2023
In today’s Duke AI Health Friday Roundup: looking ahead to clinical trials for 2023; algorithm solves shortest-path problem for graphs with negative weights; study finds widespread PFAS contamination in freshwater fish; relatively few hospitals compliant with federal price transparency mandates; ChatGPT creates artificial abstracts capable of fooling scientific reviewers; injection drug use fuels rise in endocarditis; Getty Images sues Stability AI over image scraping; much more:
AI, STATISTICS & DATA SCIENCE
- “Artificial intelligence-assisted translation tools like StoryWeaver can bring more languages into conversation with one another—but the tech is still new, and it depends on data that only speakers of underserved languages can provide. This raises concerns about how the labor of the native speakers powering A.I. tools will be valued and how repositories of linguistic data will be commercialized.” In an article for Slate’s Future Tense, Madhuri Karak describes recent interest in AI translation tools that are being touted as a solution for imperiled languages – but which also exact hidden costs.
- “Now a trio of computer scientists has solved this long-standing problem. Their new algorithm, which finds the shortest paths through a graph from a given “source” node to every other node, nearly matches the speed that positive-weight algorithms achieved so long ago. What’s more, the new approach uses decades-old mathematical techniques, eschewing more sophisticated methods that have dominated modern graph theory research.” Quanta’s Ben Brubaker reports on a new mathematical method that offers efficient solutions for finding shortest paths on graphs that include both positive and negative edge weights (a sketch of the classical baseline algorithm appears after this list).
- “The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn’t do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.” In an article appearing in Scientific American, Nature’s Holly Else describes how the large language model AI known as ChatGPT has been able to produce synthetic abstracts that can escape detection by seasoned scientific reviewers.
- “Based on these results, we are developing a theory that books with DOIs perform better in Google Scholar because they benefit from the structured, open metadata associated with those DOIs – which are used by hundreds of platforms and services, and therefore are “seeded” throughout the mainstream web, which Scholar may draw on for indexing, linking, etc….however, these results also suggest that publishers are best served by a metadata strategy that is well attuned to the protocols expected of each channel for book search and discovery.” In a guest post for Scholarly Kitchen, Lettie Y. Conrad and Michelle Urberg present findings from a preliminary study evaluating the effect of metadata on the findability of books via Google Scholar.
- In a blog post at Google Research, Jeff Dean summarizes a year of AI development at Google, focusing on projects in generative AI.
- The complete set of slide presentations from last November’s NeurIPS 2022 conference is now available online.
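For context on the shortest-path item above, here is a minimal Python sketch of Bellman-Ford, the classical algorithm for single-source shortest paths on graphs with negative edge weights. This is not the new near-linear algorithm described in the Quanta article; it is the decades-old O(V·E) baseline whose running time the new result improves on, and the graph encoding and function name here are illustrative choices, not taken from the article.

```python
# Minimal sketch: classical Bellman-Ford single-source shortest paths.
# Handles negative edge weights (unlike Dijkstra) in O(V * E) time;
# the new algorithm covered by Quanta runs in near-linear time.

def bellman_ford(num_nodes, edges, source):
    """edges: list of (u, v, weight) tuples; nodes labeled 0..num_nodes-1."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0

    # Relax every edge up to V-1 times; after that, all shortest paths are
    # settled unless a negative cycle is reachable from the source.
    for _ in range(num_nodes - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:  # early exit: no distance changed this pass
            break

    # One more pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")

    return dist

# Example: a small graph with one negative edge (no negative cycle).
edges = [(0, 1, 4), (0, 2, 2), (1, 2, -3), (2, 3, 5)]
print(bellman_ford(4, edges, source=0))  # [0, 4, 1, 6]
```

Unlike Dijkstra’s algorithm, which requires nonnegative weights, Bellman-Ford tolerates negative edges by repeatedly relaxing every edge; the price is a much slower running time, which, per the quoted passage, the new algorithm brings nearly in line with what positive-weight algorithms achieved long ago.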
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “2022 has been a rollercoaster year for biopharma, as it has faced an industry-wide slowdown and late-stage clinical trial failures, as well as breakthroughs and regulatory approvals. COVID-19 has continued to disrupt nearly all aspects of clinical trial infrastructure, from patient recruitment to supply chains, but despite this, 2023 promises to bring many new readouts from different branches of medicine…” At Nature Medicine, Carrie Arnold and Paul Webster speak with clinical trialists about some of the ongoing research expected to yield notable findings in 2023.
- “Exposure assessment suggests that a single serving of freshwater fish per year with the median level of PFAS as detected by the U.S. EPA monitoring programs translates into a significant increase of PFOS levels in blood serum. The exposure to chemical pollutants in freshwater fish across the United States is a case of environmental injustice that especially affects communities that depend on fishing for sustenance and for traditional cultural practices. Identifying and reducing sources of PFAS exposure is an urgent public health priority.” In a paper published ahead of print in the journal Environmental Research, Barbo and colleagues present findings from a study that found widespread contamination by PFAS compounds (so-called “forever chemicals”) in freshwater fish across the United States.
- “In laboratory experiments, Halteria that were living in water droplets and given only chloroviruses for sustenance reproduced, DeLong and colleagues found. As the number of viruses in the water dwindled, Halteria numbers went up. Ciliates without access to viral morsels, or any other food, didn’t multiply. But Paramecium, a larger microbe, didn’t thrive on a virus-only diet, hinting that viruses can’t satisfy the nutritional requirements for all ciliates to grow.” In an article for Science News, Erin Garcia de Jesús reports on a recent study of the ciliate microorganism Halteria that confirms the genus is capable of existing exclusively on a diet of viruses.
- “With drug deaths hovering at an all-time high and endocarditis cases among drug users up nearly tenfold in the last decade, physicians, researchers, and health officials have begun to confront the problem with more urgency. In particular, doctors are coming to terms with a basic reality: Their hospitals often have few protocols for treating endocarditis patients who use opioids and the withdrawal they’ll likely experience upon admission.” STAT News’ Lev Facher reports on the fallout from a surge in cases of endocarditis that has accompanied an ongoing epidemic of injection drug use.
COMMUNICATION, HEALTH EQUITY & POLICY
- “There are numerous conferences, workshops, and keynotes about how or whether techniques developed under the moniker ‘Artificial Intelligence’ (AI) can support (or ruin!) scholarly publishing (not to mention two recent Scholarly Kitchen posts on ChatGPT and the issues it presents). But what is actually meant by AI, according to people who do this for a living? How, precisely, can this mysterious set of technologies help or harm scholarly publishing, and what are some current trends? What are the risks of AI, and what should we look out for?” The Scholarly Kitchen posts a recap of a webinar in which invited experts discussed the implications of AI technology for the world of scholarly publishing.
- “A total of 64 acute-care hospitals (non-teaching, non-profit n = 28; teaching, non-profit n = 15; non-teaching, for-profit n = 14; teaching, for-profit n = 7) across eight hospital referral regions were sampled from 3155 Medicare-registered acute care hospitals, all subject to the hospital price transparency final rule. Only 19% (n = 64) were fully adherent to the CMS checklist…” An analysis by Loccoh and colleagues, published in the Journal of General Internal Medicine, reports that from a sample of U.S. Medicare-registered acute care hospitals, a relatively small proportion are in compliance with recent federal mandates requiring price transparency.
- “The lawsuit marks an escalation in the developing legal battle between AI firms and content creators for credit, profit, and the future direction of the creative industries. AI art tools like Stable Diffusion rely on human-created images for training data, which companies scrape from the web, often without their creators’ knowledge or consent. AI firms claim this practice is covered by laws like the US fair use doctrine, but many rights holders disagree and say it constitutes copyright violation.” The Verge’s James Vincent reports on a lawsuit filed by Getty Images against Stability AI, alleging that its Stable Diffusion generative AI is violating copyright laws.