AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
August 30, 2024
In this week’s Duke AI Health Friday Roundup: confidence, nonsense and medical questions; framework assesses health AI cost-effectiveness; computational approach sheds light on music theory; ‘AI scientist’ cranks out papers by itself; heat-related deaths in US trend upward; fierce competition for AI expertise; evaluating clinical trial papers for trustworthiness; using health AI in conflict zones; black market for paper citations; JAMA publishes draft guidance on inclusive language; much more:
AI, STATISTICS & DATA SCIENCE
- “The problem with large language models, and all modern AIs in general, is that they have no real comprehension of the subject matter they talk or write about. All they do is predict what the next word in a sentence should be based on probabilities obtained from a huge amount of text… In Kirpalani’s study, there were a few cases where ChatGPT experienced some infamous AI hallucinations and was obviously way off the mark. In most cases, though, it was like a skilled public speaker with irresistible charisma, answering all the questions in plain, simple English with a special confidence. It can take some time before you realize he’s talking nonsense.” Ars Technica’s Jacek Krywko dissects the implications of using general-purpose LLMs to answer medical questions.
- “…despite increasing calls for AI-specific legislation, many governments—including the previous UK Government—maintain a pro-innovation approach, intimating that it is too soon to legislate effectively on these evolving technologies and warning that premature restrictions might be counterproductive. Strong regulation is needed in at least three key areas to minimise potential harms.” An editorial (with a UK-specific perspective) in Lancet Digital Health describes key areas of focus for regulatory efforts related to medical applications for AI.
- “Evidence about the cost-effectiveness of health interventions is usually provided by economic evaluation studies. For economic evaluations of AI-enabled health care, it is important that decision makers are provided with key information about the nature of the AI intervention and potential implications for its cost effectiveness. If decision makers are to feel suitably informed to determine whether an AI-driven technology should be used, such information must be reported in a transparent and reproducible way.” An article published in Lancet Digital Health by Elvidge and Dawoud introduces the CHEERS-AI framework for reporting cost-effectiveness studies on health AI applications.
- “AI integration in healthcare can potentially enhance clinical care, planning, resource allocation, protection and community healthcare strengthening. However, it is vital to establish clear ethical guidelines and frameworks to govern the use of AI in healthcare in conflict areas, ensuring that these technologies support, rather than undermine, equitable and ethical healthcare services in such settings.” An article appearing in BMJ Global Health by Alkhali and colleagues examines the potential benefits and dangers of deploying health AI in conflict zones.
- “…competition for that talent is stiff. Expertise in artificial intelligence is suddenly a coveted asset across industries, and companies with deep pockets are similarly keen to hire. Last year, just 25 percent of more than 400 new Ph.D. candidates in AI/machine learning went into academe, according to the Computing Research Association’s 2023 Taulbee Survey. And those newly minted faculty members — along with those already in higher ed — are seeing wider interest in their talents as artificial intelligence becomes an increasingly interdisciplinary field.” An article by Taylor Swaak appearing in The Chronicle of Higher Education illustrates the fierce competition between academia and industry (and among academic institutions themselves) for AI talent as demand for expertise in the field skyrockets.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “The standard theory of musical scales since antiquity has been based on harmony, rather than melody. Some recent analyses support either view, and we lack a comparative test on cross-cultural data. We address this longstanding problem through a rigorous, computational comparison of the main theories against 1,314 scales from 96 countries. There is near-universal support for melodic theories, which predict step-sizes of 1-3 semitones. Harmony accounts for the prevalence of some simple-integer-ratio intervals…we show that the historical emphasis on harmony is misguided and that melody is the primary determinant of the world’s musical scales.” In a preprint paper available from arXiv, McBride and colleagues bring modern computational approaches to bear on some fundamental questions of music theory.
- “Recent breakthroughs in cellular omics technologies have paved new pathways for understanding the regulation of genomic elements and the relationship between gene expression, cellular functions, and cell fate determination. The advent of spatial omics technologies, encompassing both imaging and sequencing-based methodologies, has enabled a comprehensive understanding of biological processes from a cellular ecosystem perspective.” A review article published in Cell by Liu and colleagues describes recent advances in “spatial omics,” which allows granular insight into the inner workings of biology.
- “…peer reviewing has focused on the importance of research questions/hypotheses, appropriateness of research methods, risk of bias, and quality of writing. Until recently, the issues related to trustworthiness—including but not limited to plagiarism and fraud—have been largely neglected because of lack of awareness and lack of adequate tools/training.” A review article by Alfirevic and Weeks, published in Cochrane Evidence Synthesis and Methods, examines the extent to which assessments of trustworthiness are applied to papers reporting results from clinical trials.
- “…heat-related mortality rates in the US increased between 1999 and 2023, especially during the last 7 years. Although a study using data through 2018 found a downward trend in heat-related mortality in the US, this study is the first to our knowledge to demonstrate a reversal of this trend from 2016 to 2023. These results align with site-specific data analyzed in a global study that suggest increases in heat-related mortality. As temperatures continue to rise because of climate change, the recent increasing trend is likely to continue.” A research letter by Howard and colleagues, published this week in JAMA, describes recent increases in heat-related deaths in the United States (H/T @alimkakeng).
COMMUNICATION, HEALTH EQUITY & POLICY
- “One concern is that, if AI-generated papers flood the scientific literature, future AI systems may be trained on AI output and undergo model collapse…There are already bad actors in science, including “paper mills” churning out fake papers. This problem will only get worse when a scientific paper can be produced with US$15 and a vague initial prompt. The need to check for errors in a mountain of automatically generated research could rapidly overwhelm the capacity of actual scientists. The peer review system is arguably already broken, and dumping more research of questionable quality into the system won’t fix it.” An article at The Conversation by Karin Verspoor takes a critical look at an “AI Scientist” that is touted as being able to gin up scientific questions and then write papers all on its own.
- “Collectively, these reforms were intended to put an end to the era of impact-chasing, false-positives, and unpublished truths. In its place would arise a new culture centered on the routine publication and open dissemination of unembellished, robust results….Rather than solving existing problems, some of these scientific reforms have created new and perhaps worse ones as researchers and publishers converged on unanticipated strategies inadvertently incentivized by these new policies. Central to this corruption of science has been pay-as-you-publish ‘gold’ OA publishing. The remedy is to abandon author-paid OA publishing and seek less harmful alternatives.” In what is sure to stir some lively discussion, an analysis by Morgan and Smaldino, available as a preprint at OSF, makes a blistering case against “author-pays” models of open-access publishing.
- “In their sting operation, Zaki and his colleagues created a Google Scholar profile for a fictional scientist and uploaded 20 made-up studies that were created using artificial intelligence…The team then approached a company, which they found while analysing suspicious citations linked to one of the authors in their data set, that seemed to be selling citations to Google Scholar profiles…The company offered 50 citations for $300 or 100 citations for $500. The authors opted for the first option and 40 days later 50 citations from studies in 22 journals — 14 of which are indexed by scholarly database Scopus — were added to the fictional researcher’s Google Scholar profile.” A Nature news article by Dalmeet Singh Chawla describes a recent effort to expose a black market for scientific citations.
- “The AMA Manual of Style: A Guide for Authors and Editors provides extensive guidance on use of inclusive language for authors and editors…This includes specific guidance regarding usage and reporting of demographic characteristics of individuals and groups….This draft guidance recommends use of accurate terms when reporting individual or population characteristics or to describe the evolving range of identities as currently understood and to avoid a reductionist description, or ‘labeling,’ of people with a single characteristic.” JAMA has published a draft guidance (and call for input) outlining recent evolution in reporting of attributes such as gender, gender identity, sex, and age in scientific publications.