AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
September 27, 2024
In this week’s Duke AI Health Friday Roundup: cardiovascular medicine looks at AI; wildfire smoke exposure exacts significant health tolls; bigger not necessarily better for AI models; preserving knowledge against a rising tide of digital decay; NIH announces funding for genomics learning health; big academic publishers face lawsuit; new benchmark for auto-replicating analyses with AI; minority physicians provide care for a disproportionate number of Medicaid recipients; much more:
AI, STATISTICS & DATA SCIENCE
- The heart of the matter: a flurry of interesting papers have appeared in the last couple of weeks at the intersection of cardiovascular medicine and AI, including this article by Mihan and colleagues on mitigating AI bias in cardiovascular care (itself part of a theme collection by Lancet Digital Health) and this JAMA Cardiology article by Zinzuwadia and colleagues that describes a machine learning approach to improving calibration of the AHA-PREVENT cardiovascular risk equations.
- “…we…refute two common assumptions underlying the ‘bigger-is-better’ AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power…” A research article by Varoquaux and colleagues, available as a preprint from arXiv, critically examines the prospects for ever-increasing model sizes in AI.
- “Ultimately, the GAIRA strives to support researchers’ ability to collaborate to advance research on AI development and deployment. The GAIRA emphasizes international collaboration and prioritizes further research to assess the impact of AI in the developing world. Effective engagement in international development using AI, as outlined in the AI in Global Development Playbook, requires rigorous evidence to identify effective strategies and mitigate risks, particularly in low- and middle-income countries (LMICs).” The US State Department has released a “global AI research agenda” (GAIRA) and accompanying global development playbook.
- “Both human-written and AI-generated reports can contain errors, ranging from clinical inaccuracies to linguistic mistakes. To address this, we introduce ReXErr, a methodology that leverages Large Language Models to generate representative errors within chest X-ray reports. Working with board-certified radiologists, we developed error categories that capture common mistakes in both human and AI-generated reports. Our approach uses a novel sampling scheme to inject diverse errors while maintaining clinical plausibility. ReXErr demonstrates consistency across error categories and produces errors that closely mimic those found in real-world scenarios.” A research article by Rao and colleagues, available as a preprint from arXiv, describes the creation of a system for deliberately injecting realistic errors into radiology reports as a means for training error-detection algorithms.
- “If this pattern of being able to easily adapt a generalist agent to produce a task-specific agent holds in other areas, it should make us rethink generality. Generality roughly translates to being able to use the same model or agent without modification to perform a variety of tasks….But at least from the point of view of economic impacts, generality might be a red herring. For a task such as computational reproducibility on which expert humans collectively spend millions of hours every year, being able to automate it would be hugely impactful — regardless of whether the AI system did so out of the box, or after a few person days (or even a person year) of programmer effort.” At their AI Snake Oil Substack blog, Sayash Kapoor and Arvind Narayanan describe a recent paper (on which they appear as coauthors) that introduces CORE-Bench, a benchmark for evaluating whether an AI system can automate the process of replicating a paper’s findings, given access to its code and data.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “We found that average exposure to wildland fire smoke PM2.5 in the past 1 y was associated with increases in nonaccidental, cardiovascular, ischemic heart disease, digestive, endocrine, diabetes, mental, and chronic kidney disease mortality. In addition to the well-documented mortality burden from nonsmoke PM2.5, in total, we estimated that smoke PM2.5 contributed to over 10,000 nonaccidental deaths in the contiguous United States each year.” A research article by Ma and colleagues recently published in PNAS examines relationships between exposure to wildfire smoke, extreme heat events, and mortality in the United States.
- “…we documented the contribution of URiM physicians in caring for Medicaid beneficiaries and factors associated with FPs’ [family physicians] meaningful participation in Medicaid….Our findings suggest that FP race and ethnicity are associated with the size of their Medicaid patient panel. We also observed that URiM physicians not only care for a greater number of Medicaid beneficiaries on average but also see a greater proportion of Medicaid beneficiaries from racial and ethnic minority groups.” A paper published this month in the Annals of Family Medicine by Vichare and colleagues presents findings from a study that examined the contributions of physicians from “underrepresented in medicine” (URiM) minority groups in caring for Medicaid beneficiaries (H/T @uche_blackstock).
- “The new Genomics-enabled Learning Health System (gLHS) Network aims to identify and advance approaches for integrating genomic information into existing learning health systems. As genomic testing becomes increasingly common, more and more genomic data are available in clinical settings, and learning health systems present an opportunity to translate this evidence quickly and directly into improvements in medical care….The network consists of six clinical study sites and a coordinating center, all of which have an operating learning health system. Each clinical site will propose a project that uses patient data to develop and refine some aspect of genomic medicine.” The National Institutes of Health has announced the award of $27 million to create a new network of “genomics-enabled learning health systems.”
COMMUNICATION, HEALTH EQUITY & POLICY
- “A quarter of all web pages that existed at some point between 2013 and 2023 now… don’t. That’s according to a recent study by Pew Research Center, a think tank based in Washington, DC, which raised the alarm of our disappearing digital history. Researchers found the problem is more acute the older a web page is: 38% of web pages that Pew tried to access that existed in 2013 no longer function. But it’s also an issue for more recent publications. Some 8% of web pages published at some point in 2023 were gone by October that same year.” In the wake of a recent US court decision against the Internet Archive’s access policies, Chris Stokel-Walker makes a case for its potential importance as a bulwark against the evaporation of digital-only knowledge in a society that has developed little in the way of contingencies for long-term, robust archiving.
- “The 2024 ACO REACH HEBA changes generally benefit Northeastern and Western coastal states and cities, while the South and Midwest fare less well. Measured against the Federal Medical Assistance Percentage as an accepted standard of state average per-capita income, these shifts are in the opposite direction of need nationally. In addition, the PY2024 area-level HEBA change helps and hurts nearly the same number of large metropolitan areas.” A research letter published this month in JAMA Health Forum by Powell and colleagues critiques the effects of changes to federal policy for Medicare/Medicaid payments designed to address health disparities.
- “…at the heart of the lawsuit is an inherent failure to understand the very nature of scholarly publishing — that it is a service industry, not a product-based industry. Simply put, there are processes that researchers are required to go through in order to further their careers…. As others in this post note, there remain long-running and newly arisen problems in the world of scholarly communication. This lawsuit is, in my opinion (which, I repeat, is entirely free from legal expertise), a frivolous distraction from real efforts to improve things.” Scholarly Kitchen collects reactions from its contributors to the recent class action lawsuit launched against prominent publishers of academic research.
- “Particle alleges that Epic has used its control over patient data to expand its dominance to so-called “payer platforms,” a type of software that allows health insurers to retrieve and analyze large amounts of patient data to help run their businesses and make decisions about patient care and coverage.” STAT News’ Casey Ross reports that EHR giant Epic is being sued over alleged monopolistic practices by Particle Health – which has also filed a complaint with the Office of the National Coordinator.
- “Words convey empathy, demonstrate competence, and generate trust. Small wording changes during patient-clinician encounters can affect visit interactions as well as visit outcomes. How clinicians word or frame questions or statements, and where these questions or statements are placed during patient encounters, are impactful.” A JAMA Insights essay by Robinson and Opel examines the importance of choosing the right words when providing patient care.