In this week’s Duke AI Health Friday Roundup: AI needs to get the lay of the healthcare land; drones beat ambulances for AED delivery; AI stress evaluation tool stumbles on validation; lessons from COVID misinformation; more worries for screen time and kids; when not just the content but also the authors are AI-generated; LLMs can’t fix healthcare by themselves; using GPT-4 ADA to cook up a bogus research dataset; adapting quality assurance methods for AI; much more:
AI, STATISTICS & DATA SCIENCE
- “Doctors have been burned by information technology: Electronic health records (EHRs). Initially introduced as a tool to enhance healthcare delivery, EHRs have increasingly been utilized primarily for documenting care for reimbursement purposes. This shift in focus has led to a significant disconnect between the potential of these systems and their actual use in clinical settings.” A blog post by NEJM AI editor Isaac Kohane surveys the landscape that health AI developers need to understand and assimilate before trying to “fix” healthcare with technological applications.
- “The regulations for healthcare software are evolving. Software may or may not be regulated based on its intended use or by changes to regulatory agency enforcement. A QMS that facilitates compliance with applicable legal and regulatory requirements enables HCOs to design, implement, and deploy healthcare software to clinical practice while minimizing overall operational risk.” A commentary by Overgaard and colleagues, published this week in NPJ Digital Medicine, advocates for adapting industrial quality assurance methods known as Quality Management System principles to help assure trustworthy implementation of AI in healthcare settings.
- “The CSWT [Cigna StressWaves Test] is presented as a clinical grade tool and offered as a part of a broader stress management toolkit. The results herein fail to support the claim of clinical grade performance and raise questions as to whether the tool is effective at all. This external validation study found that the CSWT has poor test–retest reliability and poor validity. The convergent validity results suggest that the CSWT has limited agreement with the PSS [Perceived Stress Scale]. Even when both test administration results were used to predict the PSS using linear regression, the model explained only 6.9% of the variance in the PSS. Our findings align with previously-highlighted concerns that widespread adoption of AI technologies are being prioritized over ensuring the devices work.” A research article by Yawer, Liss, and Berisha, published in Scientific Reports, describes an attempt to test and externally validate a publicly available AI-powered stress evaluation tool.
- “The LLM created a seemingly authentic database, showing better results for DALK than PK. Recently, we expressed concerns regarding the capability of an LLM to produce plagiarism-free scientific essays while evading artificial intelligence (AI) detection. ADA may pose a greater threat, being able to fabricate data sets specifically designed to quickly produce false scientific evidence, such as the better outcomes of DALK over PK that have not been proved, to our knowledge, by scientific evidence. Illegitimate data manipulation has been repeatedly reported in academia; however, recognition of research misconduct is still an outstanding issue with no definitive solutions.” In a research letter published in JAMA Ophthalmology by Taloni and colleagues, the authors describe the use of a version of the GPT-4 ADA large language model to generate a convincing fake clinical research dataset.
- “The notion of (mis)understanding individual health within the confines of population-based metrics and standards served as a recurring theme throughout the semester. We designed the bulk of the course content to help students to understand the value of interpreting changes to individual health with respect to their own trends and norms as opposed to population-derived standards. However, we did not adequately emphasize to students that seeing one’s own data may trigger anxiety.” A paper published in NPJ Digital Medicine by Ward and colleagues describes the creation and evaluation of a course designed to train healthcare workers in the use of wearable technology to capture health-related data for care and research.
- “Our study found that the commercialization of FDA-approved AI products is still nascent but growing, with over 50% of CPT codes effective since 2022. However, only a handful of these devices have reached substantial market adoption, suggesting that the medical AI landscape is still in its early stages. Such usage patterns underscore key themes regarding the deployment of AI in medicine, including clinical implementation challenges, payment, and equal access.” A research article by Wu and colleagues, published in NEJM AI, sifts through insurance claims data to create a portrait of clinical uptake of AI-based medical devices.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “In this prospective real-life study, to our knowledge, we have shown for the first time that AEDs can be delivered by drones to the site of a suspected out-of-hospital cardiac arrest before the arrival of an ambulance, in most cases in which a drone takes off. The delivery was made with a clinically important time benefit (median, 3 min 14 s), which made AED attachment before ambulance arrival possible in six patients. AED-drones might be an important complement to ambulances, given that in several recent studies, ambulance response times have been shown to be increasing.” A research article published in Lancet Digital Health by Schierbeck and colleagues evaluates the viability of drone delivery of automated external defibrillators in rural Sweden, versus standard ambulance service.
- “It’s a puzzle that Branch has spent years trying to figure out. Racism and a lack of access to adequate health care surely factor in, she said. But sometimes the disease also looks different depending on the patient, she said. Could there also be “something in their environment that is contributing to this different picture that we see?” she asked. Building on a paper from earlier this year, the researchers decided to study a mix of chemicals instead of individual toxins. That way, their analysis would be more reflective of real life.” STAT News’ Isabella Cueto covers recent research suggesting that higher levels of environmental exposure to toxins – lead, in this case – may also be contributing to higher risk of liver disease for African Americans.
- “Many dementia care providers are moving quickly to determine how to implement comprehensive dementia care that meets the criteria for GUIDE model participation. Adopting an existing evidence-based program will allow sites to accurately assess costs, payments, and benefits for their organization, staff, patients, and caregivers to provide care that improves patient and caregiver outcomes and is financially sustainable. This is a crucial step in helping communities meet the needs of the growing population of people with dementia and their caregivers.” An article at Health Affairs Forefront blog by Haggerty and colleagues describes the use of an evidence-based approach to caring for patients with dementia.
- “Our study demonstrated that the negative cross-lag association from screen time to developmental scores remained consistent throughout toddlerhood. Notably, we found a bidirectional association between TV/DVD screen time and developmental scores in the communication domain from age 1 to 2 years. Additionally, we observed negative associations between TV/DVD screen time at age 2 years and the developmental scores in gross motor, fine motor, and personal-social domains at age 3 years. A negative association between the developmental score at age 2 years and screen time at age 3 years was observed in the communication domain.” A research article published in JAMA Pediatrics by Yamamoto and colleagues examines the effects of screen time on a cohort of infants in Japan.
- “The early signals from influenza suggest the virus is settling back into the seasonal pattern it followed — to the degree the always mercurial bug follows any pattern — before the pandemic, said Alicia Budd, team lead for domestic flu surveillance at the Centers for Disease Control and Prevention. ‘All I can say is at this point we are at a pretty typical point in flu activity,’ she told STAT….Overall, the signs to date appear to portend a winter more like what we knew before the arrival of Covid, said Megan Culler Freeman, an assistant professor of pediatrics specializing in infectious diseases at the University of Pittsburgh.” STAT News’ Helen Branswell reports that COVID’s disruption to the usual seasonal rhythm of respiratory viral infections appears to be damping down.
COMMUNICATION, HEALTH EQUITY & POLICY
- “…the AI content marks a staggering fall from grace for Sports Illustrated, which in past decades won numerous National Magazine Awards for its sports journalism and published work by literary giants ranging from William Faulkner to John Updike….But now that it’s under the management of The Arena Group, parts of the magazine seem to have devolved into a Potemkin Village in which phony writers are cooked up out of thin air, outfitted with equally bogus biographies and expertise to win readers’ trust, and used to pump out AI-generated buying guides that are monetized by affiliate links to products that provide a financial kickback when readers click them.” A head-turning bit of investigative reporting by Futurism reveals that once-ubiquitous US sports magazine Sports Illustrated has descended into a hodgepodge of AI-generated text – and AI-generated “authors.”
- “There is no silver bullet for mitigating health misinformation….To more clearly discern the effects of various intervention designs and outcomes and make research actionable for public health efforts, the field urgently needs to include more public health experts in intervention design and to develop a health misinformation typology and agreed-upon outcome measures, as well as more global, more longitudinal, more video-based, and more platform-diverse studies.” A systematic review published in Health Affairs by Smith and colleagues attempts to extract the generalizable lessons imparted by the surge of misinformation that accompanied the COVID pandemic.
- “There is certainly enough copyrightable material available under license to build reliable, workable, and trustworthy AI. Just because a developer wants to use “everything” does not mean it needs to do so, is entitled to do so, or has the right to do so. Nor should governments and courts twist or modify the law to accommodate them.” A guest post at Scholarly Kitchen by Copyright Clearance Center’s Roy Kaufman examines the copyright implications of large language models’ voracious consumption of text-based training material.
- “After signing into their ACT account, if a student accepted cookies on the following page, Facebook received details on almost everything they clicked on—including scrambled but identifiable data like their first and last name, and whether they’re registering for the ACT. The site even registered clicks about a student’s ethnicity and gender, and whether they planned to request college financial aid or needed accommodations for a disability.” An investigation by The Markup’s Colin Lecher and Ross Teixeira reveals that Meta’s Pixel tracking software has been following teenagers and harvesting information as they visit college prep, testing, and other educational websites.
- “LLMs are clearly an exciting technology, but the current market environment is far from optimized to enable this technology to provide a solution for practicing physicians. In fact, adding LLMs to this milieu might exacerbate billing challenges for physicians. For example, if LLMs were used to support clinical documentation, health insurers could challenge the documentation as an LLM ‘hallucination.’ In such a dispute, a source of the truth of what services were actually provided would no longer exist. Technology could grind the billing process to a halt.” A viewpoint article published in JAMA by Schulman and colleagues takes a skeptical view of the potential for AI applications, by themselves, to tame healthcare costs related to administrative burdens.