In this week’s Duke AI Health Friday Roundup: large language models and evidence-based medicine; “digital bridge” restores mobility after paralysis; impact of legislation on access to gender-affirming care; switching endpoints common in clinical trials, reporting less so; prospects for generative AI in medicine; publishing credits join college entrance arms race; Surgeon General addresses concerns about social media and youth mental health; much more:
AI, STATISTICS & DATA SCIENCE
- “We found not only a high rate of PEP [primary end point] changes in clinical trials after trial initiation but also considerable variability in reporting of PEP changes across available methods. There was substantial underreporting of PEP changes within published articles, and only one-third of RCTs provided protocols.” A cross-sectional analysis published in JAMA Network Open by Florez and colleagues finds that clinical trials in oncology often changed the study’s primary endpoint in the midst of the trial, and then frequently failed to note this change when reporting results in published manuscripts.
- “With the increasing use of machine learning and artificial intelligence in health care research, this incomplete list of common research design and analysis pitfalls may seem somewhat old-fashioned. Despite the arguably more complex nature of such analyses, many of the aforementioned issues also apply to such studies.” An article by statistician Maarten van Smeden, appearing in PRiMER, offers an introductory overview of some common mistakes in research design and data analysis.
- “Our results indicate that some researchers have prioritized reporting a “good” result in their abstract that will help them publish their paper. By doing so, the wider issues of what is needed to produce a high-quality prediction model are downplayed. An AUC value alone cannot determine if a model is ‘acceptable’ or ‘excellent’. As a measure of model discrimination, the AUC represents just one aspect of prediction model performance.” A preprint by White and colleagues presents evidence of problematic analytical practices in clinical research examining prediction models, particularly those that rely on a measure known as the area under the receiver operating characteristic curve, or AUC.
- “Lay summaries generated by LLMs may suffer from inadequate accuracy and should not be used in isolation. Some information may be omitted, and recommendations could be ambiguous or confusing. Artificial intelligence (AI)-generated lay summaries should always be reviewed by experts and corrected or clarified where necessary, so that they provide accurate and comprehensible information. More importantly, both human-written summaries from systematic reviews and lay summaries should be made available to the patients to support information provenance.” A viewpoint article by Peng and colleagues published in Nature Medicine examines some of the potential use cases – and limitations – for using large language model (LLM) AIs in evidence synthesis and the creation of patient-facing materials.
- “It remains unclear, for example, how well these models will perform, and what privacy and ethical quandaries will arise, when they’re exposed to new types of data, such as genetic sequences, CT scans, and electronic health records. Even knowing exactly how much data must be fed into a model to achieve peak performance on a given task is still largely guesswork.” STAT News reporters Casey Ross, Brittany Trang, and Mario Aguilar speak with various experts to get their takes on the prospects for generative AI models in healthcare applications.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “There is broad concern among the scientific community that a lack of access to data and lack of transparency from technology companies have been barriers to understanding the full scope and scale of the impact of social media on child and adolescent mental health and well-being. While more research is needed to fully understand the impact of social media, this gap in knowledge cannot be an excuse for inaction.” A report from the US Surgeon General broaches concerns about the unknown mental health consequences of youth exposure to social media.
- “The digital bridge was implanted in one participant who had become tetraplegic nearly a decade before the trial. At enrolment, he was unable to take steps on his own. A reliable digital bridge was established in less than a few minutes, and it required only infrequent recalibration over the subsequent 20 months. The bridge enabled the participant to regain intuitive control over the movements of his legs, giving him the ability to stand, walk, climb stairs and even traverse complex terrains.” A clinical summary of a research article by Lorach and colleagues published in Nature describes the use of an implanted “digital bridge” to restore mobility in a partially paralyzed research volunteer.
- “…while wearables have transformed everyday life, their impact on clinical trials for drugs remains largely hypothetical. In part, that’s because digital biomarkers, just like the other rigorously validated measures that drug companies use, are time-consuming, expensive, and difficult to develop.” At STAT News, Mario Aguilar reports on the challenges of clinical research that hinges on the analysis of digital biomarkers.
- “Multiple sclerosis (MS) is an inflammatory disease of the central nervous system, for which an Epstein-Barr virus (EBV) infection is a likely prerequisite. Due to the homology between Epstein-Barr nuclear antigen 1 (EBNA1) and alpha-crystallin B (CRYAB), we examined antibody reactivity to EBNA1 and CRYAB peptide libraries in 713 persons with MS (pwMS) and 722 matched controls (Con).” A research article published in Science Advances by Thomas and colleagues adds mechanistic detail to earlier findings implicating the Epstein-Barr virus (EBV) as a factor in the development of multiple sclerosis.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Regardless of the enforcement of these laws, the chilling effect caused by both perceived and actual legal threat, as well as harassment and threats against physicians, has had an adverse effect on gender-affirming care practices. The targeting of physicians through these legal penalties impedes them from practicing evidence-based medicine and blocks patients from accessing standard of care treatments.” A viewpoint article in JAMA by Mallory and colleagues examines the fallout of recent legislative activity targeting gender-affirming care for transgender persons.
- “There is a myth that older people are unable to participate in digital studies due to a lack of digital competence. However, there is ample evidence that the participation has rapidly increased for this age group, especially during the COVID-19 pandemic. Large-scale sensor-based studies started to include older populations…and our ongoing longitudinal online study also suggests that tele-research is feasible with older adults from Black, Asian, and minority ethnic communities.” In a commentary published in Lancet Digital Health, Guu and colleagues advocate for greater inclusion of older persons in studies involving wearable devices.
- “Health care organizations often praise themselves in press releases for addressing social needs through increased patient screenings and referrals, while local organizations ranging from food pantries to housing agencies find themselves swamped and underfunded. In the process, those of us in health care delivery roles are missing the opportunity to connect with our patients, hear their stories, and truly understand their needs. By reducing our patient interactions to pressured button-clicking, we are denying people the dignity and respect they deserve.” An article at STAT News by Waymark’s Sanjay Basu compares the relative merits of a narrative medicine approach with “screen and refer systems” for ensuring that patients’ social service needs are met.
- “…online student journals now present work that ranges from serious inquiry by young scholars to dubious papers whose main qualification seems to be that the authors’ parents are willing to pay, directly or indirectly, to have them published. Usually, the projects are closely directed by graduate students or professors who are paid to be mentors. College admissions staff, besieged by applicants proffering links to their studies, verify that a paper was published but are often at a loss to evaluate its quality.” An investigation by Daniel Golden and Kunal Purohit at ProPublica explores the growing industry that supplies high-schoolers with yet another advantage in the scramble for college admissions – an ostensibly peer-reviewed “publication.”