In this week’s Duke AI Health Friday Roundup: quantum computing plus LLMs; the case for zero-shot translation in scientific LLM applications; FTC warns model-as-service companies to toe the line on privacy; semaglutide use associated with reduced suicidal ideation; series examines developing, validating clinical prediction models; using LLMs to surface social determinants of health; FDA warns over declining vaccination rates; predatory publishing in medical education; much more:
AI, STATISTICS & DATA SCIENCE
- “The big unanswered question is whether there are scenarios in which quantum machine learning offers an advantage over the classical variety. Theory shows that for specialized computing tasks, such as simulating molecules or finding the prime factors of large whole numbers, quantum computers will speed up calculations that could otherwise take longer than the age of the Universe. But researchers still lack sufficient evidence that this is the case for machine learning. Others say that quantum machine learning could spot patterns that classical computers miss — even if it isn’t faster.” Nature’s Davide Castelvecchi examines vistas recently opened by advances in both LLM AI and quantum computing – particularly if the two are combined.
- “Factors like housing, transportation, financial stability, and community support play a critical role in patients’ health once they leave the doctor’s office. But it takes concerted effort to screen patients for gaps in these so-called social determinants of health — and even when screening occurs, this critical information is usually scattered in the rambling clinical notes that providers write each time a patient has a visit.” STAT News’ Katie Palmer reports on recent research exploring applications for LLM AI that could help reveal patients’ needs for social support – but also potentially expose them to risks.
- “Despite the increasing number of models, very few are routinely used in clinical practice owing to issues including study design and analysis concerns (eg, small sample size, overfitting), incomplete reporting (leading to difficulty in fully appraising prediction model studies), and no clear link into clinical decision making. Fundamentally, there is often an absence or failure to fairly and meaningfully evaluate the predictive performance of a model in representative target populations and clinical settings. Lack of transparent and meaningful evaluation obfuscates judgments about the potential usefulness of the model…” A statistical methods article (the first in a series of three) published in the BMJ by Collins and colleagues addresses the development and validation of clinical prediction models.
- “Zero-shot translation grounds the output of an LLM in factual or reliable human-authored content (for example, working code or vetted facts). The user provides both facts and intent to the model, which enables the output to be more easily scrutinized for the introduction of new information and hallucinations. This is not, however, a silver bullet for truth in LLMs.” A viewpoint article published in Nature Human Behaviour by Mittelstadt, Wachter, and Russell advocates for an approach known as “zero-shot translation” in situations where the accuracy and reliability of information produced by the AI are paramount.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “During the period when the Delta variant was dominant, the BNT162b2 vaccine was associated with strong protection in adolescents, with effectiveness higher than 95%, and with little evidence of waning during the follow-up period.…during the period when the Omicron variant was dominant, the estimated vaccine effectiveness was approximately 70% for children and 85% for adolescents. The estimated protection decreased by roughly 10% around 4 months from the first dose and slightly waned over time.” An analysis published in the Annals of Internal Medicine by Wu and colleagues examines the effectiveness of the Pfizer-BioNTech COVID vaccine in preventing infection and severe disease in cohorts of adolescents and children during waves of the Delta and Omicron variants of the virus.
- “Tracking the patients’ medical histories through six months after they were prescribed medication, the researchers found people prescribed semaglutide for weight loss had a 0.11% risk of first-time suicidal ideations (among those without a prior history) and approximately a 7% risk of recurrent suicidal ideation (among those with a prior history), compared to 0.43% and 14%, respectively, for the group prescribed other weight loss medications….In patients with type 2 diabetes, semaglutide prescription was associated with 0.13% risk of first-time suicidal ideations and 10% for recurrent ideation, compared to 0.36% and 18%, respectively, for other diabetes medications.” A National Institutes of Health press release highlights recent study findings that reveal an association between use of the diabetes/obesity drug semaglutide and a reduction in risk of suicidal ideation.
- “The Department of Health and Human Services will name Stacy Sanders as its new chief competition officer, the agency announced Monday. The new position was recently created as part of an administration crackdown on corporate greed in health care….As chief competition officer, Sanders will look for ways the health department can promote competition in health care markets. She will also work with the Federal Trade Commission and the Department of Justice to develop new policy initiatives to address consolidation, and coordinate data-sharing and reciprocal training programs with the other two agencies.” STAT News’ Brittany Trang reports on the Department of Health and Human Services’ naming of its first “chief competition officer.”
- “Setting aside for now the controversial issue of vaccine mandates at the federal, state, or local level in the US, which are not within the purview of the Food and Drug Administration (FDA), the situation has now deteriorated to the point that population immunity against some vaccine-preventable infectious diseases is at risk, and thousands of excess deaths are likely to occur this season due to illnesses amenable to prevention or reduction in severity of illness with vaccines.” A viewpoint article published last week in JAMA by FDA Center for Biologics Evaluation and Research Director Peter Marks and FDA Commissioner Robert Califf warns of the consequences of declining uptake of vaccination in the US.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Given the aforementioned understanding (or lack thereof) and utilisation of predatory publishers by medical students it is evident that training and education on the topic will become necessary for this group to prevent adding further legitimacy to these publishers and erosion of the literature base.” Just how serious a problem is predatory publishing for medical education? A scoping review by Owen W. Tomlinson, published in BMC Medical Education, attempts to find out.
- “While research misconduct is often handled by separate offices from those that support data management at academic institutions, it is beneficial for data management specialists to be aware of how misconduct can occur and how this relates to data management. Data librarianship involves relationship building and thinking about the entire data lifecycle, making research ethics a natural avenue for building further connections.” A commentary article published by Coates and colleagues in the Journal of eScience Librarianship uses case study examples to explore the role of data management in preventing research misconduct.
- “Mur also emailed the editors of The Lancet Public Health in late May, asking if they would share any other authors’ email addresses they had. He explained that he’d been having trouble replicating some of the findings in the paper, and the corresponding author hadn’t responded to his emails.” Retraction Watch dives into the saga of a high-profile paper that linked hearing-aid use with a reduced risk of dementia after alert scientists attempting to replicate its findings noticed problems with the paper’s conclusions.
- “Model-as-a-service companies that fail to abide by their privacy commitments to their users and customers, may be liable under the laws enforced by the FTC. This includes promises made by companies that they won’t use customer data for secret purposes, such as to train or update their models—be it directly or through workarounds.” At its blog, the Federal Trade Commission reminds AI companies selling model-as-a-service products to honor their obligations to protect user and customer privacy – or else.