AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
September 15, 2023
In this week’s Duke AI Health Friday Roundup: navigating multimodal learning in AI; fine particulate pollution and breast cancer; surreptitious ChatGPT use pops up in scientific literature; the challenges of safeguarding generative AI against prompt injection; FDA panel gives thumbs-down to ubiquitous decongestant phenylephrine; study surveys standards for employing AI in journalism; twin study of WWII veterans sheds light on consequences of traumatic brain injuries; much more:
AI, STATISTICS & DATA SCIENCE
- “…as we’ve navigated through a plethora of challenges and innovations, one message stands out: the road to effective multimodal AI systems built responsibly demands rigorous evaluation, an understanding of real-world complexities, and a commitment to continual improvement. We hope that these recent results will inspire ambitious work forward in the space of reframing the evaluation of multimodal models such that it properly captures their performance from initial evidence to rigorous benchmarks, complex skills, and eventually real-world and human-centered scenarios.” A post last week at Microsoft’s Research Blog documents a collaborative experiment in multimodal learning that yielded some insights into ways the approach could introduce risks of bias and harm.
- “Despite their known shortcomings, algorithms already recommend who gets hired by companies, which patients get priority for medical care, how bail is set, what television shows or movies are watched, who is granted loans, rentals or college admissions and which gig worker is allocated what task, among other significant decisions….In practice, however, news reports and research have shown these algorithms are prone to some alarming errors. And their decisions can have adverse and long-lasting consequences in people’s lives.” An article by Ananya at Scientific American delves into some of the ways machine learning systems can conserve and perpetuate bias – and those systems may already be impacting people’s daily lives.
- “Direct prompt injections happen when someone tries to make the LLM answer in an unintended way—getting it to spout hate speech or harmful answers, for instance. Indirect prompt injections, the really concerning ones, take things up a notch. Instead of the user entering a malicious prompt, the instruction comes from a third party. A website the LLM can read, or a PDF that’s being analyzed, could, for example, contain hidden instructions for the AI system to follow.” Wired’s Matt Burgess reports on one of the most worrisome security flaws afflicting generative AIs – so-called prompt injection attacks.
- STAT News has assembled a continuously updated tracker that follows health systems that are adopting generative AI for clinical applications (subscription required).
- “WHR has the strongest associations with the risks of common health conditions. Despite these findings, extending now over several decades, WHR is rarely measured in clinical or home settings. One reason is that healthcare workers and people with obesity are not well trained on the nuances of anthropometric measurements…Recent developments in computer vision now have the potential to transform the measurement of biometrics, including WHR.” A research article by Choudhary and colleagues published in NPJ Digital Medicine describes the creation of a smartphone app capable of providing sufficiently accurate waist-to-hip circumference ratios, a biometric measurement associated with a number of health outcomes.
- “’There are places where, as a computer scientist, I cringe a bit because the issues with predictive AI machine learning—the tools used for decision-making—are very different from some of the issues associated with generative AI,’…” Fast Company’s Issie Lapowsky talks with Suresh Venkatasubramanian about his work on developing the White House’s “Blueprint for an AI Bill of Rights.”
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “The panel unanimously voted against the effectiveness of orally administered phenylephrine as a nasal decongestant, adding that no more trials were required to prove otherwise.” Reuters Health reports news that an FDA advisory committee recently found that a ubiquitous decongestant drug, phenylephrine, used in over-the-counter cold remedies, is not actually effective.
- “In this large, prospective cohort of women across the U.S., we observed an 8% increase in breast cancer risk for a 10 μg/m3 increase in estimated historic PM2.5 concentrations during a period 10-15 years before enrollment. This association was evident for ER [estrogen receptor]+, but not ER-, tumors.” A research article published in the Journal of the National Cancer Institute by White and colleagues reports results from a cohort study designed to assess associations between particulate air pollution and breast cancer (H/T @JenniferPlichta).
- “The study of identical and fraternal twins allows researchers to compare participants to each other while controlling for some, if not all, of the underlying genetic factors and some of the twins’ early life conditions. Identical twins share 100 percent of their genes, while fraternal twins share about half.” The Washington Post’s Teddy Armenabar reports on a registry study by Duke scientists, recently published in Neurology, that followed twins who served in World War II in an attempt to elucidate connections between brain injuries and the development of dementia in later life.
- A brief research article recently published in JAMA Network Open by Schlemm and colleagues reports findings from a cohort study that showed that expert physician assessment of patients out-performed existing triage scales in identifying acute stroke with large vessel occlusion, suggesting the possibility that this diagnostic approach could be expanded to telehealth settings.
- Life, uh, finds a way: “It all started in 1996 when raging floods swept six young bull sharks from a nearby river into a 51-acre lake near the golf course’s 14th hole. When the floodwaters receded, the sharks found themselves stuck, surrounded by grassy hills and curious golfers. The sharks spent 17 years in the lake, sustaining themselves on its large stock of fish and on the occasional meat treat provided by the club’s staff.” The New York Times’ Annie Roth has the story.
COMMUNICATION, Health Equity & Policy
- “Cabanac has detected typical ChatGPT phrases in a handful of papers published in Elsevier journals. The latest is a paper that was published on 3 August in Resources Policy that explored the impact of e-commerce on fossil-fuel efficiency in developing countries… Cabanac noticed that some of the equations in the paper didn’t make sense, but the giveaway was above a table: ‘Please note that as an AI language model, I am unable to generate specific tables or conduct tests …’” Nature’s Gemma Conroy reports on recent instances of surreptitious uses of ChatGPT popping up in the published literature (H/T @RetractionWatch).
- “Our study shows that publishers have already begun to converge in their guidelines on key points such as transparency and human supervision when dealing with AI-generated content. However, we argue that national and organisational idiosyncrasies continue to matter in shaping publishers’ practices, with both accounting for some of the variation seen in the data. We conclude by pointing out blind spots around technological dependency, sustainable AI, and inequalities in current AI guidelines and providing directions for further research.” A preprint by Becker and colleagues, available at SocArXiv, compares professional guidelines developed and adopted by news publishers that relate to the use of AI for content generation and other applications.
- “Wilson, who is head of research development at Durham University, UK, describes being overlooked during an external meeting with collaborators where attendees were asked to introduce themselves. She was the only woman and professional services representative in the room.” A Nature Careers article by Dom Byrne, part of a podcast series on research culture, details some of the ways nonfaculty research managers may feel marginalized on team science projects.
- “Health-freedom groups spent the height of the pandemic stoking mistrust as a fundraising ploy, and now court new audiences for hawking products and ideology alike. A company that says its products are backed by “science” — yet stands to profit as science deniers in its sales force tout them as a panacea of choice — harms consumers, health misinformation experts said, by pushing falsehoods along with pills.” A report by STAT News’ Lindsay Gellman examines the intersection of burgeoning mistrust in scientific medicine and companies that leverage mistrust and confusion as part of their marketing approach.