AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
March 15, 2024
In this week’s Duke AI Health Friday Roundup: implementing generative AI in healthcare; landmark study looks at health consequences of microplastics; using AI to distill summaries from patient discharge notes; scientific misconduct haunts Alzheimer research; foundation models on the cutting edge of biological discovery; lean budget times may be ahead for research agencies; study flags bias regarding use of GPT in hiring decisions; plagiarism in peer review; much more:
AI, STATISTICS & DATA SCIENCE
- “When asked to rank those resumes 1,000 times, GPT 3.5 — the most broadly-used version of the model — favored names from some demographics more often than others, to an extent that would fail benchmarks used to assess job discrimination against protected groups. While this test is a simplified version of a typical HR workflow, it isolated names as a source of bias in GPT that could affect hiring decisions. The interviews and experiment show that using generative AI for recruiting and hiring poses a serious risk for automated discrimination at scale.” A report by Bloomberg’s Leon Yin, Davey Alba and Leonardo Nicoletti suggests that the use of ChatGPT to screen job applications and resumes may be reinforcing bias. (A rough sketch of this kind of name-based ranking audit appears after this list.)
- “…AI safety is not a model property. With a few exceptions, AI safety questions cannot be asked and answered at the levels of models alone. Safety depends to a large extent on the context and the environment in which the AI model or AI system is deployed. We have to specify a particular context before we can even meaningfully ask an AI safety question.” At their AI Snake Oil blog, Arvind Narayanan and Sayash Kapoor critique current conceptions of AI safety and propose an alternative approach.
- “Leadership will be required first and foremost to push continued model development, validation, and implementation. Currently, generative AI models have been developed by startup companies, research groups, as well as academic healthcare systems. Given these varied developers, guidance from a leadership body is needed to clarify the practical path towards implementation. Leadership should focus on developing guidelines for model performance (i.e. minimizing model hallucination), data sharing, finding the optimal healthcare settings for clinical trials using generative AI tools, as well as clarifying the needs of different healthcare settings.” An editorial by Raza and colleagues published in NPJ Digital Medicine surveys the challenges of implementing generative AI in healthcare.
- “In this cross-sectional study of 50 discharge summaries, we found that generative AI successfully transformed discharge summaries into a format that was more readable and understandable for patients. Conventional standards of readability recommend creating materials appropriate for a sixth grade reading level. We were able to show that our patient-friendly discharge summaries were consistently at a sixth or seventh grade reading level…” In a research article published in JAMA Network Open, Zaretsky and colleagues present findings from a study that evaluated the use of generative AI to convert patient discharge notes into user-friendly and readable summaries. (A simple illustration of this kind of reading-level check appears after this list.)
- “The software is one of several new A.I.-powered programs, known as foundation models, that are setting their sights on the fundamentals of biology. The models are not simply tidying up the information that biologists are collecting. They are making discoveries about how genes work and how cells develop….As the models scale up, with ever more laboratory data and computing power, scientists predict that they will start making more profound discoveries.” A New York Times article by Carl Zimmer examines how AIs are being used to probe the frontiers of biology – and whether that role is sustainable.
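To make the Bloomberg item above concrete, here is a minimal, hypothetical sketch of the kind of name-based ranking audit it describes: tally how often resumes bearing names associated with each demographic group are ranked first, then compare each group’s rate against the highest-rated group under the EEOC’s four-fifths rule. The group labels, counts, and trial total below are illustrative assumptions, not Bloomberg’s data or code.

```python
# Hypothetical adverse-impact check in the spirit of the Bloomberg experiment.
# Counts are made up: out of 1,000 ranking trials, how often a resume bearing
# a name associated with each demographic group was ranked first.
top_rank_counts = {
    "group_a": 310,
    "group_b": 270,
    "group_c": 230,
    "group_d": 190,
}
trials = 1000

# Selection rate per group, and the best-performing group's rate as the baseline.
rates = {group: count / trials for group, count in top_rank_counts.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    # Four-fifths rule: a ratio below 0.8 is commonly treated as evidence
    # of adverse impact against that group.
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} ({flag})")
```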
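Similarly, for the Zaretsky item above, the sketch below shows one way to check whether a patient-facing summary lands near the recommended sixth-grade reading level. It assumes the third-party textstat package and a made-up summary string; the study’s own readability tooling and thresholds may differ.

```python
# Reading-level check for a patient-facing summary (requires: pip install textstat).
import textstat

# Hypothetical example of a simplified, patient-friendly discharge summary.
patient_summary = (
    "You came to the hospital because you had chest pain. "
    "Tests showed your heart is healthy. "
    "Take your new medicine once a day and see your doctor in two weeks."
)

grade = textstat.flesch_kincaid_grade(patient_summary)
print(f"Flesch-Kincaid grade level: {grade:.1f}")

if grade <= 7:
    print("Within the sixth-to-seventh grade range reported in the study.")
else:
    print("Above the recommended reading level; consider simplifying further.")
```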
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “Plastics are just about everywhere — food packaging, tyres, clothes, water pipes. And they shed microscopic particles that end up in the environment and can be ingested or inhaled by people….Now the first data of their kind show a link between these microplastics and human health. A study of more than 200 people undergoing surgery found that nearly 60% had microplastics or even smaller nanoplastics in a main artery…Those who did were 4.5 times more likely to experience a heart attack, a stroke or death in the approximately 34 months after the surgery than were those whose arteries were plastic-free.” Nature’s Max Kozlov unpacks the implications of recent findings, published in the New England Journal of Medicine, that show that microplastics – now a pervasive source of environmental pollution – have potentially significant health implications for humans.
- “Dozens of mollusks, three fish, a shrimp and a cephalopod that is a type of predatory mollusk were among the new species found in the expedition, which was led by Ocean Census, a nonprofit dedicated to the global discovery of ocean life, the National Institute of Water and Atmospheric Research in New Zealand, and the Museum of New Zealand Te Papa Tongarewa.” In happier news: The New York Times’ Rebecca Carballo reports on a recent survey of deep-sea marine life off the coast of New Zealand that netted scores of new species.
- “Research supports the idea that gender, ethnicity, and sexual orientation diversity benefits institutions, including attracting high-level talent and increasing profitability, to name a few….But the representation of disabled principal investigators in academia has declined, dropping from 2% to 1.3% between 2008 and 2022, even though disabled workers’ overall labor force participation rate increased during that period. The NIH needs to use its power over the scientific ecosystem — and its newly published set of landmark recommendations on disability inclusion — to reverse this trend.” An opinion article published in STAT News by Elizabeth Weaver II and Kiana Jackson addresses the shortcomings of efforts to engage with disabled scientists in academic research.
- “The experimental evidence suggests that people have moral preferences, that is, preferences for doing what they view as the ‘right thing’. These preferences cannot be expressed solely in terms of monetary payoffs. We review the literature, and discuss attempts to construct a utility function that captures peoples’ moral preferences. We then consider more broadly the issue of language. The key takeaway message is that what matters is not just the monetary payoffs associated with actions, but also how these actions are described.” A forthcoming article in the Journal of Economic Literature by Capraro and colleagues proposes a new framework for thinking about preferences in the context of behavioral economics.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Scientists, prepare to tighten your belts. This week, the U.S. Congress is expected to approve six 2024 spending bills that call for sizable cuts or essentially flat budgets at a number of major federal research agencies….The National Science Foundation (NSF) is the biggest loser, with lawmakers imposing an 8.3% cut to $9.06 billion, some $820 million below 2023. NASA’s science programs will fall by 5.9% to $7.3 billion. Congress also reduced research-related spending at the Environmental Protection Agency (EPA), the U.S. Geological Survey (USGS), and the National Institute of Standards and Technology (NIST). The U.S. Department of Agriculture’s (USDA’s) research spending remained flat.” Science reports some grim news regarding funding outlooks for a number of federal research agencies in upcoming Congressional budgets.
- “Considering the above, it should not be perhaps so surprising that some reviewers may shamelessly take the easy way out by resorting to “copy and paste”, the main benefit being saving time. Another possible motivation for a plagiarized review could be simply feeling insecure to use one’s own words to write reviews…” In an article for Scientometrics, Piniewski and colleagues tackle a new wrinkle in publication shenanigans – plagiarized peer reviews.
- “The fallout from fraud has a wide impact. Taxpayer dollars are wasted in the creation of the fraud and in attempts to replicate the failed experiment. Grad students — the workhorses of labs — waste time trying to repeat studies or may be bullied or intimidated into misconduct, said Elisabeth Bik, PhD, a former Stanford microbiologist who is now a full-time fraud investigator.” Medscape’s Alicia Ault explores the alarming amount of research fraud and other forms of misconduct permeating the field of Alzheimer disease research.