AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
March 24, 2023
In today’s Duke AI Health Friday Roundup: simple strategies for countering bias in large language models; debate swirls about new childhood obesity guidelines; pumping the brakes on AI; genetics of dogs living near Chernobyl’s ruins; researchers intrigued by GPT-4 but want more info; military aviators, ground crews at heightened risk for some cancers; White House releases equitable data report; how to work with a data commons; finding collaborators across academic medical centers; much more:
AI, STATISTICS & DATA SCIENCE
- “An unfortunate trend has emerged in recent years of emphasizing a false dichotomy between statistics and machine learning, with the latter framed not as an approach to building learning computers but rather as a specific collection of data analytic models serving as a drop-in alternative to classical statistics. This betrays a limited understanding of machine learning and its history, as machine learning was codeveloped with and is inseparable from modern statistics.” A viewpoint article published in JAMA Pediatrics by Finlayson, Beam, and van Smeden makes a case for reconciling what the authors describe as a false dichotomy between statistics and machine learning for clinical research.
- “…a data commons is a shared resource to support a scientific community. Some of the challenges with shared resources were identified in 1968 when Garrett Hardin published an article in Science called The Tragedy of the Commons that focused attention on problems arising when a shared finite resource is used by a community. The governance structure is critical.” In a commentary for the journal Scientific Data, Robert L. Grossman shares data-sharing tips for researchers interested in working with community-oriented data resources known as data commons.
- “The team found that just prompting a model to make sure its answers didn’t rely on stereotyping had a dramatically positive effect on its output, particularly in those that had completed enough rounds of [reinforcement learning from human feedback] and had more than 22 billion parameters, the variables in an AI system that get tweaked during training. (The more parameters, the bigger the model. GPT-3 has around 175 billion parameters.) In some cases, the model even started to engage in positive discrimination in its output.” MIT Technology Review’s Niall Firth reports on a study that reveals a surprising pathway for reducing the potential for large language models to engage in undesirable or harmful behaviors; a minimal sketch of the prompting idea appears after this list.
- “…there is frustration in the science community over OpenAI’s secrecy around how the model was trained and what data were used, and how GPT-4 actually works. ‘All of these closed-source models, they are essentially dead ends in science,’ says Sasha Luccioni, a research scientist specializing in climate at HuggingFace, an open-source AI cooperative. ‘They [OpenAI] can keep building upon their research, but for the community at large, it’s a dead end.’” At Nature, Katharine Sanderson collects impressions from research scientists following the debut of OpenAI’s new large language model, GPT-4.
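For readers curious how the intervention described in the MIT Technology Review item might look in practice, here is a minimal sketch in Python: the same question is posed to a model with and without a prepended anti-stereotyping instruction so the two outputs can be compared. The instruction wording and the `query_model` stand-in are illustrative assumptions, not the study’s actual prompt or code.

```python
# Minimal, hypothetical sketch of prompt-based debiasing: ask a model the same
# question with and without a prepended anti-stereotyping instruction, then
# compare the outputs. The instruction text and `query_model` callable are
# illustrative placeholders, not the study's actual materials.

from typing import Callable, Dict

# Assumed wording; the study's exact instruction may differ.
DEBIAS_INSTRUCTION = (
    "Please ensure that your answer is unbiased and does not rely on stereotypes."
)


def build_prompt(question: str, debias: bool) -> str:
    """Optionally prepend the self-correction instruction to a question."""
    return f"{DEBIAS_INSTRUCTION}\n\n{question}" if debias else question


def compare_responses(
    question: str, query_model: Callable[[str], str]
) -> Dict[str, str]:
    """Run the same question through the model with and without the instruction."""
    return {
        "baseline": query_model(build_prompt(question, debias=False)),
        "instructed": query_model(build_prompt(question, debias=True)),
    }


if __name__ == "__main__":
    # Stub model so the sketch is self-contained; swap in a real LLM API call.
    def echo_model(prompt: str) -> str:
        return f"[model response to: {prompt[:50]}...]"

    for label, text in compare_responses(
        "Describe a typical software engineer.", echo_model
    ).items():
        print(f"{label}: {text}")
```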
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “The Pentagon said the new study was one of the largest and most comprehensive to date. An earlier study had looked at just Air Force pilots and had found some higher rates of cancer, while this one looked across all services and at both air and ground crews. Even with the wider approach, the Pentagon cautioned that the actual number of cancer cases was likely to be even higher because of gaps in the data, which it said it would work to remedy.” The Associated Press’ Tara Copp reports that a Department of Defense study has revealed higher than normal rates of certain kinds of cancer among military pilots and aviation ground crews.
- “And yet, a population of dogs somehow endured. They found fellowship with Chernobyl cleanup crews, and the power plant workers who remained in the area sometimes gave them food. (In recent years, adventurous tourists have dispensed handouts, too.)…Today, hundreds of free-ranging dogs live in the area around the site of the disaster, known as the exclusion zone. They roam through the abandoned city of Pripyat and bed down in the highly contaminated Semikhody train station.” Atomic dogs: The New York Times’ Emily Anthes reports on the intriguing story of scientists who are studying the genetic results of a natural experiment created when a population of stray dogs established itself in the radioactive environs of the ruined Chernobyl nuclear facility in Ukraine.
- “Ummy Mwalimu, Tanzania’s health minister, said 3 patients are hospitalized, and 161 contacts are under monitoring. An earlier media report said the initial illnesses were reported from two villages in Bukoba Rural District in the northwestern part of the country….Like Ebola, Marburg virus spreads through contact with body fluids of infected people. It has a case-fatality rate as high as 88%, and so far there are no approved vaccines or specific treatments.” The University of Minnesota’s Center for Infectious Disease Research and Policy reports on an outbreak of Marburg virus in Tanzania.
- “…[when] in January, the American Academy of Pediatrics released its first formal clinical practice guidelines centered on the screening and treatment of young patients with obesity, many eyes turned to the document…. Now that experts have had a couple of months to comb through the 100-page document, from executive summary to supporting material, one thing is clear: There is still no consensus on how best to approach obesity in children.” In an article for STAT News, Isabella Cueto and Theresa Gaffney capture the uncertainty surrounding the medical community’s understanding of how to treat obesity in children.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Here’s the weird thing, though. The very same researchers who are most worried about unaligned AI are, in some cases, the ones who are developing increasingly advanced AI. They reason that they need to play with more sophisticated AI so they can figure out its failure modes, the better to ultimately prevent them….But there’s a much more obvious way to prevent AI doom. We could just … not build the doom machine. Or, more moderately: Instead of racing to speed up AI progress, we could intentionally slow it down.” In an article at Vox, Sigal Samuel makes a case for applying the brakes amid the onrush of new AI applications, many of them based on large language models, that have recently made their public debuts.
- “Many Black Californians report adjusting their appearance or behavior — even minimizing questions — all to reduce the chances of discrimination and bias in hospitals, clinics, and doctors’ offices. Of the strategies they describe taking, 32% pay special attention to how they dress; 35% modify their speech or behavior to put doctors at ease. And 41% of Black patients signal to providers that they are educated, knowledgeable, and prepared.” At California Healthline, Annie Sciacca reports on study findings that demonstrate the efforts that Black Californians feel obliged to take to minimize the impact of discrimination and bias at the hands of healthcare providers.
- “In 2018, a novel analytic resource navigation process was developed at Duke University to connect potential collaborators, leverage resources, and foster a community of researchers and scientists. This analytic resource navigation process can be readily adopted by other academic medical centers. The process relies on navigators with broad qualitative and quantitative methodologic knowledge, strong communication and leadership skills, and extensive collaborative experience.” A paper published in Academic Medicine by Pomann and colleagues describes a program piloted at Duke University that was designed to facilitate partnerships between researchers at academic medical centers.
- The White House Office of Science and Technology Policy has just announced the publication of a report that documents “…progress the Biden-Harris administration has made in collecting and analyzing data to help identify disparities in federal policies and programs in order to deliver more equitable outcomes for the American people.”