AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
October 21, 2022
In today’s Duke AI Health Friday Roundup: the invisible work underpinning AI; meta-research study reveals unexplained variance; dish of neurons learns to play Pong; toolkits for ameliorating AI bias; results from Moderna vaccine trial in kids; inequities in internet access; confronting racism in the culture of science; factors behind steep US life expectancy declines; AI translators for spoken language; much more:
AI, STATISTICS & DATA SCIENCE
- “Far from the sophisticated, sentient machines portrayed in media and pop culture, so-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions.” An essay at Noēma by Adrienne Williams, Milagros Miceli, and Timnit Gebru addresses the role that “ghost work” – hidden and often poorly compensated labor – plays in propping up the global AI industry.
- “…we find that research teams reported widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predicted the wide variation in research outcomes. More than 90% of the total variance in numerical results remained unexplained even after accounting for research decisions identified via qualitative coding of each team’s workflow. This reveals a universe of uncertainty that is hidden when considering a single study in isolation.” A preprint version of an article by Breznau and colleagues, forthcoming from PNAS, provides results from a large meta-study that engaged scores of research teams to independently analyze the same set of social-science data (a sketch of this kind of variance decomposition appears after this list).
- “…as an olfactory neuroscientist for Google Research’s Brain Team, Wiltschko used machine learning to dissect our most ancient and least understood sense….Their findings significantly improved researchers’ ability to compute the smell of a molecule from its structure. Moreover, the way they improved those calculations gave new insights into how our sense of smell works, revealing a hidden order in how our perceptions of smells correspond to the chemistry of the living world.” Quanta’s Allison Parshall reports on recent AI modeling work that maps the connections between perceived smells and metabolic processes.
- “The increasing use of machine learning (ML) algorithms in clinical settings raises concerns about bias in ML models. Bias can arise at any step of ML creation, including data handling, model development, and performance evaluation. Potential biases in the ML model can be minimized by implementing these steps correctly. This report focuses on performance evaluation and discusses model fitness, as well as a set of performance evaluation toolboxes…” A Special Report published by Faghani and colleagues at Radiology: Artificial Intelligence discusses different components of a toolkit for avoiding bias in machine learning (a sketch of subgroup performance evaluation appears after this list).
- “Collecting sufficient data was a significant obstacle we faced when setting out to build a Hokkien translation system. Hokkien is what’s known as a low-resource language, which means there isn’t an ample supply of training data readily available for the language, compared with, say, Spanish or English. In addition, there are relatively few human English-to-Hokkien translators, making it difficult to collect and annotate data to train the model.” A report from Meta AI researchers, published on the Meta AI blog, describes an AI speech translation system designed to render translations for languages that do not have a significant corpus of written examples.
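For readers curious what the Breznau et al. finding looks like in practice, the sketch below regresses simulated team-level estimates on coded analytic decisions and reports how much of the variance those decisions fail to explain. All variable names, decision codes, and effect sizes here are invented for illustration; this is not the authors’ code.

```python
# A minimal sketch (not the authors' pipeline) of the kind of variance
# decomposition in Breznau et al.: regress each team's point estimate on
# coded workflow decisions and see how much variance is left over.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_teams = 80

# Hypothetical coded analytic decisions (e.g., estimator choice,
# covariate set, sample restrictions), one row per research team.
decisions = pd.DataFrame({
    "logit_model": rng.integers(0, 2, n_teams),
    "extra_covariates": rng.integers(0, 2, n_teams),
    "restricted_sample": rng.integers(0, 2, n_teams),
})

# Simulated team-level estimates: the coded decisions account for only
# a small slice of the spread, echoing the paper's headline finding.
estimates = (0.05 * decisions["logit_model"]
             + 0.03 * decisions["extra_covariates"]
             + rng.normal(0, 0.5, n_teams))

fit = LinearRegression().fit(decisions, estimates)
r2 = fit.score(decisions, estimates)
print(f"variance explained by coded decisions: {r2:.1%}")
print(f"unexplained variance: {1 - r2:.1%}")
```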
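And in the spirit of the performance-evaluation focus of the Faghani et al. report, this second sketch compares a classifier’s discrimination overall and within subgroups, the kind of check that can surface a bias a single pooled metric hides. The data, model, and group attribute are synthetic stand-ins, not anything drawn from the report’s toolboxes.

```python
# A minimal sketch of subgroup performance evaluation: report AUC per
# group alongside the overall figure rather than a single pooled metric.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data with a made-up binary group attribute (a stand-in for
# a demographic or site variable in a real clinical dataset).
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print(f"overall AUC: {roc_auc_score(y_te, scores):.3f}")
for g in (0, 1):
    mask = g_te == g  # evaluate discrimination within each subgroup
    print(f"group {g} AUC: {roc_auc_score(y_te[mask], scores[mask]):.3f}")
```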
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- Nature’s Jo Marchant reports on a remarkable archaeological find: an ancient star chart, tucked into a pile of medieval manuscripts, that may represent a key scientific milestone: “Evans says it proves that Hipparchus, often considered the greatest astronomer of ancient Greece, really did map the heavens centuries before other known attempts. It also illuminates a crucial moment in the birth of science, when astronomers shifted from simply describing the patterns they saw in the sky to measuring and predicting them.”
- Not sure how to feel about this, given how indifferent our own console Pong skills were back in the day, and that was with the benefit of an entire brain devoted to the exercise. Nevertheless: “A dish of living brain cells has learned to play the 1970s arcade game Pong. About 800,000 cells linked to a computer gradually learned to sense the position of the game’s electronic ball and control a virtual paddle, a team reports in the journal Neuron….The novel achievement is part of an effort to understand how the brain learns, and how to make computers more intelligent.” NPR’s Jon Hamilton has the story.
- Sobering news on US life expectancy, released by the National Center for Health Statistics this past August, is unpacked by Scientific American’s Tanya Lewis: “With a few notable exceptions—such as during the 1918 influenza pandemic, World War II and the HIV crisis—life expectancy in the U.S. has had a gradual upward trajectory over the past century. But that progress has steeply reversed in the past two years as COVID and other tragedies have cut millions of lives short.”
- At the New England Journal of Medicine, a flurry of papers of interest this week: A report on myocarditis incidence in Israeli adolescents who received the Pfizer/BioNTech COVID vaccine finds that “…BNT162b2 vaccine–induced myocarditis in adolescents appears to be a rare adverse event that occurs predominantly in males after the second vaccine dose. The clinical course appears to be mild and benign over a follow-up period of 6 months, and cardiac imaging findings suggest a favorable long-term prognosis.” Also out this week: a review article on myocarditis by Cristina Basso and a report on the safety and efficacy of the Moderna COVID vaccine in children 6 months to 5 years of age.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Using an A.I. program is not ‘plagiarism’ in the traditional sense—there’s no previous work for the student to copy, and thus no original for teachers’ plagiarism detectors to catch. Instead, a student first feeds text from either a single or multiple sources into the program to begin the process. The program then generates content by using a set of parameters on a topic, which then can be personalized to the writer’s specifications. With a little bit of practice, a student can use AI to write his or her paper in a fraction of the time that it would normally take to write an essay.” An article at Slate by Aki Peritz examines the academic implications of ubiquitous, free AI applications that can spit out plausible undergraduate essays with minimal effort.
- “…his ‘a-ha’ moment came when he joined the computer-science faculty at Auburn University in Alabama, where there was one other Black faculty member and two Black PhD students in the department. It clarified something for him he feels should have been obvious: all across the country, there were Black PhDs just like him who were struggling with isolation just like him, enduring microaggressions just like him and fighting the urge to quit, just like him.” A Nature news feature by Melba Newsome examines the problems of racial inequities in computer science programs. Also at Nature this week: an editorial feature by Melissa Nobles, Chad Womack, Ambroise Wonkam, and Elizabeth Wathuti that serves as a capstone to a theme issue addressing the larger problems of racism in the culture and practice of science.
- “The neighborhoods offered the worst deals had lower median incomes in nine out of 10 cities in the analysis. In two-thirds of the cities where The Markup had enough data to compare, the providers gave the worst offers to the least white neighborhoods….These providers also disproportionately gave the worst offers to formerly redlined areas in every one of the 22 cities examined where digitized historical maps were available. These are areas a since-disbanded agency created by the federal government in the 1930s had deemed ‘hazardous’ for financial institutions to invest in, often because the residents were Black or poor.” A story written for The Markup by Leon Yin and Aaron Sankin and co-published by the Associated Press details the racial and socioeconomic inequities that affect access to the internet in the US.
- “We quantified attrition dynamics of more than 1000 epidemiological estimates first reported in 100 preprints matched to their subsequent peer-reviewed journal publication. Point estimate values changed an average of 6% during review; the correlation between estimate values before and after review was high (0·99) and there was no systematic trend. Expert peer-review scores of preprint quality were not related to eventual publication in a peer-reviewed journal. Uncertainty was reduced during peer review, with CIs reducing by 7% on average.” A paper published in Lancet Global Health by Nelson and colleagues examines the degree to which reported results changed between preprint and final published versions of papers related to COVID topics.
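As a rough illustration of the comparison Nelson and colleagues performed, the sketch below pairs simulated preprint estimates with their “published” counterparts and summarizes the correlation, average relative change, and CI-width shrinkage. The numbers are simulated to echo the paper’s headline figures, not drawn from its data.

```python
# A minimal sketch (not the authors' pipeline) of a preprint-vs-published
# comparison: match each preprint estimate to its published version and
# summarize how much values and CI widths moved during peer review.
import numpy as np

rng = np.random.default_rng(1)
preprint = rng.lognormal(mean=0.0, sigma=0.8, size=500)           # e.g., risk ratios
published = preprint * rng.normal(1.0, 0.06, size=preprint.size)  # small shifts in review

# Correlation between matched estimates (the paper reports ~0.99)
r = np.corrcoef(preprint, published)[0, 1]
mean_abs_change = np.mean(np.abs(published - preprint) / preprint)

# Hypothetical CI widths before and after review, shrinking ~7% on average
ci_pre = preprint * rng.uniform(0.2, 0.6, preprint.size)
ci_post = ci_pre * rng.normal(0.93, 0.03, ci_pre.size)

print(f"correlation pre/post review: {r:.3f}")
print(f"mean absolute relative change in estimates: {mean_abs_change:.1%}")
print(f"mean CI-width change: {np.mean(ci_post / ci_pre) - 1:.1%}")
```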