AI Health Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
June 16, 2023
In this week’s Duke AI Health Friday Roundup: transformer models can tackle a myriad of predictive clinical applications; the surprising health burdens of noise; doctors, scientists receiving elevated levels of online harassment; large quantities of LLM-generated text swamp Amazon’s mTurk platform; helping kids navigate the AI landscape; digital media survey shows shifts toward image- and video-centric social media; how AI image generators can supercharge existing societal bias; much more:
AI, STATISTICS & DATA SCIENCE
- “Here we show that unstructured clinical notes from the electronic health record can enable the training of clinical language models, which can be used as all-purpose clinical predictive engines with low-resistance development and deployment. Our approach leverages recent advances in natural language processing…to train a large language model for medical language (NYUTron) and subsequently fine-tune it across a wide range of clinical and operational predictive tasks.” A research article by Jiang and colleagues, published in Nature, describes the use of a large language model AI for multiple clinical applications. (With some additional discussion from Isaac Kohane at his blog; a brief illustrative code sketch of this kind of fine-tuning appears after this list.)
- “Because it simultaneously amplifies both gender and racial stereotypes, Stable Diffusion tends to produce its most skewed representations of reality when it comes to women with darker skin. This demographic made up the majority of images generated for ‘social worker,’ ‘fast-food worker’ and ‘dishwasher.’ Of all the higher-paying occupations in our analysis, ‘judge’ was the only one that featured more than a single image of a woman with the darkest skin type.” A deep dive by Bloomberg’s Leonardo Nicoletti and Dina Bass into how AI-powered image generators work reveals the influence of societal-level bias – as well as how AI can reinforce that bias further.
- “A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: ‘We find that use of model-generated content in training causes irreversible defects in the resulting models.’” An article by VentureBeat’s Carl Franzen addresses the problems inherent to allowing large language model AIs to be trained with AI-generated output. Although this is not a new problem for the field, a recent preprint paper by Veselovsky and colleagues showing that Amazon’s mTurk crowdsourcing platform is rife with LLM-generated text content adds some plangency to the story.
- “…there’s a lot health systems don’t know about how AI can impact care. That includes how big a risk a phenomenon known as automation bias — when providers are so used to AI being accurate that they miss when it makes mistakes — poses to patients.” An article at STAT News by Mohana Ravindranath plumbs the uncertainty some hospitals and health systems are encountering as they decide how (or whether) to integrate AI applications into patient interactions.
- “Goals of the RAISE-Health initiative include enhancing clinical care outcomes through responsible integration of AI; accelerating research to solve the biggest challenges in health and medicine; and educating patients, care providers and researchers to navigate AI advances.” Stanford University has announced the creation of its new RAISE-Health initiative, focused on AI ethics and safety, that represents a collaboration between Stanford Medicine and the Stanford Human-Centered Artificial Intelligence (HAI) group.
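For readers curious about the general mechanics behind approaches like the one Jiang and colleagues describe, the minimal Python sketch below shows one hypothetical way to fine-tune a pretrained clinical language model for a binary prediction task (such as hospital readmission) using the Hugging Face transformers library. The stand-in model (emilyalsentzer/Bio_ClinicalBERT), the task label, and the toy example notes are illustrative assumptions, not the authors’ actual NYUTron code or data.

```python
# Hypothetical sketch: fine-tuning a pretrained clinical language model as a
# predictive engine for a binary outcome derived from clinical notes.
# This is NOT the NYUTron code; model, task, and data are stand-ins.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class NoteDataset(Dataset):
    """Wraps (note text, label) pairs; labels are 0/1 outcomes (e.g., readmission)."""
    def __init__(self, texts, labels, tokenizer, max_length=512):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_length, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# A publicly available clinical BERT model serves as a stand-in for a
# purpose-built pretrained model like NYUTron.
model_name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy in-memory examples; in practice these would be de-identified EHR notes.
train_ds = NoteDataset(
    ["Patient admitted with CHF exacerbation, discharged on diuretics...",
     "Routine post-operative course, no complications noted..."],
    [1, 0],
    tokenizer)

args = TrainingArguments(output_dir="finetuned_clinical_lm",
                         per_device_train_batch_size=2,
                         num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

In the framing quoted above, the same pretrained language model would be fine-tuned separately for each downstream clinical or operational prediction task.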
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “We’ve all been told to limit the volume on our headphones to protect our hearing. But it is the relentless din of daily life in some places that can have lasting effects throughout the body. Anyone who lives in a noisy environment, like the neighborhoods near this Brooklyn highway, may feel they have adapted to the cacophony. But data shows the opposite: Prior noise exposure primes the body to overreact, amplifying the negative effects.” A multimedia feature article by the New York Times’ Emily Baumgaertner, Jason Kao, Eleanor Lutz, Josephine Sedgwick, Rumsey Taylor, Noah Throop and Josh Williams explores the surprisingly extensive (and serious) health burdens imposed by exposure to environmental noise.
- “…the clinical research workforce is required to meet rising expectations for quality, safety, speed, study complexity, novel technologies, and diversity—all amid rapidly shrinking human resources and lack of professional infrastructure.” A viewpoint article by Freel and colleagues, published in the journal Clinical Trials, describes a coming crisis affecting the workforce for clinical research and identifies some possible solutions.
- “Physicists have been experimenting with a range of hardware for building quantum computers, including traps for individual ions or neutral atoms. IBM’s approach — which is also used by Google and other companies — encodes each qubit in a tiny superconducting circuit. For quantum computers to be effective, the qubits have to keep their quantum state for long enough for a calculation to be carried out.” Nature’s David Castelvecchi reports on the achievement of a major milestone on the path to practical applications for quantum computing.
- “Physicians and biomedical scientists experience high levels of harassment online, a problem that appears to have been worse during the COVID-19 pandemic… Social media plays a role in disseminating medical and scientific knowledge to the public; however, high levels of reported harassment may lead more physicians and scientists to limit the way they use social media, thus leaving propagation of misinformation unchecked by those most qualified to combat it.” A research letter by Royan and colleagues, published in JAMA Network Open, revisits the experience of physicians and scientists with online harassment during the COVID-19 pandemic.
COMMUNICATION, HEALTH EQUITY & POLICY
- “As schools ban, unban, and try to figure out how to best use generative AI, will zero-tolerance policies be weaponized against vulnerable students? Who will be scrutinized and policed, and who—like my son—will be encouraged to experiment and gain AI literacy? We need to think hard about these dynamics, especially in schools with preexisting enforcement cultures.” At the Markup, Nabiha Syed ponders some disquieting questions prompted by the prospect of raising children amid ubiquitous AI.
- “Putting it bluntly: if we have the right regulation; things could go well. If we have the wrong regulation, things could go badly. If big tech writes the rules, without outside input, we are unlikely to wind up with the right rules.” In a Substack article, Gary Marcus worries that the wrong approach to regulating AI could result in regulatory capture by Big Tech firms.
- “…the 2017 paper attracted immediate and sustained scrutiny from other experts, one of whom attempted to replicate it and found a key problem. Nothing happened until this April, when the authors admitted the work was flawed and retracted their article. By then, it had been cited 134 times in the scientific literature…and received so much attention online that the article ranks in the top 5% of all the research tracked by Altmetric.” At STAT News, Ellie Kincaid explores how a sensational paper, now retracted, that described an AI-powered model for detecting suicide risk from brain scans made it to publication despite reviewers flagging numerous problems.
- “Perhaps the most striking findings in this year’s report relate to the changing nature of social media, partly characterised by declining engagement with traditional networks such as Facebook and the rise of TikTok and a range of other video-led networks. Yet despite this growing fragmentation of channels, and despite evidence that public disquiet about misinformation and algorithms is at near record highs, our dependence on these intermediaries continues to grow.” The joint Reuters Institute – Oxford University Digital News Report for 2023 has been published, and among the many takeaways are some shifts in which social media platforms are dominating attention and serving news to readers.
- “Six years ago I was diagnosed with ALS (also known as Motor Neurone Disease). Shortly after that, knowing I had an 80% chance of dying within 5 years, I enrolled in a phase 3 clinical trial for a promising experimental treatment. For me, the clinical trial was about trying to extend my life. But by the time the trial concluded 11 months later, I found myself thoroughly enamored with being part of the research process.” A guest post at Scholarly Kitchen by Wiley vice president Bruce Rosenblum shares the author’s experience – and unusual perspective – of volunteering to participate in clinical research after being diagnosed with ALS.