AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

March 25, 2022

In today’s AI Health Friday Roundup: drone delivery for blood products; geometry, human cognition & AI; the FTC & “algorithmic disgorgement”; magpies: even smarter than we realized; revisiting data dashboards after 2 years of COVID; rethinking disability and the workplace; credit reporting companies’ new approach to medical debt; NIST publishes report on AI bias standards; brain implant allows “locked-in” person to communicate; much more:

Deep Breaths

A magpie (a bird with dark head, beak, back and tail plumage but white belly and sides) walking across a grassy area. Image credit: Rossano D’Angelo/Unsplash
  • “Magpies’ latest mischief has been to outwit the scientists who would study them. Scientists showed in a study published last month in the journal Australian Field Ornithology just how clever magpies really are and, in the process, revealed a highly unusual example in nature of birds helping one another without any apparent tangible benefit to themselves.” At the New York Times’ “Trilobites” feature, Anthony Ham reports on a recent study that provides further evidence, if any were needed, that magpies are really, really smart (and have an altruistic streak, to boot).

AI, STATISTICS & DATA SCIENCE

Abstract art consisting of colorful rectangular geometric shapes. Image credit: Daniele Levis Pelusi/Unsplash
  • “In tackling such questions, Josh Tenenbaum, a computational cognitive scientist at the Massachusetts Institute of Technology and an author of the new paper under review, likes to ask: How do we humans manage to extract so much from so little — so little data, time, energy?” A New York Times feature by Siobhan Roberts explores recent research into whether humans have a special affinity for grasping geometry – and whether that, in turn, sheds light on how cognition, human or artificial, works.
  • “They were talking about a little-used enforcement tool called algorithmic disgorgement, a penalty the agency can wield against companies that used deceptive data practices to build algorithmic systems like AI and machine-learning models. The punishment: They have to destroy ill-gotten data and the models built with it. But while privacy advocates and critics of excessive data collection are praising the concept in theory, in practice it could be anything but simple to implement.” Protocol’s Kate Kaye unpacks some of the potentially knotty consequences of a Federal Trade Commission enforcement mechanism known as “algorithmic disgorgement.”
  • A video narrated by professors Sandra Wachter and Sanjiv Das, available from Oxford’s Social Sciences website, describes a bias detection toolkit created at Oxford.
  • “Harmful impacts stemming from AI are not just at the individual or enterprise level, but are able to ripple into the broader society. The scale of damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries requires concerted effort…Trustworthy and Responsible AI is not just about whether a given AI system is biased, fair or ethical, but whether it does what is claimed….The importance of transparency, datasets, and test, evaluation, validation, and verification (TEVV) cannot be overstated.” A new “Special Publication” from the National Institute of Standards and Technology (NIST) seeks to establish the contours for standards that could be used to counter bias in AI applications.
  • “Debiasing large-scale pretrained models is not without its challenges, namely a lack of access to the original training data and the infeasible amount of compute required for retraining. For the successful and safe adoption of vision-language models by downstream users, we need both effective measures of bias as well as efficient methods of debiasing.” A preprint by Berg and colleagues, available at arXiv, describes a new framework for “de-biasing” AI systems that use both image recognition and natural language processing (H/T @arXiv_Daily).
  • “…while The Trevor Project has used open-source AI models including OpenAI’s GPT-2 and Google’s ALBERT, it does not use tools built with them to carry on conversations directly with troubled kids. Instead, the group has deployed those models to build tools it has used internally to train more than 1,000 volunteer crisis counselors, and to help triage calls and texts from people in order to prioritize higher-risk patients and connect them faster to real-life counselors.” Protocol’s Kate Kaye describes how LGBTQ+ advocacy group The Trevor Project uses natural language AI models to support more effective crisis counseling for troubled LGBTQ+ youth while avoiding some of the known problems that have cropped up with current NLP models.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Quadcopter-style drone hovers in midair with mountains, sky, and clouds in the background. Image credit: Alessio Soggetti/Unsplash
  • “…drone delivery led to faster delivery times and less blood component wastage in health facilities. Future studies should investigate if these improvements are cost-effective, and whether drone delivery might be effective for other pharmaceutical and health supplies that cannot be easily stored at remote facilities.” A research article published in Lancet Global Health by Nisingizwe and colleagues reports on the use of aerial drones to efficiently deliver blood products to remote locations in Rwanda.
  • “Familial correlation in DNA methylation increased with the time of twins living together and decreased with the time of twins living apart, consistent with cohabitation-related early-life environmental factors influencing the familial correlation change. In addition, the effects are stronger for methylation sites affected by genetic factors, and sites associated with various early-life exposures and late-life health conditions.” A study published in the Lancet’s eBioMedicine by Li and colleagues presents findings suggesting that early-life environmental factors shape DNA methylation in ways that can have lifelong ramifications.
  • “The pair’s new paper shows how a device implanted in the brain of a 34-year-old man with locked-in syndrome — on a ventilator, paralyzed, and unable to even move his eyes — could help him communicate in full sentences. Although brain-computer interface, or BCI, technology isn’t new, the case marks the first time it is known to have been used in a patient without any voluntary muscle control.” A STAT News article by Meghana Keshavan relays the remarkable story of controversial neuroscience researcher Niels Birbaumer, whose recently published work demonstrates the successful use of an implanted device to allow a “locked-in” person to communicate directly through a brain-computer interface.
  • “Stimulating adipose thermogenesis is an appealing strategy to counteract metabolic disruptions, such as obesity and type 2 diabetes, but has yet to yield a pharmacotherapy that is both safe and efficacious. However, deeper interrogation of heat-producing macronutrient pathways and the emerging roles of macronutrients in shaping thermogenic capacity offer the possibility of finally unlocking this distinct biology.” A perspective article in Science by Wolfrum and Gerhart-Hines explores whether pathways involved in adipose thermogenesis – a process by which specialized fat cells metabolize nutrients in response to a variety of stimuli – could be exploited to treat metabolic disorders.

COMMUNICATIONS & DIGITAL SOCIETY

Photograph of a computer monitor displaying a Johns Hopkins University COVID data dashboard displaying information circa 2020. Image credit: Clay Banks/Unsplash
  • “The architects of these dashboards put in long hours and faced considerable challenges, including incomplete and inconsistent data, misconceptions and misunderstandings about how the information was collected, and efforts to twist the messages that the dashboards present. As these data wranglers continue to try to inform individuals and public-health officials, they are learning lessons that will help to navigate the next stage of the pandemic, as well as other social and public-health issues — from crime to climate change.” A Nature feature article by Lynne Peeples takes a look back at the profusion of data “dashboard” displays that erupted in the wake of the COVID pandemic, and asks whether they are, on balance, doing what they set out to do.
  • “A dysfunctional information ecosystem may have accelerated the spread of COVID-19 myths and conspiracy theories but, as the thumbnail history of conspiracy thinking sketched above suggests, it did not directly cause them…. although the information ecosystem is undoubtedly an important influence on vaccine decision-making… focusing only on the information ecosystem can obscure the wider sociocultural, historic, institutional and political context.” A perspective article by Pertwee, Simas, and Larson published in Nature Medicine takes a look at the “COVID-19 infodemic” and the effects of medical misinformation some two years on in the pandemic.
  • “Proctor360 faces a proposed class action brought by a university student who claims that the online proctoring software collected his biometric information without his permission, in violation of Illinois’ Biometric Information Privacy Act.” An article (log-in required) by Bloomberg Law’s Samantha Hawkins reports on a proposed class-action suit filed against Proctor360, a widely used system for monitoring students taking exams online.

POLICY

Photograph of a sign showing wheelchair symbol and directions to “step-free route.” Image credit: Yomex Owo/Unsplash
  • “Before any meaningful movement can be made when it comes to the employment of people with disabilities — whether in the form of workplace accommodations, flexible work settings, recruitment practices, or limitations on earnings — the underlying assumption about the value of their presence in the workforce needs to change.” In a post for the Harvard Petrie-Flom Center’s Bill of Health blog, Brooke Ellison makes a case for a new understanding of disability and how that conception of disability relates to larger societal structures.
  • “The companies said the changes would eliminate up to 70 percent of the medical debt accounts on consumers’ credit reports, which contain reams of data used to calculate the all-important three-digit credit score that is the key to mortgages, car loans, rental agreements and more.” The New York Times’ Tara Siegel Bernard reports that the three major credit reporting companies are planning to remove most medical debt from consumers’ credit reports.
  • “Despite the many challenges rural hospitals experienced during the pandemic, our interviews revealed that rural hospitals’ small size and connectedness with their workforce and community gave them distinct advantages with respect to the speed of decision making and action, communication with their workforce, and flexibility.” Although rural hospitals have endured multiple setbacks in recent years, and the pandemic has greatly increased those stresses, the news is not necessarily all bleak, as a Health Affairs Forefront article by Atkinson and colleagues points out.
  • “…the legislation is aimed at stopping the largest tech platforms from using their interlocking services and considerable resources to box in users and squash emerging rivals, creating room for new entrants and fostering more competition.” The New York Times’ Adam Satariano reports on the European Union’s passage of the Digital Markets Act, a law with wide-ranging implications for how tech companies are allowed to collect and use individual consumers’ data for advertising and sales (H/T @eperakslis).