AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

July 7, 2023

In this week’s Duke AI Health Friday Roundup: the need for a global AI observatory; GPT-3’s medical tweets inform – and disinform – more effectively than human-written ones; Surgeon General tackles epidemic of loneliness; problems with recency bias in the NLP literature; ticks surf static charge to land on hosts; will scholarly publishing be able to cope with AI-generated content?; EHR data, bias, and pragmatic clinical trials; much more:

AI, STATISTICS & DATA SCIENCE

Row of large parabolic radio telescope dishes, all pointing in the same direction, against a background of twilit sky. Image credit: Gontran Isnard/Unsplash
  • “Future historians will wonder why so many powerful institutions and intelligent commentators have so dismally failed to generate plausible options. Inevitably, most commentary tries to squeeze the problem into familiar frameworks, whether seeing it as a problem of human or civil rights, copyright or competition law, privacy and data sovereignty, policing and security, or innovation-driven economic growth, with professional bodies wanting to emphasize training and accreditation. None have yet risen up to the scale of the challenge of managing a truly general-purpose technology that is already affecting many areas of daily life.” In an essay for Noema, Geoff Mulgan and Divya Siddarth make the case for a global AI “observatory” that can pull together the disparate threads of the many issues arising from the growing ubiquity of AI applications.
  • “Climate change, scarcity of resources, social, economic, and spatial inequalities and divides are too complex to define from a single perspective, too broad to contextualize within a sole geography, and too relevant to overlook in terms of their short- and long-term impacts. Artificial Intelligence has the potential to assist humankind with such problems by improving the efficiency and optimization of data-driven processes towards measurable outcomes.” In June, the Institute for Ethics in Artificial Intelligence released a research brief examining the use of AI in mobility systems as a solution to a number of “wicked problems” confronting modern society.
  • A recent research article, published in Science Advances by Spitale and colleagues, delivers a good-news-bad-news result. The researchers used the GPT-3 large language model to generate tweets on medical topics – some accurate, some deliberately inaccurate – and asked human raters to evaluate them: “Our findings show that tweets produced by GPT-3 can both inform and disinform better than organic tweets. Synthetic tweets containing reliable information are recognized as true better and faster than true organic tweets, while false synthetic tweets are recognized as false worse than false organic tweets. Moreover, GPT-3 does not perform better than humans in recognizing both information and disinformation.”
  • “…we know explanations make people more likely to follow bad advice. And we’ve known this for a long time in robotic systems. There’s really fantastic work by several prominent roboticists demonstrating this, especially in settings where people are under stress or believe that a system can mitigate some risk or has access to information that they don’t. I would argue that medicine checks a lot of these boxes, and so it’s not a setting where we want explainability that will turn off critical decision-making skills or engage automation bias.” MIT AI researcher Marzyeh Ghassemi is a guest on the NEJM AI Grand Rounds podcast, where she and host Raj Manrai discuss topics including explainable AI in healthcare settings – and why “explainable AI” is not a panacea for concerns about bias.
  • “Here we show the potential of accelerometry as a biomarker to screen for PD. We found that reduced acceleration manifests years before clinical PD diagnosis. This pre-diagnosis reduction in acceleration was unique to PD and was not observed for any other disorder examined. By comparing the predictive value of accelerometry with other modalities including genetics, lifestyle, blood biochemistry and prodromal symptoms, we found that no other data modality performed better in identifying future diagnosis of PD.” An article published in Nature Medicine by Schalkamp and colleagues describes the use of wearable devices equipped with accelerometers to flag signs of Parkinson disease before clinical manifestations become apparent (H/T @EricTopol).

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

A person, out of frame except for right arm, presses a hand against rain-streaked window glass. Through the window, out of focus, are the frames of other windows in what appears to be an apartment building. Image credit: Kristina Tripkovic/Unsplash
  • “Humans are wired for social connection, but we’ve become more isolated over time…Social connection is as essential to our long-term survival as food and water. But today, loneliness is more widespread than other major health issues in the U.S. Our epidemic of loneliness and isolation is a major public health concern.” An advisory from the U.S. Surgeon General seeks to raise awareness about the deleterious health effects of loneliness.
  • “The scientists demonstrate that the static electric fields naturally produced by animals (including humans) can physically yank the ungainly creatures onto their hosts. By electrically extending their reach, ticks may be able to grab hold of hosts more easily. While the finding may add to ticks’ terrifying attributes, this knowledge could also be used to improve antistatic tick defenses.” An article in the New York Times by Darren Incorvaia explains how ticks can surf static electrical charges to arrive on hosts without having made direct contact. Lovely.
  • “A recently published study in Current Biology finally pins down the source of this unparalleled photosynthetic efficiency, which has long baffled scientists. The new research found that some phytoplankton are equipped with an extra internal membrane that carries a ‘proton pump’ enzyme that supercharges their ability to convert carbon dioxide into other substances.” Quanta’s Saugat Bolakhe reports on recent research that demystifies a poorly understood process by which phytoplankton are able to benefit from a highly efficient form of photosynthesis.
  • “We identify 3 challenges – incomplete and variable capture of data on social determinants of health, lack of representation of vulnerable populations that do not access or receive treatment, and data loss due to variable use of technology – that exacerbate bias when working with EHR data and offer recommendations and examples of ways to actively mitigate bias.” A research article by Boyd and colleagues, published in the Journal of the American Medical Informatics Association, examines how embedded pragmatic clinical trials that draw on electronic health record data can be affected by bias.

COMMUNICATION, HEALTH EQUITY & POLICY

A crowd of people, photographed from the waist down and wearing galoshes and colorful plastic leg coverings, wade through calf-high flood water. Image credit: Jonathan Ford/Unsplash
  • “As the trickle of AI generated fake research grows into a flood, we must ask ourselves this: is scholarly publishing willing to do whatever it takes to act as a source of truth? To fight a constant battle to ensure that at least some published research is created by real humans in a real lab?…Given our abysmal progress with implementing measures like ORCID and Open Science, the answer is clearly ‘no’. We will be fatally undermined and we will fall.” The Scholarly Kitchen has posted the transcript of a debate among Rick Anderson, Tim Vines, and Jessica Miles on the future of AI and scholarly publishing, held at the most recent Society for Scholarly Publishing conference.
  • “To be sure, governing AI poses novel challenges. But the senator’s plan to hold ‘AI Insight Forums’ this fall for Congress to ‘lay down a new foundation for AI policy’ provides the opportunity to show that a foundation already exists and that a robust field of experts acting in the public interest — outside of the tech industry — have been working for years to build it. We need to draw on the broad expertise in AI policymaking both inside and outside of government.” An opinion piece by Janet Haven and Sorelle Friedler published in The Hill makes the case for embracing the substantial body of existing work on AI governance and ethics when attempting to craft legislation for AI applications.
  • “Papers may receive high citations for a number of reasons; and those that receive high citations are not necessarily model research papers. While they may have some aspects that are appreciated by the community (leading to high citations), they also have flaws. High-citation papers (by definition) are more visible to the broader research community and are likely to influence early researchers more. Thus their strong recency focus in citations is a cause of concern.” A Medium essay by Saif M. Mohammad explores the potentially problematic effects of “recency bias” in citations in the natural language processing (NLP) literature.
  • “New estimates released today from the Office of the Actuary (OACT) at the Centers for Medicare and Medicaid Services (CMS) and published online today in Health Affairs project a rate of national health spending growth of 4.3 percent for 2022, with expenditures projected to have reached $4.4 trillion. Health spending over the course of 2022–31 is expected to grow 5.4 percent per year on average.” A post on the Health Affairs Forefront blog summarizes recently released projections of national health spending from the Centers for Medicare and Medicaid Services.