AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

September 29, 2023

In this week’s Duke AI Health Friday Roundup: lighting an (s)Beacon for genomic data; randomized trials for clinical AI; bees exhibit signs of sentience; scrutiny of AI chip design paper grows; the complexities of statistics vs. AI in medicine; deep brain stimulation for severe depression; worries about AI that sounds too human; tackling clinical conversations with GPT-4; YouTube disinformation videos being served to kids as STEM educational material; much more:

AI, STATISTICS & DATA SCIENCE

Low angle photograph of an illuminated masonry lighthouse against the background of a starry night sky. Image credit: Nathan Jennings/Unsplash
  • “Under the criteria of patient empowerment, complexity of clinical queries, scalability and ease of adoption, we developed ‘serverless Beacon’ (sBeacon)…a cloud-native approach to data exchange and analytics that incorporates Ontoserver, the Health Level 7 Fast Healthcare Interoperability Resources (HL7 FHIR)-based terminology service that has been adopted by digital health agencies around the world.” In an article published in Nature Biotechnology, Wickramarachchi and colleagues debut sBeacon, a new architecture for genomic data exchange.
  • “One of the deficiencies of the field has been the lack of compelling data to demonstrate unequivocal benefit and the tradeoff of risks for AI in medicine, which can be established through prospective clinical trials. This week we posted a preprint review of the 84 randomized controlled trials (RCTs) in medical practice to date (through August 2023), which represents far more progress than has been generally appreciated.” At his Ground Truths blog, Eric Topol surveys some of the recent developments in medical AI, including a preprint review article of randomized clinical trials examining clinical AI applications (H/T @AI_4_Healthcare).
  • “Converting static exam-style case vignettes into conversational interactions significantly reduced diagnostic accuracy for both models. Recent studies which show that LLMs like GPT-4 and GPT-3.5 can achieve high accuracy on medical cases…may present an overly optimistic outlook, overlooking the nuanced challenges associated with dynamic, medical conversations as opposed to static, clearly defined questions.” A preprint article by Johri and colleagues, available from medRxiv, presents a framework for assessing large language models used in medical applications such as diagnosing conditions based on clinical vignettes or on patients’ descriptions in simulated conversations.
  • “In the 21st century, artificial intelligence (AI) has emerged as a valuable approach in data science and a growing influence in medical research, with an accelerating pace of innovation. This development is driven, in part, by the enormous expansion in computer power and data availability. However, the very features that make AI such a valuable additional tool for data analysis are the same ones that make it vulnerable from a statistical perspective. This paradox is particularly pertinent for medical science.” A review article by Hunter and Holmes, published this week in the New England Journal of Medicine, explores the nuances at the border where medical statistical methods meet artificial intelligence applications (H/T @EricTopol).
  • “AlphaFold and similar programs, such as RoseTTAFold…promise to shake up the pharmaceutical industry further because the structures of many human proteins had been lacking, making it difficult to find treatments for some diseases. The programmes have become so good at predicting 3D protein shapes that of the 200 million protein structures deposited into a database last year, the European Molecular Biology Laboratory’s European Bioinformatics Institute deemed 35% to be highly accurate — as good as experimentally determined structures — and another 45% accurate enough for some applications.” In a news article for Nature, Carrie Arnold looks at protein-predicting AI AlphaFold’s prospects as an engine of drug discovery.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

A bumblebee flying away from a purple coneflower with its back to the camera. Image credit: Carolien van Oijen/Unsplash
  • “Previous research has shown honey bees and bumble bees are intelligent, innovative creatures. They understand the concept of zero, can do simple math, and distinguish among human faces (and probably bee faces, too). They’re usually optimistic when successfully foraging, but can become depressed if momentarily trapped by a predatory spider.” Pause before you swat: Science’s Virginia Morell reports on recent research suggesting that bees (and potentially other insects) may be sentient.
  • “Poor crystallinity of cadmium yellow was also believed to be partially responsible for the degradation observed in older artworks by Picasso, Matisse and other artists. (Environmental conditions, particularly humidity and temperature, have also been shown to play a role.) But these new results highlight the fact that this problem persisted well into the middle of the 20th century, which the researchers found surprising.” The New York Times’ Katherine Kornei digs into the chemical culprits behind the fading yellow hues in paintings by Joan Miró and other 20th century giants.
  • Some potentially good news on the long COVID front: a new population cohort study published in Clinical Infectious Diseases by Andersson and colleagues did not find evidence that having had severe acute COVID resulted in greater susceptibility to infectious diseases requiring hospitalization.
  • “During the surgery and in the days after, doctors sent small pulses of electricity into Jon’s brain. In ways that are still unclear, this electrical tinkering changes the messages that move between different brain regions. The doctors and researchers had what seems like a bold goal: They wanted these pulses to pull Jon out of the darkness of depression.” The first installment in a six-part series by Science News’ Laura Sanders explores the use of an experimental treatment – deep brain stimulation – to treat severe depression.

COMMUNICATION, HEALTH EQUITY & POLICY

Three colorful GPUs with their packaging cleanly removed laying on a white surface. Image credit: Fritzchens Fritz / Better Images of AI / CC-BY 4.0
  • “Google’s paper, ‘A graph placement methodology for fast chip design,’ has been embroiled in controversy since it was published in 2021. It has been cited well over 100 times, but critics argue the article didn’t include enough detail to allow others to vet the findings and say Nature’s decision to publish it was a mistake.” Retraction Watch reports on critical scrutiny of a paper that advanced the claim that an AI agent was able to design microchips in a fraction of the time required by human teams.
  • “Beyond plagiarism, AI tools raise all kinds of issues (bias, no guarantee of accuracy, etc.) that the academic community needs to better understand. ‘ChatGPT confidently spouts nonsense and makes up references; it’s not very good at solving problems in philosophy or advanced physics. You can’t use it with your eyes closed,’ warned Bruno Poellhuber, a professor in the department of psychopedagogy and andragogy at the Université de Montréal…More training is needed to help professors and students understand both the potential and drawbacks of these technologies.” In an article for Quebec’s University Affairs magazine, Catherine Couturier reports on the spreading ripples created by AI’s arrival on university campuses.
  • “Investigative BBC journalists working in a team that analyses disinformation…found more than 50 channels in more than 20 languages spreading disinformation disguised as STEM content. These include pseudo-science – that’s presenting information as scientific fact that is not based on proper scientific methods, plus outright false information and conspiracy theories…Examples of conspiracy theories are the existence of electricity-producing pyramids, the denial of human-caused climate change and the existence of aliens. Our analysis shows YouTube recommends these ‘bad science’ videos to children alongside legitimate educational content.” A BBC investigation finds that disinformation on YouTube is being served to kids under the guise of “educational material.”
  • “Venkatasubramanian worries that the race to replace humans with human-like AI in customer-facing workflows will deepen the digital divide to access critical services. ‘We’ll see more and more rollout of tools in places where we take away human involvement, because it looks like these tools can act like humans. But they really can’t. And they’ll just make everything a lot more difficult to navigate … Those who are more adept at navigating these tools and working with them will succeed. Those who don’t, won’t,’ he said.” An article by Mohar Chatterjee in Politico (which includes remarks from AI Bill of Rights Blueprint coauthor Suresh Venkatasubramanian) reports on a groundswell of alarm about AI chatbots that seem a little too human.