AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

September 8, 2023

In this week’s Duke AI Health Friday Roundup: biased data offers window onto health equity issues; cancer therapeutics eye AI for drug discovery; testing machines with human exams; unveiling the “hidden curriculum” in medical education; once-vaunted telehealth startup collapses; eye movements combine with other data for early autism diagnosis; government seeks public input on AI and copyright; overemphasis on technology during COVID shutdown may have worsened education inequities; much more:


Picture, taken from below, showing a skylight window opening on to blue sky with wispy white clouds.
Image credit: Dylan Ferreira/Unsplash
  • “Viewing biased clinical data as artifacts can identify values, practices, and patterns of inequity in medicine and health care. Examining clinical data as artifacts can also provide alternatives to current methods of medical AI development. Moreover, this framing of data as artifacts expands the approach to fixing biased AI from a narrowly technical view to a sociotechnical perspective that considers historical and current social contexts as key factors in addressing bias.” A review article by Ferryman and colleagues, published in the New England Journal of Medicine, examines the use of biased clinical data as a window into healthcare inequities.
  • “What the Luddites wanted to do was to stop the machinery that was very specifically exploiting them or being used as leverage against them to reduce their quality of life, to cut their wages, to force them into factories….So, the Luddites protest against machinery, again against very specific kinds of machinery. They were technologists; they loved technology themselves in many, many cases. But they had an issue [with] specific machinery being used in specific ways…” A podcast conversation (with transcript available) between Scientific American’s Sophie Bushwick and LA Times columnist Brian Merchant re-examines the Luddite movement in light of the burgeoning AI industry.
  • “One of Wenger’s projects called Fawkes places a sort of filter or film on top of people’s face images on social media that prevents AI from using them successfully in its training routines. Another, called Glaze, protects artists’ works from a similar fate. Both programs are freely available to the public, with Fawkes alone having been downloaded more than a half-million times.” An article by Duke University’s Ken Kingery, available from the Pratt School of Engineering website, profiles the work of Emily Wenger, a researcher exploring data privacy measures in the context of AI and quantum computing.
  • “These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create….But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit.” At MIT Technology Review, Will Douglas Heaven dives into the controversy over what to make of the performance of large language models when confronted with tests designed to evaluate human knowledge, skills, and problem-solving abilities.
  • …and speaking of which: “GPT-4 and Bing demonstrated similar performance levels and significantly outperformed GPT-3.5-Turbo on the set of 630 questions, which included those with image components that are inaccessible to the models. GPT-4 and Bing consistently outperformed students in both the spring and fall 2022 examinations. GPT-3.5-Turbo’s performance was comparable to that of students in the spring 2022 examination but was significantly lower in the fall 2022 examination.” A research article published in JMIR Medical Education by Roos and colleagues compares the performance of different large language models and AI-assisted search engines with that of human medical students on medical examinations in Germany. (Notably, the article includes a notice that GPT-4 and Grammarly were used to “improve the language of the manuscript and correct grammatical errors.”)
  • “A clinician in the U.K. had for years been raising alarm bells about the company’s practices regarding patient safety and corporate governance. Some of that came to a head in 2021, when it emerged that a U.K. medical regulator had also been sharing and agreeing with the concerns for some time….None of that had impacted business development for Babylon just yet….But eventually the house of cards crashed. By 2022, the company was starting to reel after losing major contracts in its home market…” TechCrunch’s Ingrid Lunden surveys the wreckage of Babylon, as the high-profile telehealth company, once valued in the billions, enters bankruptcy.


Laboratory glassware and pipettes, some with liquid in them, others empty, sitting on a laboratory bench. A laboratory notebook is in the background, out of focus.
Image credit: Hans Reniers/Unsplash
  • “Schneider sees early-stage discovery as the current sweet spot. AI is a powerful tool for crunching vast amounts of data to identify genes and proteins linked to specific disease states and to home in on chemical compounds that can effectively modulate those targets. Armed with generative models like those used in tools such as ChatGPT or the image-creating software DALL-E, algorithms can conjure up novel chemical architectures for drugs that fall outside existing compound libraries but are still realistic to synthesize.” Nature’s Michael Eisenstein examines new methods – including ones that rely on AI-powered drug discovery – being applied to cancer therapeutics.
  • “The study by Jones et al represents a significant step forward toward developing more objective tools for early diagnosis of autism. The intended use of the eye-tracking test is to aid clinicians in making an autism diagnosis in young children who have been referred to a specialty clinic for evaluation. By integrating multiple sources of information—including the eye-tracking test, parent report, and clinical observations—the accuracy, certainty, and efficiency of autism diagnostic assessment could potentially be improved, resulting in fewer missed cases and allowing more children to receive empirically validated early therapies from which they could benefit…” An editorial in JAMA by Duke autism researcher Geraldine Dawson considers a newly published article by Jones and colleagues that reports results from a randomized trial of eye-movement tracking to identify autism in young children.
  • “[We will be] seeing a rise of about 250 000 deaths each year, according to the WHO, that are coming directly from climate change. That’s just death, not including all the morbidity and the suffering that we’re seeing from people who are getting sick and who can’t participate in their lives or their livelihood in the way they want to. The health impacts are both direct and indirect, and they’re creating a real and vicious cycle that is going to impact people’s access to thrive and to earn money and will drive a lot of people into poverty.” JAMA’s Jennifer Abbasi interviews Harvard physician and WHO Special Envoy for Climate Change and Health Vanessa Kerry.
  • “Confusion about the causes of obesity has arisen based on the false dichotomy of genes versus environment (rather than the combined effects of genes and environment). At any point in time, most of the variance in levels of obesity among individuals may be genetic. But, changes across time are predominantly driven by the environment. Which individuals deposit the most fat in response to environmental change is influenced by both.” A perspective article published in Science by Speakman and colleagues sketches the borders of the known and unknown in the physiology of obesity.

Communication, Health Equity & Policy

A small key half-hidden among leaf litter and pine needles on the forest floor.
Image credit: Michael Dziedzic/Unsplash
  • “…until recently, the idea of a hidden curriculum hadn’t been applied to the world of academic healthcare research. Two years ago, Dr. Enders organized a workshop for more than 100 biomedical research faculty and graduate students at the national meeting of the Association for Clinical and Translational Science. The idea of a hidden curriculum was new to many, but attendees shared that they’d struggled to figure out how to advance at every stage of training.” At Discovery’s Edge, Kate Ledger profiles researchers working to shine a spotlight on the “hidden curriculum” that can be essential for success in academia – but which is not always equally accessible to everyone.
  • “If we start thinking of the use of AI beyond the publishing workflow, it can also help us to synergize many publishing activities around sustainability, justice, resilience, or really any other lens we choose….Can we expect AI to be used in such a way that it acts as an ‘aggregator’ of what we have individually achieved so far, as an ‘accelerator’ to lead us to the pace we want to reach, and as an ‘appraiser’ of our progress, thus becoming a vital element of our solidarity?” In a guest article for Scholarly Kitchen, Haseeb Irfanullah looks at potential applications for AI in scholarly publishing that go beyond the obvious.
  • “The UNESCO researchers argued in the report that “unprecedented” dependence on technology…worsened disparities and learning loss for hundreds of millions of students around the world, including in Kenya, Brazil, Britain and the United States….The promotion of remote online learning as the primary solution for pandemic schooling also hindered public discussion of more equitable, lower-tech alternatives, such as regularly providing schoolwork packets for every student, delivering school lessons by radio or television — and reopening schools sooner for in-person classes, the researchers said.” The New York Times’ Natasha Singer reports on a new UNESCO study that finds that half-baked, tech-heavy approaches to remote learning during the 2020 COVID shutdowns of schools may have had major deleterious effects on global equity in education.
  • “As announced in the Federal Register, the agency wants to answer three main questions: how AI models should use copyrighted data in training; whether AI-generated material can be copyrighted even without a human involved; and how copyright liability would work with AI. It also wants comments around AI possibly violating publicity rights but noted these are not technically copyright issues.” The Verge’s Emilia David reports that the US Copyright Office is seeking public input on issues related to AI and copyright law.