AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

July 26, 2024

In this week’s Duke AI Health Friday Roundup: cat moves to top of pet H-index leaderboard; AI predicts rogue waves at sea; comparing preventive strategies for HIV; AI-assisted MRI scans for breast cancer; adding context for clinical trial demographics; pursuing equity in AI; checking back in on AI self-regulation, one year later; learning to spot predatory research conferences; caution needed when using point-and-click sample size generators; much more:

AI, STATISTICS & DATA SCIENCE

Image credit: Luis Ascenso/Wikimedia Commons, CC BY 2.0
  • “Using roughly 16 million data points collected at half-hour intervals by a network of 172 ocean buoys, Breunung and Balachandran trained an AI program to distinguish wave patterns that preceded rogue waves. The program predicted 3 in 4 rogue wave arrivals at buoys in the network one minute in advance. When the lead time was extended to five minutes, around 7 in 10 waves were predicted….Notably, the program anticipated rogue waves roughly as well at locations where it had received no training data.” At Science News, Nikk Ogasa reports on recent work that uses an AI model to predict the possibility of extreme (and dangerous) “rogue waves” that can emerge at sea when conditions are right.
  • “Published sample size calculations that use G*Power are not transparently reported and may not be well-informed. Given the popularity of software packages like G*Power, they present an intervention point to increase the prevalence of informative sample size calculations.” A preprint from Thibault and colleagues, available from medRxiv, reports on a study that evaluated scientific quality and transparency of articles that incorporated a point-and-click sample size calculator as part of study design.
  • “Learning from that data is what allows generative A.I. tools like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude to write, code and generate images and videos. The more high-quality data is fed into these models, the better their outputs generally are….For years, A.I. developers were able to gather data fairly easily. But the generative A.I. boom of the past few years has led to tensions with the owners of that data — many of whom have misgivings about being used as A.I. training fodder, or at least want to be paid for it.” The New York Times’ Kevin Roose reports on the growing pinch affecting AI companies as sources of real-world training data are rapidly being exhausted.
  • “The voluntary commitments came at a time when generative AI mania was perhaps at its frothiest, with companies racing to launch their own models and make them bigger and better than their competitors’. At the same time, we started to see developments such as fights over copyright and deepfakes. A vocal lobby of influential tech players, such as Geoffrey Hinton, had also raised concerns that AI could pose an existential risk to humanity. Suddenly, everyone was talking about the urgent need to make AI safe, and regulators everywhere were under pressure to do something about it.” At MIT Technology Review, Melissa Heikkilä revisits promises made a year ago by AI companies who vowed to self-regulate in the face of growing concerns about the potential risks of AI technologies.
  • “Compared with traditional breast density measures used in a previous clinical trial, the current AI method was nearly four times more efficient in terms of cancers detected per 1,000 MRI examinations (64 versus 16.5). Most additional cancers detected were invasive and several were multifocal, suggesting that their detection was timely. Altogether, our results show that using an AI-based score to select a small proportion (6.9%) of individuals for supplemental MRI after negative mammography detects many missed cancers, making the cost per cancer detected comparable with screening mammography.” A research article by Salim and colleagues, published in Nature Medicine, describes the use of an AI model to facilitate cost-effective supplemental cancer screening with magnetic resonance imaging.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image credit: Mauro Mora/Unsplash
  • “We propose that clinical trial registries or protocols should mandate estimates of disease burden alongside sample size calculations that correspond to the regions and periods of participant recruitment. Such estimates, if included a priori, introduce accountability on the part of the investigators to enrol demographic groups proportional to the real-world disease burden since these estimates can be referenced against the final trial participant sample. Indeed, for clinical trial appraisers, considerable deviation from a priori demographic estimates might suggest selection bias and reduced study generalizability.” A letter published in the Lancet by Tao and colleagues recommends adding descriptions of background disease burden when presenting statistics about specific populations or subpopulations of clinical trial participants.
  • “In 1988, I became one of the first U.S. physicians certified in the new specialty of geriatric medicine, which focuses on the health care of older adults. As an idealistic and optimistic 32-year-old geriatrician, I believed that this branch of medicine would undoubtedly emerge as a vibrant field of medicine, benefiting patients and society. I was also confident that when I reached older adulthood, the health care system would be ready to care for me.” In an opinion article for STAT News (log-in required), geriatrician Jerry H. Gurwitz spotlights the looming shortage of clinicians prepared to care for the specific needs of a surge of aging patients.
  • “We conducted a phase 3, double-blind, randomized, controlled trial involving adolescent girls and young women in South Africa and Uganda. Participants were assigned in a 2:2:1 ratio to receive subcutaneous lenacapavir every 26 weeks, daily oral emtricitabine–tenofovir alafenamide (F/TAF), or daily oral emtricitabine–tenofovir disoproxil fumarate (F/TDF; active control); all participants also received the alternate subcutaneous or oral placebo….No participants receiving twice-yearly lenacapavir acquired HIV infection. HIV incidence with lenacapavir was significantly lower than background HIV incidence and HIV incidence with F/TDF.” A research article by Bekker and colleagues, published in the New England Journal of Medicine, presents results from a randomized trial comparing three prophylactic strategies for preventing HIV infection.
  • “Jacklyn Gates, leader of the Heavy Element Group at the Berkeley lab, says that chemists are particularly excited about the next set of elements, because they will fall in a new row of the periodic table. Elements 119 and 120 will be the first documented from the eighth ‘period’. In this row, scientists expect to find atoms with so-far unseen electron configurations, or orbitals. Chemists are excited about the potential to observe g orbitals, says Gates, which will provide ‘an entirely new set of orbitals to play around with and explore the chemistry of’”. Elementary! Nature’s Katherine Bourzac reports on recent progress in the ability to synthesize new superheavy elements.

COMMUNICATION, HEALTH EQUITY & POLICY

Image credit: Dimitry B/Unsplash
  • “Of course, this isn’t about making a cat a highly cited researcher. Our efforts (about an hour of non-automated work) were to make the same point as the authors of this aptly titled pre-print: Google Scholar is manipulatable. Despite the conspicuous vulnerabilities of Google Scholar (and ResearchGate), the quantitative metrics calculated by these services are routinely used to evaluate scientists.” A blog post by Reese Richardson explores the ease with which you can boost your pet’s H-index – and why that’s a problem for academic science as a whole.
  • “…a major issue in the AI space is that people designing these systems often have limited knowledge and input from experts on social systems. We need social scientists in the tech pipeline, and we need to pay them like we pay engineers. When I think about recent failures of generative AI and their biases in representing people, I believe that people with a grasp of sociological factors underlying biases would be best equipped to address them in these systems. Unfortunately, all we have now are superficial fixes.” At Mozilla’s Insights blog, Kenrya Rankin interviews the Algorithmic Justice League’s Randi Williams on issues of racial justice in artificial intelligence.
  • “A junior doctor at a prestigious London hospital was especially troubled. Her name and signature appeared on the certificates of attendance handed out to the other attendees, without her approval, at the event that Loren attended in March: it seemed to the doctor that the organizers were using her institute’s renown to add a sheen of respectability to the conference. She was later accused by an angry attendee of being part of the fraud…” Nature’s Christine Ro reports on an investigation into the shady scientific demimonde of predatory research conferences.