AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

February 24, 2023

In today’s Duke AI Health Friday Roundup: Bing AI chatbot’s churlishness surprises, alarms users; RCT of high-dose ivermectin for COVID shows no benefit for symptom length, hospitalization; “style cloaks” for art confound generative AIs; vascular surgery practices at Kansas VA draw scrutiny; data brokers are trafficking in sensitive health data; a skeptical perspective on chatbots’ prospects in education; 8 days a week needed to comply with clinical practice guidelines; much more:

AI, STATISTICS & DATA SCIENCE

Closeup photograph of vaguely threatening tin robot toy with gritted teeth, staring directly at the viewer.
Image credit: Rock’n Roll Monkey/Unsplash
  • “In one long-running conversation with The Associated Press, the new chatbot complained of past news coverage of its mistakes, adamantly denied those errors and threatened to expose the reporter for spreading alleged falsehoods about Bing’s abilities. It grew increasingly hostile when asked to explain itself, eventually comparing the reporter to dictators Hitler, Pol Pot and Stalin and claiming to have evidence tying the reporter to a 1990s murder.” Microsoft’s integration of a large language model with its Bing search engine has been raising eyebrows following its recent debut to US audiences. The Associated Press’ Matt O’Brien has the story. And at his Substack, Gary Marcus eyes reports that the AI-powered Bing, dubbed Sydney, had already been returning some hair-raising responses for weeks prior to its US debut, when it was released for use in India.
  • “But what that number tells me is the seriousness of this moment and how the U.S. has an opportunity to be the global steward for AI and ensure its development in a responsible manner that upholds human values. If we consider how one part of the CHIPS Act was to deny an authoritarian regime access to this technology, a logical additional step is to ensure broad, noncommercial access to that same technology to innovate, expand our talent pool, and compete globally.” In a blog interview at the Stanford Institute for Human-Centered Artificial Intelligence, HAI’s Russell Wald and Jennifer King discuss the details of the recently finalized report from the National AI Research Resource Task Force.
  • “We found that an impute-then-exclude strategy using substantive model compatible fully conditional specification tended to have superior performance across 72 different scenarios. We illustrated the application of these methods using empirical data on patients hospitalized with heart failure when heart failure subtype was used for cohort creation (excluding subjects with heart failure with preserved ejection fraction) and was also an exposure in the analysis model.” A research article by Austin and colleagues, published this week in Statistics in Medicine, compares strategies for handling missing data when the incomplete variable serves both as an inclusion criterion and as an exposure in the analysis model (H/T @F2Harrell). A schematic contrast of the two orderings appears after this list.
  • “These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic a specific artist…. Both surveyed artists and empirical CLIP-based scores show that even at low perturbation levels (p=0.05), Glaze is highly successful at disrupting mimicry under normal conditions (>92%) and against adaptive countermeasures (>85%).” A preprint by Shan and colleagues, available at arXiv, describes Glaze, a tool that protects online art from mimicry by generative image AIs by applying “style cloaks.” A toy sketch of the underlying idea appears after this list.
  • “Over the course of the next month and a half, 106 teams registered for the challenge to demonstrate their accomplishments. The 537 team members came from a wide variety of disciplines, including biochemistry, clinical research, genomics, immunology, molecular biology, neuroscience, and more. Community engagement was a key component of the challenge. FASEB awarded two ‘People’s Choice’ awards based on more than 2,150 people [who] selected their favorite teams through crowd voting.” The National Institutes of Health’s Office of Data Science Strategy celebrates the winners of the inaugural DataWorks! Prize, awarded to recognize excellence in data sharing and reuse.
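For readers trying to keep the competing strategies straight, here is a minimal Python sketch contrasting the two orderings that Austin and colleagues compare. It is illustrative only: the simulated cohort and variable names are hypothetical, and a generic iterative imputer stands in for the paper’s substantive-model-compatible fully conditional specification (SMC-FCS), which is implemented in the R package smcfcs rather than reproduced here.

```python
# Toy contrast of exclude-then-impute vs. impute-then-exclude when a
# partially missing variable ("subtype") is both an eligibility
# criterion and an exposure. Not the authors' code.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 1_000
age = rng.normal(70, 10, n)
subtype = (rng.random(n) < 0.4).astype(float)   # hypothetical HF subtype
outcome = 0.02 * age + 0.5 * subtype + rng.normal(0, 1, n)
df = pd.DataFrame({"age": age, "subtype": subtype, "outcome": outcome})
df.loc[rng.random(n) < 0.2, "subtype"] = np.nan  # ~20% missing

# Strategy A: exclude-then-impute (drop records with a missing subtype,
# then apply the eligibility rule to complete records only).
strategy_a = df.dropna(subset=["subtype"])
strategy_a = strategy_a[strategy_a["subtype"] == 1.0]

# Strategy B: impute-then-exclude (impute the missing subtype first,
# then apply the eligibility rule to the completed data). Per the
# quote above, this ordering, paired with SMC-FCS, tended to perform
# best across the paper's 72 simulation scenarios.
completed = pd.DataFrame(
    IterativeImputer(random_state=0).fit_transform(df), columns=df.columns
)
strategy_b = completed[completed["subtype"].round() == 1.0]

print(f"cohort sizes: A={len(strategy_a)}, B={len(strategy_b)}")
```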
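Likewise, for a rough intuition of what a “style cloak” does, the sketch below nudges an image within a small per-pixel budget so that a feature extractor maps it toward a different target style. This is a heavily simplified stand-in, not the Glaze implementation: Glaze perturbs images against a real text-to-image model’s style features under a perceptual constraint, whereas the random linear extractor, images, and parameter values here are all placeholders.

```python
# Toy "style cloak": a small, projected-gradient perturbation that
# shifts an image's features toward an unrelated target style.
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
extractor = rng.normal(size=(16, H * W))  # stand-in feature extractor

def features(img: np.ndarray) -> np.ndarray:
    # Map a flattened image into a 16-dimensional "style" space.
    return extractor @ img.ravel()

artwork = rng.random((H, W))            # the artist's original image
target_style = rng.normal(size=16)      # embedding of an unrelated style

cloaked = artwork.copy()
budget = 0.05   # max per-pixel change (loosely analogous to the paper's p)
step = 0.01
for _ in range(200):
    # Gradient of ||features(x) - target_style||^2 w.r.t. x
    # (exact here, because this toy extractor is linear).
    grad = (extractor.T @ (features(cloaked) - target_style)).reshape(H, W)
    cloaked -= step * np.sign(grad)     # signed gradient step (PGD-style)
    # Project back onto the perturbation budget and valid pixel range.
    cloaked = np.clip(cloaked, artwork - budget, artwork + budget)
    cloaked = np.clip(cloaked, 0.0, 1.0)

print("max pixel change:", np.abs(cloaked - artwork).max())
print("feature distance to target, before vs. after:",
      np.linalg.norm(features(artwork) - target_style),
      np.linalg.norm(features(cloaked) - target_style))
```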

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Photograph of a pink stethoscope lying in a partial coil on top of a surface, with the shadows from window blinds lying across it.
Image credit: Christopher Boswell/Unsplash
  • “Among outpatients with mild to moderate COVID-19, treatment with ivermectin, with a maximum targeted dose of 600 μg/kg daily for 6 days, compared with placebo did not improve time to sustained recovery. These findings do not support the use of the antiparasitic medication ivermectin in patients with mild to moderate COVID-19.” In a research article published this week in JAMA, Naggie and colleagues report results from a randomized, placebo-controlled trial that evaluated whether higher-dose ivermectin shortened symptom duration or reduced the risk of hospitalization in adults with COVID. An accompanying editorial by London and Seymour questions whether further investigations of the drug in that therapeutic context are justifiable, given that the study by Naggie and colleagues is the latest in a string of investigations showing no benefit for ivermectin as a COVID therapy.
  • “Suppose an American doctor wanted a gold star when seeing patients and followed all of the guidelines for preventive, chronic and acute disease care issued by well-known medical groups. That could require nearly 27 hours per day, a team of doctors wrote in a study last year for the Journal of General Internal Medicine.” The New York Times’ Gina Kolata reports on the growing impossibility of physicians measuring up to the standards dictated by clinical practice guidelines. (At nearly 27 hours per day, a full week of guideline-concordant care would total roughly 189 hours, or almost eight 24-hour days.)
  • “Independent from the whistleblower suit, internal investigators at the Wichita facility have also examined the treatment patterns of its vascular patients in recent years and found numerous cases where medical devices were used excessively. While it’s not uncommon to deploy several devices, a medical expert on the investigation team found that the VA doctors sometimes used more than 15 at a time — one used 33 — deviating from the standard of care.” A joint investigation by ProPublica and the Wichita Eagle reports on questionable practices and subsequent legal action at a Kansas VA hospital, where vascular surgeons were using unprecedentedly aggressive approaches to treating peripheral arterial disease.
  • “Many questions have since been raised about toxic exposures sustained by humans and wildlife — not just in East Palestine, with its 4,700 residents, but along the Ohio River and farther north. The New Republic reported that residents endured burning and itchy eyes, sore throat, rash, and migraines in the aftermath of the train derailment. Around 3,500 fish have reportedly died in local waterways, and West Virginia Gov. Jim Justice announced that chemicals had been found in the Ohio River in the northern panhandle of the state.” An article by STAT News’ Jill Neimark provides a deep dive into the health implications – and remaining uncertainties – surrounding the recent train derailment in East Palestine, Ohio, that resulted in the release of toxic chemicals.

COMMUNICATION, HEALTH EQUITY & POLICY

Photograph of a kangaroo, staring directly at the camera, against a green forested background.
Image credit: Bryn Young/Unsplash
  • “What would a world of writers on autopilot look like? Imagine our kangaroo-loving toddler growing up with a text generator always at hand. Will Microsoft or Google feed her drafts for every word that she writes or texts? Record her prompts and edits to improve their product? Track her data in order to sell her stuff? Flag idiosyncratic thinking? Distract her writing process with ads?” In an essay at Public Books, Lauren M.E. Goodlad and Samuel Baker throw considerable volumes of cold water on recent hype surrounding the advent of large language model chatbots – and on the prospects for their incorporation into academic work.
  • “The number of places people are sharing their data has boomed, thanks to a surge of online pharmacies, therapy apps and telehealth services that Americans use to seek out and obtain medical help from home. Many mental health apps have questionable privacy practices, according to Jen Caltrider, a researcher with the tech company Mozilla whose team analyzed more than two dozen last year and found that ‘the vast majority’ were ‘exceptionally creepy.’” The Washington Post’s Drew Harwell reports on the implications of a recently published study from Duke’s Sanford School of Public Policy, which found that online data brokers are selling (and in some cases reselling) extremely sensitive health and wellness information gathered from apps and other electronic sources.
  • “In 2022, the White House Office of Science and Technology Policy (OSTP) released a memo on Ensuring Free, Immediate, and Equitable Access to Federally Funded Research that establishes new guidance for improving public access to scholarly publications and data resulting from Federally supported research. The NIH Public Access Plan outlines the proposed approach NIH will take to implement the new guidance, consistent with its longstanding commitment to public access.” The National Institutes of Health has published a request for comments regarding its draft plan to expand the public’s access to results from research studies funded by the NIH.