AI Health Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

September 22, 2023

In this week’s Duke AI Health Friday Roundup: harnessing physical processes to power AI; xenograft study in mice sheds light on neuronal destruction in Alzheimer’s; speaking plainly in science; multimodal AI comes to the clinic; aligning AI fairness with medical practice; small-town healthcare imperiled by lack of doctors; GPT enhances consultant productivity and levels skills – with caveats; ableism in computer programming; much more:

AI, STATISTICS & DATA SCIENCE

A bright lightning bolt captured against a dark, purplish sky. Image credit: Frankie Lopez/Unsplash
  • “PFGM can create images of the same quality as those produced by diffusion-based approaches and do so 10 to 20 times faster. ‘It utilizes a physical construct, the electric field, in a way we’ve never seen before,’ said Hananel Hazan, a computer scientist at Tufts University. ‘That opens the door to the possibility of other physical phenomena being harnessed to improve our neural networks.’” Quanta’s Steve Nadis reports on an approach to artificial intelligence that harnesses physical processes to create generative AIs that have some advantages over existing models.
  • “For the last several months, I have been part of a team of social scientists working with Boston Consulting Group, turning their offices into the largest pre-registered experiment on the future of professional work in our AI-haunted age. Our first working paper is out today. There is a ton of important and useful nuance in the paper but let me tell you the headline first: for 18 different tasks selected to be realistic samples of the kinds of work done at an elite consulting company, consultants using ChatGPT-4 outperformed those who did not, by a lot. On every dimension. Every way we measured performance.” At his “One Useful Thing” blog, Wharton School professor Ethan Mollick describes the results of an experiment, now available as a preprint, that compared work performed with and without the assistance of GPT-4.
  • “Transformer models have the newfound capability of performing multimodal AI in medicine, analyzing in real time a person’s many layers of big data and our knowledge base. Much of the high-dimensional data that underly the uniqueness of each human being can now be captured. These layers include anatomy through imaging, biomarkers of physiology through sensors, the genome, the microbiome, the metabolome, the immunome, cellular-level transcriptome, proteome, and epigenome. Electronic health record data, which incorporate lab results, family history, unstructured text, and longitudinal follow-up of an individual, are also a rich source of data.” A perspective article in Science by Eric Topol describes the many applications of “multimodal” AI in medicine.
  • “A fair model is normally expected to perform equally across subgroups defined by sensitive variables (e.g., age, gender/sex, race/ethnicity, socio-economic status, etc.). Various fairness measurements have been developed to detect differences between subgroups as evidence of bias, and bias mitigation methods are designed to reduce the differences detected. This perspective of fairness, however, is misaligned with some key considerations in clinical contexts. The set of sensitive variables used in healthcare applications must be carefully examined for relevance and justified by clear clinical motivations.” A perspective article by Liu and colleagues, published in npj Digital Medicine, addresses the complexities of fairness in the context of clinical AI applications (a brief sketch of the kind of subgroup comparison at issue appears after this list).
  • “As we see wider adoption of AI models within companies, it’s important to raise awareness of relevant security risks at every step of the AI development process, and make sure the security team works closely with the data science and research teams to ensure proper guardrails are defined.” Whoops! At Wiz Blog, Hillai Ben-Sasson and Ronny Greenberg detail how an overly permissive SAS token shared via Microsoft AI’s GitHub repository inadvertently exposed terabytes’ worth of sensitive data (a sketch of a more narrowly scoped token also follows this list).
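To make the subgroup comparison described in the Liu and colleagues item concrete, here is a minimal Python sketch of one common fairness measurement: computing a classifier’s true-positive rate within each subgroup and reporting the largest gap. The toy labels, predictions, and group names are illustrative assumptions, not data or code from the paper.

```python
# Hypothetical sketch: per-subgroup true-positive rate and the largest gap.
# All data below are made up for illustration.
import numpy as np

def tpr_by_group(y_true, y_pred, groups):
    """Return {group: true-positive rate} for a binary classifier."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        rates[g] = y_pred[positives].mean() if positives.any() else float("nan")
    return rates

# Toy example: two subgroups, "A" and "B".
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = tpr_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"TPR gap: {gap:.2f}")
```

A nonzero gap is the kind of signal that standard bias-mitigation methods try to shrink; the paper’s argument is that whether such a gap actually constitutes unfairness depends on whether the grouping variable is clinically relevant in the first place.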
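The Wiz item above turns on how broadly an Azure SAS token is scoped. As a contrast to the long-lived, overly permissive token described in the post, here is a minimal sketch, using the Azure Python SDK (azure-storage-blob), of issuing a token restricted to a single blob, read-only, and expiring within an hour. The account, container, blob, and key names are placeholders, not details from the incident.

```python
# Hedged sketch: a narrowly scoped, short-lived SAS token for one blob.
# All identifiers below are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="exampleaccount",             # placeholder storage account
    container_name="models",                   # placeholder container
    blob_name="weights/checkpoint.bin",        # placeholder blob
    account_key="<storage-account-key>",       # placeholder secret; keep out of source control
    permission=BlobSasPermissions(read=True),  # read-only: no write, list, or delete
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # expires in one hour
)
print(sas_token)
```

Scoping the permission, resource, and expiry this tightly is the sort of guardrail the authors recommend coordinating between security and data science teams.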

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Three brown field mice clinging to stalks of wheat in a field. Image credit: Nick Fewings/Unsplash
  • “Transplanted human neurons, in contrast to transplanted mouse neurons, become diseased when exposed to amyloid pathology. Understanding the molecular basis of the resilience of mouse neurons to amyloid pathology will not only help to model the disease better but might also stimulate research into pathways that protect against neurodegeneration.” A research article published in Science by Espuny-Camacho and colleagues suggests a mechanism for neuronal damage in Alzheimer disease – and identifies a genetic switch capable of turning it off in a human/mouse xenograft model.
  • “Because I didn’t know the exact mechanism that caused my anxiety to be uncontrollable (and no one else did either), it seemed as if I must be cheating to use a drug that greatly helped my situation. It felt like a crutch or a shortcut. Especially because, even as a doctor, I can’t explain why the medication works for me or anyone else….I’ve recently faced a similar scenario with new drugs for obesity.” At the New York Times, Indiana University’s Aaron Carroll explores the parallels between antidepressant therapies and the new classes of weight-loss/diabetes drugs in a deeply personal essay (H/T @A_McKethan).
  • “Finding mammalian life above the vegetation line is ‘pretty unique and incredible, because what do they eat?’ asks Jonathan Velotta, a biologist at the University of Denver who studies how animals adapt to extreme environments. He thinks Storz’s new work could change how scientists think about where life exists.” An article in Science News by Meghan Rosen reports on the discovery of a mouse species at an improbable altitude – and the broader implications for our understanding of some aspects of biology and ecology.
  • “The Vesters are in their late 60s and would like to retire soon. Terry Vester wants to spend more time with her grandson and aging parents. But she can’t imagine abandoning her patients, some of whom she has cared for since they were born….Terry Vester’s worry — leaving her town with no doctors — is already reality across much of rural America, where many residents have health problems but few health care professionals to turn to.” A joint Kaiser Health News/NPR report by Arielle Zionts describes a small Alabama town’s scramble to ensure basic health care for its residents as its two remaining doctors are about to retire.

COMMUNICATION, HEALTH EQUITY & POLICY

Phone texting icon with speech bubble caption and three dots, made of green construction paper and crumpled yellow foolscap. Image credit: Volodymyr Hryshchenko/Unsplash
  • “Language is pivotal to the successful communication of research, both within and beyond academia. Language is a diversity issue, with unnecessarily complex language excluding people (both authors and readers). It’s an impact issue, making a huge difference to the cognitive accessibility of research recommendations in areas like health and climate. It’s a cultural issue, leading to discord and suspicion between groups that really need each other. Complicated language is a barrier, not only between the center and the edge of the circle of knowledge, but between all the rings.” An essay at Scholarly Kitchen by Charlie Rapple highlights the stakes of clear and straightforward language when communicating about science.
  • “People with permanent disabilities know these challenges well — at every turn, programming deters people with disabilities from participating fully, and therefore deters them from participating in science. Some of the most popular platforms for learning to code require a mouse, and so exclude people with motor disabilities. Most code-editing programs, including those used in science, assume users have sight, excluding anyone who is blind or visually impaired.” In an article for Nature’s “career column,” Amy Ko reflects on how an injury forced her to confront ableism in her chosen field of computer programming.
  • “Despite negative feedback from the research community, the US National Institutes of Health (NIH) will move forward with a policy requiring that foreign scientists receiving ‘subawards’ from the agency must share their laboratory notebooks and other raw data with their research partner in the United States. Researchers blasted an earlier version of the policy, released in May, saying that it will have a chilling effect on international collaborations.” In a news article for Nature, Max Kozlov reports on the NIH’s decision to stick with plans to increase monitoring of foreign scientists and researchers working on “subawards” from the agency.
  • “In a policy statement unanimously endorsed by the agency’s commissioners, the F.T.C. said it ‘intends to scrutinize’ whether companies are illegally engaging in an unfair method of competition when they exploit a regulatory loophole that can delay rivals from entering the market…an F.T.C. official who was not authorized to discuss the agency’s findings said that the agency’s staff had identified dozens of patents on inhalers that appear to be being used in violation of federal law.” The New York Times’ Rebecca Robbins reports on sharpened FTC oversight of patenting strategies employed by some pharmaceutical companies that manufacture asthma inhalers.