AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

December 9, 2022

In today’s Duke AI Health Friday Roundup: transferring skills between robots; NHLBI report scrutinizes social determinants of health in atrial fibrillation; kicking the tires on ChatGPT; making pulse oximeters work for everyone; considering race & ethnicity in medical school admissions; national database will track nonfatal opioid overdoses; testing the generalizability of a kidney injury model; researchers buckling under administrative burdens; much more:

AI, STATISTICS & DATA SCIENCE

Photograph of a pair of industrial robot arms with two-“fingered” gripping claws. The arm in the foreground is holding a plastic cup full of water. Image credit: David Levêque/Unsplash
  • “Let’s say you have a robot arm with a humanlike hand. You’ve trained its five fingers to pick up a hammer and whack a peg into a board. Now you want a two-fingered gripper to do the same job. The scientists created a kind of bridge of simulated robots between the two that slowly shifts in shape from the original form to the new one.” An article by Matthew Hutson in Scientific American describes how skills can be transferred between learning robots with different physical attributes.
  • “…the model performed worse in predicting acute kidney injury in females in both populations, with miscalibration in lower stages of acute kidney injury and worse discrimination (a lower area under the curve) in higher stages of acute kidney injury. We demonstrate that, while this discrepancy in performance can be largely corrected in non-veterans by updating the original model using data from a sex-balanced academic hospital cohort, the worse model performance persists in veterans.” A paper published in Nature Machine Intelligence by Cao and colleagues probes the generalizability of a prediction model for acute kidney injury when it is applied outside of the population used to train it.
  • “The story gets more complex, however, when goals are introduced, such as when a tennis player wants to run to an exact spot on the court or a thirsty mouse eyes a refreshing prize in the distance. Biologists have understood for a long time that goals take shape in the brain’s cerebral cortex. How does the brain translate a goal (stop running there so you get a reward) into a precisely timed signal that tells the MLR to hit the brakes?” Quanta’s Kevin Hartnett reports on recent research that illuminates how the brain innately deploys calculus to allow rapid goal-oriented motion.
  • In a YouTube interview, Duke Clinical Research Institute Chief Science and Digital Officer Eric Perakslis, PhD, talks about the potential impact of recent changes at Twitter, particularly for members of the clinical community who use the app.
  • “Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions.” At their AI Snake Oil Substack, Arvind Narayanan and Sayash Kapoor weigh in on the recent attention received by OpenAI’s ChatGPT chatbot, and what the large-language-model-based application can and can’t do.

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Near-infrared processed image from the James Webb Space Telescope showing enormous pillars of glowing dust and gas that form a feature called “the Pillars of Creation” within the Eagle Nebula, some 6500 light years distant from Earth. Image Credit: NASA, ESA, CSA, and STScI
  • If you need a little time-out from the week and its news, peruse these recently released deep-space images from the James Webb Space Telescope, collected by Nature’s Alexandra Witze for the journal’s December edition of outstanding science images.
  • “There’s a growing consensus among physicians and government regulators that pulse oximeters measure oxygen levels less accurately in patients with darker skin and need to be fixed. There’s another problem, however, that needs to be fixed first…The light used in the devices to detect oxygenated blood can be blocked by melanin in the skin.” An article in STAT News (login required) by Usha Lee McFarling explores how a lack of high-quality, reliable approaches to measuring skin tone is hampering efforts to improve the accuracy of pulse oximetry in people of color.
  • “Many individuals with AF [atrial fibrillation] have multiple adverse social determinants, which may cluster in the individual and in systemically disadvantaged places (eg, rural locations, urban neighborhoods). Cumulative disadvantages may accumulate over the life course and contribute to inequities in the diagnosis, management, and outcomes in AF.” A report published this week in JAMA Cardiology by Benjamin and colleagues describes efforts by an expert panel assembled by the National Heart, Lung, and Blood Institute to address the role of social determinants of health in atrial fibrillation and current gaps in knowledge about these factors.
  • “Gathering more precise data about nonfatal overdoses, Gupta said, represents a first step toward reaching more people experiencing addiction and at risk of overdose. Patients who survive an overdose, he said, are between two and three times more likely than the general public to eventually die from one…Currently, however, estimates of nonfatal overdoses are scattershot at best.” STAT News’ Lev Facher reports on the rollout of a national database designed to systematically track non-fatal opioid overdoses. Interestingly, the database will be maintained by the National Highway Traffic Safety Administration, as it will rely on reports submitted by local first responders.

COMMUNICATION, HEALTH EQUITY & POLICY

Bulging folders full of paper with post-it notes sticking out, piled on a table in an office. Image credit: Wesley Tingey/Unsplash
  • “To put it bluntly, many researchers simply don’t have the time to do much actual research during normal working hours, such is the level of the administrative bureaucracy that they’re subjected to. I personally hear researchers talk about what ought to be their core job as some sort of treat or hobby, something that they get to do in their spare time.” In a post at Scholarly Kitchen, Phill Jones describes the growing academic discontent with the administrative burdens of doing science, and looks at some possible paths out of the paperwork morass.
  • “Diversity in medical school is the pipeline to a diverse physician workforce, which in turn is essential to serving an increasingly diverse populace. In a country saddled with vast, persistent racial disparities in health access and outcomes, physicians who belong to minoritized racial and ethnic groups are far more likely to work in medically underserved areas and are more likely to enter primary care fields.” A viewpoint article published this week in JAMA by Hamilton, Rose, and DeLisser makes a case for continuing to uphold consideration of racial and ethnic diversity in medical school admissions (H/T @UREssien).
  • “After finding success investing in the more obviously lucrative corners of American medicine — like surgery centers and dermatology practices — private equity firms have moved aggressively into the industry’s more hidden niches: They are pouring billions into the business of clinical drug trials.” A report by Kaiser Health News’ Rachana Pradhan describes how private equity firms are positioning themselves as players within the world of clinical research.
  • “…there is far more research documenting the causes of conspiracy beliefs than research that seeks to reduce conspiracy beliefs and their negative effects…This is partly because some of the most researched factors lead to an intellectual cul-de-sac: if the problem lies in factors that are relatively hard to influence — such as people’s pathologies, thinking styles or personalities — then this limits the extent to which the problem can be overcome.” A review article published in Nature Reviews Psychology by Hornsey and colleagues examines the evidence documenting the individual attributes, as well as the cultural, political, and socioeconomic factors, that influence uptake of conspiracy theories.