AI Health

Friday Roundup

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.

November 3, 2023

In this week’s Duke AI Health Friday Roundup: dissecting the AI executive order; deep learning predicts macular degeneration; the history of medical debt; unease over the surveillance campus; social vulnerability, diabetes, and heart health; open access and consolidation in scholarly publishing; AI may require new legal frameworks; diverse datasets needed for training AI; “watermarking” may not work for distinguishing AI-generated content; much more:

AI, STATISTICS & DATA SCIENCE

Image credit: Alan Warburton/Better Images of AI (https://betterimagesofai.org/images?idImage=0)
  • “The Biden-Harris administration has issued an executive order on artificial intelligence. It is about 20,000 words long and tries to address the entire range of AI benefits and risks. It is likely to shape every aspect of the future of AI, including openness: Will it remain possible to publicly release model weights while complying with the EO’s requirements? How will the EO affect the concentration of power and resources in AI? What about the culture of open research?” Following news of the Biden Administration’s Executive Order on artificial intelligence, Arvind Narayanan, Sayash Kapoor, and Rishi Bommasani weigh in on the implications at AI Snake Oil.
  • “We found that the need for dataset diversity was well described in literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).” A review article published in Nature Medicine by Arora and colleagues examines the potential benefits of standards for health datasets used to train AI and machine learning applications for healthcare.
  • “Generally defining AI fairness as ‘equality’ is not necessarily reasonable in clinical settings, as differences may have clinical justifications and do not indicate biases. Instead, ‘equity’ would be an appropriate objective of clinical AI fairness. Moreover, clinical feedback is essential to developing fair and well-performing AI models, and efforts should be made to actively involve clinicians in the process. The adaptation of AI fairness towards healthcare is not self-evident due to misalignments between technical developments and clinical considerations. Multidisciplinary collaboration between AI researchers, clinicians, and ethicists is necessary to bridge the gap and translate AI fairness into real-life benefits.” A perspective article published in NPJ Digital Medicine by Liu and colleagues addresses the complexities of defining and working toward “fairness” in health AI.
  • “We find that watermark-based detection of AI-generated content is vulnerable to strategic, adversarial post-processing. An attacker can add a small, human-imperceptible perturbation to an AI-generated, watermarked image to evade detection. Our results indicate that watermark-based AI-generated content detection is not as robust as previously thought. We also find that simply extending standard adversarial examples to watermarking is insufficient since they do not take the unique characteristics of watermarking into consideration.” In a paper presented at an ACM conference and available as a preprint from arXiv, Jiang and colleagues present findings suggesting that “watermarking” techniques for identifying AI-generated content can be defeated (a minimal sketch of this style of attack appears after this list).
  • “The RTI Rarity team uses Artificial Intelligence (AI) and Machine Learning (ML) methods to generate a suite of neighborhood-level risk scores based on local social determinants of health. The scores draw from the RTI Rarity data library of small-area measures. The project uses random forest models and other advanced AI/ML methods to understand health outcomes at the Census tract and ZIP code levels across the U.S.” Research group RTI International has announced that its Rarity health equity data dashboard is now fully available for use by the general public (a toy version of this modeling pattern also appears below).
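To make the attack Jiang and colleagues describe concrete, here is a minimal sketch in Python/PyTorch of adversarial post-processing against a watermark detector. The detector below is a toy stand-in rather than any real watermarking scheme, and the perturbation budget, step size, and step count are illustrative assumptions; the sketch only shows the shape of the attack: a small, bounded perturbation found by gradient descent that suppresses the detector's "watermark present" score.

# A hypothetical sketch of the style of attack described by Jiang et al.:
# projected-gradient post-processing of a watermarked image.
# The detector is a toy placeholder, not any real watermarking scheme.
import torch

def toy_watermark_detector(image: torch.Tensor) -> torch.Tensor:
    """Stand-in detector: returns a 'watermark present' score in [0, 1]."""
    # A real detector would decode an embedded signal; here we project the
    # image onto a fixed random direction just to give the attack a target.
    torch.manual_seed(0)
    probe = torch.randn_like(image)
    return torch.sigmoid((image * probe).sum() / image.numel() ** 0.5)

def evade(image: torch.Tensor, eps: float = 8 / 255, steps: int = 50):
    """PGD-style post-processing: minimize the detector score within an
    L-infinity ball of radius eps around the original image."""
    adv = image.clone().requires_grad_(True)
    for _ in range(steps):
        score = toy_watermark_detector(adv)
        score.backward()
        with torch.no_grad():
            adv -= (eps / 10) * adv.grad.sign()   # step against the detector
            adv.clamp_(image - eps, image + eps)  # stay human-imperceptible
            adv.clamp_(0.0, 1.0)                  # stay a valid image
        adv.grad = None
    return adv.detach()

watermarked = torch.rand(3, 64, 64)  # pretend AI-generated, watermarked image
evaded = evade(watermarked)
print(toy_watermark_detector(watermarked).item(),
      toy_watermark_detector(evaded).item())

The key constraint is the L-infinity clamp, which keeps the change imperceptible while the gradient steps drive the detector's score down.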
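Similarly, here is a minimal sketch, on synthetic data, of the modeling pattern the RTI Rarity description implies: a random forest mapping tract-level social determinants of health (SDOH) measures to a neighborhood-level risk score. The feature names, the synthetic outcome, and all numbers below are hypothetical placeholders, not RTI's actual data library or models.

# A toy version of the Rarity modeling pattern: random forest over
# hypothetical small-area SDOH measures, one row per Census tract.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tracts = 1_000

# Hypothetical tract-level SDOH features (placeholders, not RTI's library).
tracts = pd.DataFrame({
    "pct_uninsured": rng.uniform(0, 0.4, n_tracts),
    "median_income": rng.normal(60_000, 15_000, n_tracts),
    "pct_no_vehicle": rng.uniform(0, 0.3, n_tracts),
    "housing_burden": rng.uniform(0, 0.5, n_tracts),
})
# Synthetic outcome standing in for an observed tract-level health measure.
outcome = (
    2.0 * tracts["pct_uninsured"]
    - 0.00001 * tracts["median_income"]
    + 1.5 * tracts["pct_no_vehicle"]
    + rng.normal(0, 0.1, n_tracts)
)

X_train, X_test, y_train, y_test = train_test_split(
    tracts, outcome, random_state=0
)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The fitted model's predictions serve as neighborhood-level risk scores.
risk_scores = model.predict(X_test)
print(f"R^2 on held-out tracts: {model.score(X_test, y_test):.2f}")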

BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH

Image credit: v2osk/Unsplash
  • “We aimed to create an algorithm that did not require human annotation or expert feature selection; generalized to multiple spectral-domain optical coherence tomography (SD-OCT) devices, including current standard-of-care models; was validated on data obtained during routine patient care; made predictions on a clinically meaningful timeframe; and was automated end-to-end allowing for the screening of large patient databases without the need for human intervention.” In an article published this month in JAMA Ophthalmology, Dow and colleagues present findings from a study that evaluated the use of the DeepGAze deep learning algorithm to predict the likelihood that patients with age-related macular degeneration would progress to a more serious form of the disease known as geographic atrophy (a schematic sketch of this kind of end-to-end model follows this list).
  • “This national data analysis of diabetes‐related cardiovascular mortality highlights several important disparities by social vulnerability that contributed to >39 000 extra lives lost and appears to be prominently demonstrated in certain groups such as racial and ethnic minorities, female sex, urban residents, and younger age groups.” A study by Bashar and colleagues, published this month in the Journal of the American Heart Association, examines the impact of social vulnerability on diabetes-related cardiovascular outcomes (H/T @SVRaoMD).
  • “A historical lens reveals that since the 1980s, medical debts have shifted from obligations negotiated by doctors, patients, and hospitals to assets bought and sold by people with no role in patient care. In part because of the proliferation of insurance plans with higher copayments and deductibles, hospitals have faced more delinquent payments. Hospital administrators have turned away from charity care and have opted instead for aggressive debt collection.” In an article published this week in the New England Journal of Medicine, Luke Messac traces the ongoing evolution of medical debt collection and its societal impact.
  • “A panel of experts said on Tuesday that a groundbreaking treatment for sickle cell disease was safe enough for clinical use, setting the stage for likely federal approval by Dec. 8 of a powerful potential cure for an illness that afflicts more than 100,000 Americans….The Food and Drug Administration had previously found that the treatment…was effective. The panel’s conclusion on Tuesday about exa-cel’s safety sends it to the F.D.A. for a decision on greenlighting it for broad patient use.” The New York Times’ Gina Kolata reports on the outcome of this week’s FDA Advisory Committee meeting that weighed in on the use of an experimental CRISPR-based therapy, exa-cel, for patients with sickle cell disease.
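As a schematic illustration of the end-to-end design goals the DeepGAze paper describes, and emphatically not the authors' actual model, here is a toy network that maps a stack of OCT B-scans directly to a progression probability with no hand-engineered features. The input shape, layer sizes, and B-scan count are assumptions made for the sketch.

# A schematic, hypothetical end-to-end classifier over OCT B-scans,
# illustrating the pattern (raw scan in, progression probability out).
import torch
import torch.nn as nn

class ProgressionNet(nn.Module):
    """Toy end-to-end classifier over a stack of OCT B-scans."""
    def __init__(self, n_bscans: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bscans, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, scan: torch.Tensor) -> torch.Tensor:
        # scan: (batch, n_bscans, height, width) -> progression probability
        z = self.features(scan).flatten(1)
        return torch.sigmoid(self.head(z))

model = ProgressionNet()
batch = torch.rand(4, 32, 128, 128)   # four synthetic OCT volumes
print(model(batch).squeeze(1))        # per-patient progression probabilities

Because the whole pipeline is a single differentiable model, such a system can in principle screen large imaging databases without human annotation, which is the property the authors emphasize.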

COMMUNICATION, HEALTH EQUITY & POLICY

Image credit: Anton Grabolle/Better Images of AI/CC-BY 4.0
  • “…this is the first review zooming in on this particular issue by conducting a structured analysis of underlying ethical theories that could guide SDVs’ ethical decision-making. To this end, this article focuses on autonomous driving ethics literature and synthesizes its past publications to answer the following research questions: 1. What are the advantages and disadvantages of applying particular ethical theories to the decision-making of self-driving vehicles? 2. How can ethical theories be integrated into the ethical decision-making of self-driving vehicles?” In an article published in Technology in Society, Poszler and colleagues present a systematic review and synthesis of the application of ethical theory to the decision-making processes employed by self-driving vehicles.
  • “At university campuses, people who are subjected to surveillance ‘don’t have a lot of control over the technology and how it’s used’, says Jason Kelley, the activism director at the Electronic Frontier Foundation, a non-profit organization advocating for digital rights, based in San Francisco, California. ‘One of the issues we’ve seen … is that it often ends up being used for disciplinary purposes,’ he adds.” A Nature feature article by Anne Gulland and Fayth Tan explores pushback from students and faculty as cameras and sensors make their way into classroom settings.
  • “Our research is not motivated by any normative claim about rights for AI or ‘robots,’ whether based on the ontological properties of advanced AI or on the direct application of a social-relational model. It is about finding a way to apply what has essentially been human law to autonomous AI capable of performing many cognitive tasks that until recently only humans could—specifically when, having taken corporate form, an AI interacts with humans, businesses, and the legal system.” A policy forum article published this month in Science by Gervais and Nay suggests that the advent of autonomous or semiautonomous AI systems working in what had hitherto been exclusively human domains is ushering in a new legal era.
  • “The dominant business models for OA are volume based and reward scale, driving the biggest companies to get bigger, and smaller organizations to seek the shelter of a larger partner. Publication volume is the essential measurement of success in an author-pays OA market. Transformative agreements (aka, the ‘Bigger Big Deal’) have become the preferred purchasing model for journals, again favoring scale, because the resource-intensiveness required to negotiate and administer such deals leads to the benefits accruing to large publishers with large numbers of journals for researchers and scholars to publish in.” At Scholarly Kitchen, David Crotty digs into the data to test whether impressions of increasing consolidation in scholarly publishing are borne out.