FAIR HEALTH™ (Fostering AI/ML Research for Health Equity and Learning Transformation)

January 8, 2024 | 9:00 AM – 12:00 PM
In person at Sarah P. Duke Gardens in Kirby Horton Hall

The FAIR HEALTH™ workshop will delve into the critical issue of algorithmic bias in clinical decision-making. This is an opportunity to gain essential insights and practical strategies to identify, mitigate, and evaluate bias in clinical algorithms. We will also explore the legal and ethical implications of algorithmic bias in healthcare, an aspect of growing importance in today’s dynamic healthcare landscape. To enrich the workshop and encourage active participation, we have combined lectures with an interactive case study discussion, making it a rewarding experience for all attendees.

The workshop is directed by Michael Cary, PhD, RN, Associate Professor in the Duke School of Nursing and the Equity Scholar for Duke AI Health. In these roles, he works with a dedicated team to eliminate bias in clinical algorithms and mitigate potential harms to patients.

Presented by:

  • Michael Cary, PhD, RN; Elizabeth C. Clipp Term Chair of Nursing at the Duke University School of Nursing; Inaugural AI Health Equity Scholar; Course Creator
  • Sophia Bessias, MPH, MSA; Evaluation Lead, Algorithm-Based Clinical Decision Support (ABCDS) Oversight, Duke AI Health
  • Ben Goldstein, PhD, MPH; Associate Professor of Biostatistics & Bioinformatics
  • Christina Silcox, PhD; Research Director for Digital Health at the Duke-Margolis Center for Health Policy

Register at https://duke.qualtrics.com/jfe/form/SV_6sCG5CMTCryDuGG

Understanding algorithmic bias, its implications, and strategies for mitigation

This in-person workshop will provide insight into the sources of algorithmic bias, ethical and legal considerations, and bias mitigation strategies. It will be led by clinical and methodological faculty from Duke University with hands-on experience researching algorithmic bias in academic and clinical research settings.

Who should attend

The FAIR HEALTH™ Workshop is intended for clinical algorithm developers (biostatisticians, engineers, computer scientists, social scientists) and users (clinicians). After attending this workshop, participants will have a deeper understanding of bias in clinical AI algorithms, its impact on health and healthcare outcomes, bias mitigation strategies, and ethical considerations for the responsible use of AI in healthcare.

Broad areas of emphasis for the three-hour workshop

I. Understanding the Impact of Algorithmic Bias on Health and Healthcare Outcomes

  • Definition of algorithmic bias and its impact on healthcare
  • Examples of algorithmic bias in clinical algorithms
  • Discussion on the ethical implications of algorithmic bias

II. Identifying and Mitigating Algorithmic Bias in Clinical Algorithms

  • Exploring the data sources and variables used in clinical algorithms
  • Understanding the potential sources of bias in data collection
  • Analyzing the impact of biased data on algorithmic outcomes
  • Introduction to the concept of fairness in machine learning
  • Techniques for detecting and measuring bias in algorithms (see the illustrative sketch after this list)
  • Strategies for mitigating algorithmic bias in clinical algorithms
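
To make "detecting and measuring bias" concrete, the following is a minimal sketch, not part of the workshop materials, of one common group-fairness check: comparing false negative rates across patient groups. The data, group labels, and function names (false_negative_rate, fnr_by_group) are hypothetical illustrations using plain NumPy; a real assessment would use validated cohorts and a broader set of fairness metrics.

    import numpy as np

    def false_negative_rate(y_true, y_pred):
        # FNR = FN / (FN + TP); NaN if the group has no positive labels.
        positives = y_true == 1
        if positives.sum() == 0:
            return np.nan
        return float(np.mean(y_pred[positives] == 0))

    def fnr_by_group(y_true, y_pred, group):
        # Per-group false negative rates and the largest gap between groups.
        rates = {str(g): false_negative_rate(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)}
        observed = [r for r in rates.values() if not np.isnan(r)]
        return rates, max(observed) - min(observed)

    # Hypothetical toy data: true outcomes, binarized predictions, group labels.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
    group  = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])

    rates, gap = fnr_by_group(y_true, y_pred, group)
    print(rates, gap)  # per-group miss rates and the disparity between them

A large gap in miss rates suggests the algorithm fails to flag high-risk patients in one group more often than in another, one of the disparities the workshop's mitigation strategies are meant to address.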

III. Evaluating the Performance of Clinical Algorithms

  • Methods for evaluating the accuracy and fairness of algorithms (see the sketch following this list)
  • Addressing trade-offs between accuracy and fairness
  • Incorporating feedback loops and continuous monitoring
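
As one possible illustration of evaluating accuracy alongside fairness, the sketch below reports discrimination (area under the ROC curve) overall and within each level of a protected attribute, a common first step before weighing accuracy/fairness trade-offs. It assumes scikit-learn is available and uses hypothetical data and names (subgroup_auc); it is not drawn from the workshop curriculum.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def subgroup_auc(y_true, y_score, group):
        # AUC overall and within each level of a protected attribute.
        report = {"overall": roc_auc_score(y_true, y_score)}
        for g in np.unique(group):
            mask = group == g
            # AUC is undefined when a subgroup contains only one outcome class.
            if len(np.unique(y_true[mask])) == 2:
                report[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
        return report

    # Hypothetical toy data: outcomes, predicted risk scores, group labels.
    y_true  = np.array([1, 0, 1, 0, 1, 0, 1, 0])
    y_score = np.array([0.9, 0.2, 0.6, 0.4, 0.7, 0.5, 0.3, 0.1])
    group   = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    print(subgroup_auc(y_true, y_score, group))

In practice, the same pattern extends to calibration and to the error-rate metrics above, and continuous monitoring repeats these checks as the patient population and the model drift over time.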

IV. Ethical Considerations and Responsible AI

  • Ethical considerations in algorithmic decision-making
  • Principles of responsible AI in healthcare
  • Integrating ethical frameworks into algorithm development and deployment

V. Case Studies and Group Exercises

  • Analyzing real-world case studies of algorithmic bias in healthcare
  • Engaging participants in group exercises to identify and mitigate bias
  • Facilitating discussions and sharing insights from the exercises

Acknowledgements

We wish to acknowledge support from the Duke Faculty Advancement Seed Grant Program and the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR002553. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.