Patients are increasingly turning to generative AI to answer health questions, through tools such as chatbots and AI-powered search results. Recent research led by Monica Agrawal, PhD, AI Health Faculty Affiliate and Assistant Professor of Biostatistics & Bioinformatics, characterizes the potential failure modes of this trend, analyzes how LLM-generated responses can mislead patients even in the absence of hallucinations, and offers recommendations for building safer systems. The paper, “Retrieval-augmented systems can be dangerous medical communicators,” was presented in July at the International Conference on Machine Learning (ICML).
