People increasingly seek healthcare information from large language models (LLMs) via interactive chatbots, yet the nature and inherent risks of these conversations remain largely unexplored. Recent research led by Monica Agrawal, PhD, a Duke AI Health faculty affiliate, introduces HealthChat-11K, a curated dataset of 11,000 real-world chatbot conversations in which users seek healthcare information. The dataset enables analysis of user interactions, including risky exchanges with the potential to induce sycophancy in LLMs. The paper, titled “‘What’s Up, Doc?’: Analyzing How Users Seek Health Information In Large-Scale Conversational AI Datasets,” was presented in November as a Findings paper at the Conference on Empirical Methods in Natural Language Processing (EMNLP).