
epocrates
The expanding role of AI chatbots in clinical decision-making: Cause for concern?
October 14, 2024

A survey of 107 physicians conducted by Fierce Healthcare and the physician social network Sermo found that 76% of respondents reported using general-purpose large language models (LLMs) in clinical decision-making. More than 60% of respondents said they use LLMs like ChatGPT to check drug interactions, while over half reported using them for diagnosis support. This aligns with a Wolters Kluwer survey, which found that 40% of U.S. physicians felt ready to use generative AI at the point of care this year, with 68% viewing it as beneficial to health care. However, 89% of doctors stressed the need for transparency about the sources of AI-generated content before they could confidently adopt these tools. (Gliadkovskaya, 2024; Wolters Kluwer, 2024)
While AI chatbots are trained on vast amounts of publicly available data, some of that information can be incorrect or outdated. ChatGPT’s training data isn’t updated in real time, potentially leading to recommendations based on older evidence. AI bots are also prone to confabulations; the Fierce Healthcare piece notes that as recently as August, ChatGPT could not correctly count the number of R’s in the word “strawberry.” And an NIH-funded study of AI’s diagnostic capabilities revealed that while AI models can achieve high accuracy on medical quizzes, they often falter in explaining their reasoning, highlighting the need for human oversight in clinical decision-making. (Gliadkovskaya, 2024; NIH, 2024)
As the medical community continues to explore the potential of AI chatbots, professional societies like the American Medical Association recommend against physician use of LLM tools for assistance with clinical decisions until robust guardrails are established to protect both patients and physicians. “Whoever makes the clinical decision is the one who’s responsible,” echoed William Hersh, M.D., professor in the Department of Medical Informatics & Clinical Epidemiology at Oregon Health & Science University. “Even if they use ChatGPT or PubMed or Google or whatever, they’re liable for those decisions.” (Gliadkovskaya, 2024)
Sources:
Gliadkovskaya, A. (2024, October 7). Some doctors are using public AI chatbots like ChatGPT in clinical decisions. Is it safe? Fierce Healthcare. https://www.fiercehealthcare.com/special-reports/healthcare-conferences-put-your-calendar-2024-2025
Wolters Kluwer. (2024, April 16). Wolters Kluwer survey: Over two-thirds of U.S. physicians have changed their mind, now viewing GenAI as beneficial in healthcare. [Press release]. https://www.wolterskluwer.com/en/news/gen-ai-clincian-survey-press-release
National Institutes of Health. (2024, July 23). NIH findings shed light on risks and benefits of integrating AI into medical decision-making. [News release]. https://www.nih.gov/news-events/news-releases/nih-findings-shed-light-risks-benefits-integrating-ai-into-medical-decision-making