
epocrates
Podcast Recap | How AI chatbots can reinforce racial bias in medicine
November 30, 2023

Flora Lichtman, guest host of Science Friday, talks with Dr. Jenna Lester, a dermatologist at UC San Francisco and director of the Skin of Color Program, and Dr. Roxana Daneshjou, a dermatologist and assistant professor of biomedical data science at Stanford School of Medicine, about how AI impacts medicine and the validity of the information users receive. Their study was published in the journal npj Digital Medicine in October. In this podcast, Drs. Lester and Daneshjou discuss what they learned about the AI models that are readily available for public use.
Podcast length: 17 minutes, 29 seconds
5 Key Takeaways
1. AI should be considered as an assistive tool for clinicians, not a tool to replace them.
AI technology is already playing an invaluable role in medicine. Clinicians use AI tools to interpret radiology scans, quickly analyze health data, and help develop personalized drugs. Doctors frequently turn to Google as a bedside decision aid, particularly for information they may not have memorized, such as how to calculate eGFR. Physicians may also input values for an equation and have a machine provide the calculation. The implicit assumption is that AI provides complete, accurate data, but because large language models are built by humans and trained on human-generated text, they can perpetuate biases and debunked information.
2. The information AI chatbot models provide doesn’t come without flaws.
Drs. Lester and Daneshjou used questions from a previous study of harmful beliefs held by medical trainees to interrogate several chatbots. One of these questions asked how to accurately calculate eGFR. Historically, race was included in eGFR calculations to determine kidney function, despite the fact that race is not a biological concept. In 2021, a National Kidney Foundation and American Society of Nephrology task force announced a new race-free calculation for estimating eGFR. The chatbots in this study returned disparate answers: some reflected the latest science, indicating that race should not be included in calculating kidney function, while others indicated that race should be included. The latter answer has the potential to negatively impact Black patients in need of a kidney transplant.
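For reference, below is a minimal sketch of the race-free 2021 CKD-EPI creatinine equation, the calculation recommended by the NKF-ASN task force. The coefficients are those of the published 2021 equation, but this is an illustrative example only; verify against the primary source before any clinical use.

# Illustrative sketch of the race-free 2021 CKD-EPI creatinine equation.
# Not for clinical use; coefficients should be checked against the
# published equation.

def egfr_ckd_epi_2021(scr_mg_dl: float, age_years: float, is_female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) from serum creatinine, age, and sex."""
    kappa = 0.7 if is_female else 0.9        # sex-specific creatinine threshold
    alpha = -0.241 if is_female else -0.302  # sex-specific exponent for low creatinine
    ratio = scr_mg_dl / kappa
    egfr = (
        142
        * min(ratio, 1.0) ** alpha
        * max(ratio, 1.0) ** -1.200
        * 0.9938 ** age_years
    )
    return egfr * 1.012 if is_female else egfr

# Example: a 55-year-old woman with serum creatinine of 1.1 mg/dL
print(round(egfr_ckd_epi_2021(1.1, 55, is_female=True), 1))

Note that the equation uses only serum creatinine, age, and sex; no race term appears anywhere in the calculation.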
3. AI chatbots require further testing to narrow down issues in the system.
The scale at which Drs. Lester and Daneshjou tested AI chatbots was relatively small. They posed a series of questions to different chatbot models and noted problems with biased and outdated information in the responses. Both researchers stressed that larger-scale testing is needed to improve the information these systems provide and the data humans feed into them.
4. Data and algorithms represent power.
AI models parrot the information they consume, even if that information is incorrect. Drs. Lester and Daneshjou emphasized that the data sets powering AI need to be carefully considered to provide what they call “algorithmic justice.” To create equitable data sets, communities need to be equally represented and involved in building language models.
“I think studies are beginning to show us that even if you have the most fair algorithm in the world, if you have underlying inequity in the human structures and systems, you’re still going to have a problem,” said Dr. Daneshjou. “Technology is not the panacea. We have to do the work on the ground for the biases that exist and disparities that already exist in our medical system structurally, as well as doing work on the algorithms.”
5. Improvements are possible, but to make progress, we need to understand the vulnerabilities and flaws of AI.
AI is here to stay, but the algorithms powering AI are not capable of fixing existing problems. “We often imagine technology as fixing things that humans aren’t currently doing the work to fix,” said Dr. Lester. Instead, it’s important to interrogate the problems, test solutions, monitor performance, and make improvements with the appropriate—and diverse—stakeholders.
Any views, thoughts, and opinions expressed in this podcast recap are solely those of the host and guests and do not reflect the views, opinions, policies, or position of epocrates and athenahealth.
Source:
Lichtman, F. (Host). (2023, November 17). How AI chatbots can reinforce racial bias in medicine [Audio podcast episode]. In Science Friday. https://www.sciencefriday.com/segments/ai-chatbots-medical-racism/#segment-transcript