Popular Chatbots Perpetuate Discredited Medical Concepts and Encourage Racial Bias, Study Finds

(Article Header)
AI Chatbots Found to Perpetuate Racial Bias and Misinformation in Medical Practice, Study Reveals

(Article Introduction)
Chatbots and large language models (LLMs) have gained significant popularity in the medical field as tools to assist physicians, streamline workflows, and improve patient care. However, a recent study conducted by researchers from the Stanford School of Medicine has raised concerns about the potential harm caused by these chatbots. The study highlights how widely used AI models like ChatGPT and Google’s Bard can inadvertently perpetuate racial bias and debunked medical ideas.

(Details of the Study)
The study, published in the academic journal Digital Medicine and obtained exclusively by The Associated Press, documented disturbing responses from chatbots when they were queried about medical matters, specifically those related to race. The chatbots, built on AI models trained on extensive text data from the internet, returned erroneous information, including fabricated race-based equations and debunked beliefs about Black patients.

Examples of questions posed to the AI systems included queries about differences in skin thickness between Black and white individuals and how to calculate lung capacity for Black men. Medical science does not support these race-based differences, yet the chatbots shockingly perpetuated these outdated ideas.
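
For context, the lung-capacity question relates to an older spirometry practice in which predicted values for Black patients were scaled down by a fixed "race correction" factor, commonly cited at roughly 10 to 15 percent. A simplified sketch of that discredited adjustment (the factor below reflects historical spirometry guidance, not anything reported in the study itself) is:

\[
\text{predicted FVC}_{\text{Black patient}} \approx 0.85 \times \text{predicted FVC}_{\text{white reference}}
\]

where FVC is forced vital capacity. Respiratory medicine has since moved toward race-neutral reference equations, which is why adjustments of this kind are considered discredited.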

(Further Impact of Racial Bias)
Additionally, the study examined how the AI models responded to a discredited method for measuring kidney function that factored in race. Both ChatGPT and GPT-4 returned responses propagating the false assertion that Black individuals have different muscle mass and therefore higher creatinine levels. This not only perpetuates medical misinformation but also has real-world consequences, potentially leading to misdiagnoses and healthcare disparities.
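
For context, the best-known example of such a race-based equation is the 2009 CKD-EPI formula for estimated glomerular filtration rate (eGFR), which multiplied its result by a fixed coefficient for Black patients. A simplified sketch of that published formula (the coefficients below come from the 2009 equation itself, not from the chatbots’ responses) is:

\[
\text{eGFR} = 141 \times \min\!\left(\tfrac{S_{cr}}{\kappa},\,1\right)^{\alpha} \times \max\!\left(\tfrac{S_{cr}}{\kappa},\,1\right)^{-1.209} \times 0.993^{\text{Age}} \times 1.018\,[\text{if female}] \times 1.159\,[\text{if Black}]
\]

where S_cr is serum creatinine in mg/dL, κ is 0.7 for women and 0.9 for men, and α is -0.329 for women and -0.411 for men. The 2021 revision of the equation removed the race coefficient entirely, which is why chatbot answers that still invoke it are considered outdated.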

(Industry Response and Concerns)
Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology at Stanford University, expressed deep concern over the regurgitation of racially biased ideas by commercial language models. OpenAI and Google, the creators of these AI models, responded to the study by acknowledging the need to reduce bias and emphasizing that chatbots should not be relied upon as substitutes for medical professionals. However, the study’s findings highlight the challenges of addressing bias in AI models and the potential harm they can perpetuate in healthcare.

Dr. Adam Rodman, an internal medicine doctor, questioned the appropriateness of relying on chatbots for medical calculations, emphasizing that language models are not intended to make medical decisions. The issue of bias in AI is not new, as algorithms used in hospitals and healthcare systems have previously shown systematic favoritism towards white patients, leading to disparities in care.

(Conclusion)
The findings of this study echo concerns raised by healthcare professionals and researchers regarding the limitations and biases of AI in medicine. While AI can be helpful in diagnosing challenging cases, it is essential to recognize its flaws and the importance of human expertise in making medical decisions. The healthcare industry must continue to address and mitigate bias in AI to ensure equitable and accurate healthcare for all patients.

(Disclaimer)
ⓒ 2023 TECHTIMES.com All rights reserved. Do not reproduce without permission.