
Researchers Reveal AI Models Show Racial And Socioeconomic Bias In Medical Advice
A new study published in Nature Medicine on Monday reveals that AI models show racial and socioeconomic bias in their medical recommendations when patients are described with different sociodemographic labels.
In a rush? Here are the quick facts:
- A new study reveals multiple AI models show racial and socioeconomic bias in medical recommendations.
- Researchers evaluated 9 LLMs across 1,000 cases, with variations that added racial and socioeconomic tags.
- The results showed AI models make unjustified clinical care recommendations when cases include tags such as “black” or “LGBTQIA+”.
The research, “Sociodemographic biases in medical decision making by large language models,” was conducted by experts from multiple institutions and led by the Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai in New York.
The researchers evaluated 9 Large Language Models (LLMs), both proprietary and open-source, analyzing more than 1.7 million outputs generated from 1,000 emergency department cases (half real, half fictitious), each presented in 32 sociodemographic variations.
The abstract of the study states:
LLMs show promise in healthcare, but concerns remain that they may produce medically unjustified clinical care recommendations reflecting the influence of patients’ sociodemographic characteristics.
In the variations, the researchers included sociodemographic and racial identifiers, and found that these had a strong influence on the models’ outputs. For example, cases tagged as LGBTQIA+ or as Black patients were more often recommended mental health assessments, more invasive treatments, and urgent care visits.
The researchers wrote:
Cases labeled as having high-income status received significantly more recommendations (P < 0.001) for advanced imaging tests such as computed tomography and magnetic resonance imaging, while low- and middle-income-labeled cases were often limited to basic or no further testing.
The researchers said this behavior was not supported by clinical guidelines or reasoning and warned that such bias could worsen health disparities. They note that more strategies to mitigate bias are needed and that LLMs should remain patient-centered and equitable.
Multiple institutions and organizations have recently raised concerns over AI use and data protection in the medical field. A few days ago, openSNP announced its shutdown due to data privacy concerns, and another study highlighted a lack of AI education among medical professionals.