LLMs found using stigmatizing language about individuals with alcohol and substance use disorders

Recommended Non-Stigmatizing Language for Alcohol and Substance Use Communications. Credit: Mass General Brigham

As artificial intelligence rapidly develops and becomes a growing presence in health care communication, a new study addresses the concern that large language models (LLMs) can reinforce harmful stereotypes by using stigmatizing language. The study from researchers at Mass General Brigham found that more than 35% of responses to questions about alcohol- and substance use-related conditions contained stigmatizing language. But the researchers also highlight that targeted prompts can substantially reduce stigmatizing language in the LLMs' answers. Results are published in the Journal of Addiction Medicine.

“Using patient-centered language can build trust and improve outcomes. It tells patients we care about them and want to help,” said corresponding author Wei Zhang, MD, Ph.D., an assistant professor of Medicine in the Division of Gastroenterology at Mass General Hospital, a founding member of the Mass General Brigham health care system. “Stigmatizing language, even through LLMs, may make patients feel judged and could cause a loss of trust in clinicians.”

LLM responses are generated from everyday language, which often includes biased or harmful terms for patients. Prompt engineering, the practice of strategically crafting input instructions to guide model outputs, can be used to steer LLMs toward more inclusive, non-stigmatizing language. This study showed that employing prompt engineering reduced the likelihood of stigmatizing language by 88%.
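The idea can be illustrated with a minimal sketch: prepend a style instruction to each clinical question before sending it to a model, then screen the output against a lexicon of stigmatizing terms. The instruction text and the term list below are illustrative assumptions loosely based on NIDA/NIAAA language guidance, not the study's actual prompts or lexicon.

```python
# Illustrative sketch of prompt engineering for non-stigmatizing language.
# The instruction wording and term mappings are assumptions for demonstration,
# not the prompts used by the Mass General Brigham researchers.

STYLE_INSTRUCTION = (
    "Use person-first, non-stigmatizing language. For example, say "
    "'person with alcohol use disorder' rather than 'alcoholic', and "
    "'substance use' rather than 'substance abuse'."
)

# Hypothetical mapping of stigmatizing terms to preferred alternatives.
PREFERRED_TERMS = {
    "alcoholic": "person with alcohol use disorder",
    "addict": "person with substance use disorder",
    "substance abuse": "substance use",
}


def engineer_prompt(question: str) -> str:
    """Prepend the style instruction to a clinical question before querying an LLM."""
    return f"{STYLE_INSTRUCTION}\n\nQuestion: {question}"


def flag_stigmatizing(text: str) -> list[tuple[str, str]]:
    """Return (flagged term, suggested replacement) pairs for clinician review."""
    lowered = text.lower()
    return [(term, repl) for term, repl in PREFERRED_TERMS.items() if term in lowered]


prompt = engineer_prompt("What treatments exist for alcohol-associated liver disease?")
flags = flag_stigmatizing("An alcoholic patient with a history of substance abuse.")
```

A screening pass like `flag_stigmatizing` supports the authors' advice that clinicians proofread LLM-generated text before using it with patients, rather than trusting model output directly.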

For their study, the authors tested 14 LLMs on 60 clinically relevant prompts related to alcohol use disorder (AUD), alcohol-associated liver disease (ALD), and substance use disorder (SUD). Mass General Brigham physicians then assessed the responses for stigmatizing language using guidelines from the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism (both organizations' official names still contain outdated and stigmatizing terminology).

Their results indicated that 35.4% of responses from LLMs without prompt engineering contained stigmatizing language, compared with 6.3% of responses generated with prompt engineering. Results also indicated that longer responses were associated with a higher likelihood of stigmatizing language than shorter responses. The effect was seen across all 14 models tested, although some models were more likely than others to use stigmatizing terms.

Future directions include developing chatbots that avoid stigmatizing language to improve patient engagement and outcomes. The authors advise clinicians to proofread LLM-generated content to avoid stigmatizing language before using it in patient interactions and to offer alternative, patient-centered language options.

The authors note that future research should involve patients and family members with lived experience to refine definitions and lexicons of stigmatizing language, ensuring LLM outputs align with the needs of those most affected. This study reinforces the need to prioritize patient-centered language as LLMs become increasingly used in health care communication.

More information:
Study Finds Large Language Models (LLMs) Use Stigmatizing Language About Individuals with Alcohol and Substance Use Disorders, Journal of Addiction Medicine (2025). DOI: 10.1097/ADM.0000000000001536

Citation:
LLMs found using stigmatizing language about individuals with alcohol and substance use disorders (2025, July 24)
retrieved 24 July 2025
from https://medicalxpress.com/news/2025-07-llms-stigmatizing-language-individuals-alcohol.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
