From Stigma to Acceptance: Ethical Implications of Anthropomorphic Design in Healthcare Chatbots | Prof. WANG Lili
2026-03-03

With the development of digital healthcare, AI-driven medical chatbots are widely used in scenarios such as symptom self-checking, appointment booking, and smoking cessation/weight loss guidance. From timely consultation for common symptoms to emotional support for mental health and daily management of chronic diseases, these "digital assistants" are gaining an increasingly large market size due to their advantages such as convenience, low cost, and anonymity.

Image source: ©千库网

Many people feel ashamed when facing health issues, especially in highly stigmatized situations such as mental illness, obesity, and smoking. PANG Yuting, a 2020 doctoral student in Business Administration at the School of Management, Zhejiang University, and her supervisor, Professor WANG Lili, together with Associate Professor CHEN Fangyuan of the Department of Marketing at the Faculty of Business Administration, University of Macau, focused on this question: when users inquire about stigmatized health issues that are susceptible to social prejudice, which design is more likely to ease users' concerns: a "human-like" chatbot with a human avatar, a friendly name, and a natural conversational style, or a "robotic" chatbot with a simple, mechanical appearance and neutral expression?

This research, published in the Journal of Business Ethics (an FT50 journal), shows that in scenarios involving privacy and stigma, overly anthropomorphic designs may actually increase user stress, thus reducing users' willingness to adopt the technology. This finding not only reveals the "contextual adaptation code" of medical chatbot design but also offers an approach with both theoretical depth and practical value for the ethical and inclusive development of digital health technologies.

You can access the article here

WANG Lili  |  王丽丽

School of Management, Zhejiang University


Academic Background: Professor, Deputy Head, and doctoral supervisor in the Department of Marketing at the School of Management, Zhejiang University. Research: consumer behavior, with recent interests including consumer self-control, the impact of product-consumer interaction on consumer behavior, the influence of anthropomorphism on consumer behavior, service recovery, and management responses.


You can learn more about Prof. WANG Lili’s academic background  here 

PANG Yuting  |  庞雨婷

School of Management, Xiamen University


Academic Background: Assistant Professor in the Department of Marketing, School of Management, Xiamen University. Research: consumer behavior, primarily concerning consumer health behavior, vulnerable consumers, and social relationships.



CHEN Fangyuan  |  陈方圆

Faculty of Business Administration, University of Macau

Academic Background: Associate Professor of Marketing and Assistant Dean (Internationalization, Marketing and Communications), Faculty of Business Administration, University of Macau. Research: anthropomorphism and human-AI interaction; emotion regulation and affect; health and aging; digital persuasion and psycholinguistics.


You can learn more about Associate Prof. CHEN Fangyuan’s academic background  here 

Why Does Chatbot Design Matter in Stigmatized Contexts?

Previously, the industry generally treated "human-like design" as a magic bullet for improving user acceptance: giving chatbots human names, designing anthropomorphic avatars, and using natural dialogue styles to make them more "friendly and relatable." This design is indeed effective in scenarios such as retail, but it had never been systematically validated in the medical field, especially in stigmatized settings. Might the "friendliness" brought by human-like design actually trigger patients' anxiety about being judged? This is precisely the core question the research team set out to answer.

Among these scenarios, stigmatized health issues are particularly distinctive. These are health problems that carry negative labels rooted in social prejudice: mental illness is often misread as "weak-willed," obesity is associated with "laziness and loss of control," and smoking is seen as "undisciplined and harmful to others." Patients with such conditions are already highly sensitive to social judgment, fearing discrimination and belittlement, and may even avoid seeking help as a result. Studies have confirmed that stigmatization can lead to delayed medical treatment, concealment of illness, and increased health risks.

Image source: ©千库网

In the current context of limited medical resources, chatbots are seen as an important supplementary force. They can break down time and space barriers, providing services such as symptom inquiries, medication guidance, and appointment booking in areas with underdeveloped healthcare systems, thereby easing the pressure on doctors and shortening patient waiting times. However, the prerequisite for this technological advantage is that users are willing to use it. Especially when it comes to sensitive health issues, users' psychological concerns often become the biggest obstacle to adoption.

Six Sequential Experiments Reveal a Preference for “Robotic” Chatbots Under High Stigma

To find the answer, the team conducted six experiments, ranging from laboratory studies to a real-world Facebook advertising field experiment. These experiments addressed three typical stigmatized health issues, mental illness, obesity, and smoking, and involved more than 28,000 participants, ultimately yielding three core findings.

The study manipulated scenarios involving mental illness, weight management, and smoking cessation, asking participants to rate the perceived stigma associated with each condition and their willingness to adopt chatbots. The results showed that when users perceived high social stigma, they were more likely to choose "robotic" chatbots rather than human-like ones. Further analysis revealed that it was not the chatbot design in itself, but rather the perceived level of stigma, that determined this preference.

Image source: ©千库网

Why do users reject anthropomorphic designs in highly stigmatized scenarios? The research found that users worry that anthropomorphic chatbots will judge them the way humans do. The human characteristics of anthropomorphic chatbots (such as avatars and names) trigger interpersonal-interaction associations, making users subconsciously feel that a human-like robot might discriminate against them. Robotic designs, by contrast, lack human characteristics and are perceived as incapable of judgment, which provides a sense of psychological security.

Image source: ©千库网

If the "threat of social judgment" stems from "human characteristics," then which characteristics are key? The answer is - facial features! The face is the core carrier for humans to convey emotions, intentions, and judgments - a frown represents dissatisfaction, and a pout implies contempt. For those suffering from stigma, the facial features of anthropomorphic chatbots amplify the feeling of being "observed and judged"; however, by removing the face, the "human attributes" are weakened, the "threat of social judgment" decreases accordingly, and the anthropomorphic "friendliness" becomes apparent.

Image source: ©千库网

In addition, the team replicated their findings across multiple studies, demonstrating that they hold across different populations and levels of disease severity, and validated them in a real-world setting through a Facebook advertising experiment.

Previous research has been divided on the effectiveness of anthropomorphic design: some studies find it enhances trust, while others find it provokes aversion. This study is the first to identify "degree of stigma" as the key moderator. Anthropomorphism is not a panacea; its effectiveness depends heavily on contextual sensitivity. In low-sensitivity scenarios (such as retail consultation), anthropomorphism can bridge the gap; in highly sensitive, stigmatized medical scenarios, it can backfire. This provides a new "contextual adaptation" framework for research on technology anthropomorphism.

Applying Care Ethics: From Traditional Caregiving to Digital Design

The core of care ethics is "relational, responsive, and contextual," emphasizing that services should be tailored to the vulnerabilities and unique needs of users. Previously, this theory was primarily applied to traditional healthcare scenarios. The team is the first to systematically integrate it into medical AI design research, proposing "care-oriented digital design" - health technologies must not only be "useful" but also "safe," especially for vulnerable groups facing stigma, requiring design to reduce harm and build trust. This expands the application boundaries of care ethics and provides theoretical support for the ethical design of digital health.

Previous research suggested that the judgment anxiety experienced by stigmatized patients originates from other humans. This study is the first to show that anthropomorphic digital agents can also become a new source of judgment: chatbots with human-like characteristics can trigger anticipatory discrimination in patients even without the presence of real humans. Furthermore, the team found that stigmatization triggers persistent anxiety at the identity level, rather than fleeting embarrassment, offering a deeper perspective on the psychological impact of stigma.

Image source: ©千库网

In addition to its theoretical contributions, this research charts a clear practical path for deploying medical chatbots. It offers design guidance for medical institutions and health technology companies, reminding them to use anthropomorphic design with caution in sensitive health scenarios so as to avoid adding psychological pressure on users, thereby improving the tools' adoption rate.

ABSTRACT:

Demand for healthcare chatbots has surged in recent years, with companies and governments actively investing in this technology. Drawing on care ethics and technology anthropomorphism literature, this research examines how businesses can design chatbots to provide more supportive healthcare services. Specifically, we explore how anthropomorphic features affect user perceptions and their willingness to adopt chatbots, particularly in the context of stigmatized health conditions. Across six experiments, including a Facebook advertising study, we find that, when consumers perceive the target health condition as highly stigmatized, they are more likely to adopt a robot-like chatbot than a human-like one. This preference arises from the belief that human-like chatbots are more likely to make social judgments, which is undesirable for consumers facing stigma. Supporting this social judgment threat mechanism, we find that this effect diminishes when the human-like chatbot lacks facial features. Our findings advance technology anthropomorphism research by showing how design choices affect chatbot adoption in sensitive contexts. By integrating care ethics, we emphasize the need for empathetic, non-judgmental design in digital health technologies. This research also contributes to stigma research, showing how thoughtful chatbot design can reduce stigma-driven disengagement. Practically, we provide actionable guidance for designers and policymakers, urging them to tailor chatbot features to the needs of stigmatized users to ensure inclusivity. Our research highlights the ethical responsibility of marketers to design technologies that reduce, rather than reinforce, stigma, thus contributing to a broader discourse on business ethics in technology.




- We thank Prof. WANG Lili and the team for their insightful and rigorous research on the contextual design of medical chatbots.

- You can read the original article in Chinese here
