Why Avatar Design Must Be Patient-Centered to Ensure Inclusive Research

3 November 2025

Authored by: Nicky Barclay-Prout

Following our recent article on how AI avatars are advancing healthcare market research, this piece expands the conversation – exploring the psychological and emotional impact of avatar design in patient-facing research contexts.

Think of this… 

A digital moderator appears on screen. They are young, svelte, and symmetrical. Camera-ready, with perfect teeth, perfect lighting, and a perfectly modulated voice.
They are here to interview a patient about living with obesity. 

Something feels off. 

For all the sophistication of this technology, they seem too perfect.
And perfection, as it turns out, can get in the way of trust. 

When Perfect Avatars Erode Patient Trust  

The use of avatars as moderators in market research is accelerating, promising scalability, consistency, and perhaps even a form of round-the-clock empathy. But in practice, the reaction has been mixed.

Early experiences suggest that many avatars fall into one of two traps: they are either too perfect, polished to the point of distraction, or they slip into crude stereotypes. Instead of feeling relatable, they feel manufactured. This reflects the realism–trust paradox: as avatars become more lifelike, perceived authenticity can actually decline, evoking the discomfort of the uncanny valley.1,2,3

So what? When avatars appear idealized rather than representative, users tend to report lower empathy and reduced willingness to self-disclose.4,5 This dynamic echoes findings from media psychology, which show that when people repeatedly encounter “idealized” digital humans, they internalize those images as norms of competence and health.6,7

In an avatar moderator, that bias becomes relational rather than passive: it shapes who feels comfortable speaking and who feels subtly judged. 

Biased Training Data Flattens Avatar Diversity 

The problem is not the technology itself, but the version of humanity we try to encode within it, and the assumptions about what makes a person appear trustworthy or approachable.  

Today’s avatar platforms tend to be designed around marketing ideals, and many avatar-generation systems are trained on biased image datasets that privilege Western, youthful, and conventionally attractive features. The result is an “algorithmic beauty standard” that flattens diversity and amplifies stereotypes. Older faces are underrepresented, larger bodies are rare, and visible disabilities are almost nonexistent.

It’s the same trap advertisers fall into when casting model-like patients for chronic or life-limiting conditions. Unrealistic imagery does not just look off; it breaks the emotional contract of authenticity. 

When used in patient contexts, these avatars risk reinforcing the very exclusions that our research is meant to challenge. We have seen first-hand, when researching therapy areas such as obesity, oncology, and chronic wounds, that an overly beautiful aesthetic can alienate rather than engage. Patients open up most when they feel seen.

The underlying principle is well established: similarity breeds trust. In virtual health environments, avatars that reflect the patient’s demographic characteristics improve comfort, communication, and perceived empathy.8 Subtle alignment in age, ethnicity, and body type has been linked to higher rapport.9,10 When that resemblance is missing, when a moderator looks decades younger or impossibly healthy, participants can experience cognitive dissonance that undermines rapport.  

Designing for Disclosure  

At RP, the conversation has moved from critique to curiosity: could avatars, when designed purposefully, actually enhance disclosure in certain contexts?  

Technology will continue to evolve, but the real opportunity lies in how we apply it. The goal is not to make avatars indistinguishable from humans; it is to make them feel relatable enough to be trusted. 

If the future of patient insight includes digital moderators, their design brief must start from psychological insight – ideally involving direct patient input. As researchers, we have a role to play in pushing technology providers to offer avatar options that reflect the real diversity of patient populations. Taking the time to consider realism and representation is not only ethical but also a strategy to build trust and generate insights that genuinely reflect the lived experience.

Practical steps include “demographic tuning”: adapting avatar characteristics to match participant profiles in order to increase engagement and self-disclosure in sensitive contexts. Another promising direction is to acknowledge individual preferences by giving participants some control over the avatar moderator they engage with. Just as users can choose a preferred voice in virtual assistants, platforms could offer simple customization options such as age, gender, and accent. In either scenario, testing emotional resonance (how participants respond, engage, or withdraw) should become a core part of piloting to ensure more inclusive, honest, and ultimately impactful patient research.

Curious how you can make your research design more inclusive and patient-centric? Get in contact.


References

  1. Baake, M., et al. (2024). Balancing realism and trust: AI avatars in science communication. Journal of Science Communication.
  2. Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35.
  3. Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments.
  4. Nowak, K. L., & Biocca, F. (2003). The effect of agency and anthropomorphism on social presence. Presence: Teleoperators and Virtual Environments.
  5. Fox, J., & Bailenson, J. (2009). Virtual self-modeling: The effects of avatars on health behavior change. Human Communication Research.
  6. Chou, H. T. G., & Edge, N. (2012). “They are happier and having better lives than I am”: The impact of using Facebook on perceptions of others’ lives. Cyberpsychology, Behavior, and Social Networking.
  7. Fardouly, J., Diedrichs, P. C., Vartanian, L. R., & Halliwell, E. (2015). Social comparisons on social media: The impact of Facebook on young women’s body image concerns and mood. Body Image.
  8. Nowak, K. L., & Rauh, C. (2005). The influence of the avatar on online perceptions of similarity, trust, and attraction. Computers in Human Behavior.
  9. Kang, S., & Gratch, J. (2010). Virtual rapport and empathy: Effects of gender and similarity in virtual agents. Proceedings of Intelligent Virtual Agents.
  10. Lee, S., Kim, J., & Sundar, S. S. (2021). Can similar avatars enhance trust? Computers in Human Behavior.