Authored by: Nicky Barclay-Prout
Following our recent article on how AI avatars are advancing healthcare market research, this piece expands the conversation – exploring the psychological and emotional impact of avatar design in patient-facing research contexts.
Think of this…
A digital moderator appears on screen. They are young, svelte, and symmetrical. Camera-ready, with perfect teeth, perfect lighting, and a perfectly modulated voice.
They are here to interview a patient about living with obesity.
Something feels off.
For all the sophistication of this technology, they seem too perfect.
And perfection, as it turns out, can get in the way of trust.
The use of avatars as moderators in market research is accelerating, promising scalability, consistency, perhaps even a form of round-the-clock empathy. But in practice, the reaction has been mixed.
Early experiences suggest that many avatars fall into one of two traps: they are either too perfect, polished to the point of distraction, or they slip into crude stereotypes. Instead of feeling relatable, they feel manufactured. This reflects the realism–trust paradox: as avatars become more lifelike, perceived authenticity can actually decline, evoking the discomfort of the uncanny valley.1,2,3
So what? When avatars appear idealized rather than representative, users tend to report lower empathy and reduced willingness to self-disclose.4,5 This dynamic echoes findings from media psychology, which show that when people repeatedly encounter “idealized” digital humans, they internalize those images as norms of competence and health.6,7
In an avatar moderator, that bias becomes relational rather than passive: it shapes who feels comfortable speaking and who feels subtly judged.
The problem is not the technology itself, but the version of humanity we try to encode within it, and the assumptions about what makes a person appear trustworthy or approachable.
Today’s avatar platforms tend to be designed around marketing ideals, and many avatar-generation systems are trained on biased image datasets that privilege Western, youthful, and conventionally attractive features. The result is an “algorithmic beauty standard” that flattens diversity and amplifies stereotypes. Older faces are underrepresented, larger bodies are rare, and visible disabilities are almost nonexistent.
It’s the same trap advertisers fall into when casting model-like patients for chronic or life-limiting conditions. Unrealistic imagery does not just look off; it breaks the emotional contract of authenticity.
When used in patient contexts, these avatars risk reinforcing the very exclusions that our research is meant to challenge. We’ve seen it first-hand when researching therapy areas such as obesity, oncology, and chronic wounds: an overly beautiful aesthetic can alienate rather than engage. Patients open up most when they feel seen.
The underlying principle is well established: similarity breeds trust. In virtual health environments, avatars that reflect the patient’s demographic characteristics improve comfort, communication, and perceived empathy.8 Subtle alignment in age, ethnicity, and body type has been linked to higher rapport.9,10 When that resemblance is missing, when a moderator looks decades younger or impossibly healthy, participants can experience a cognitive dissonance that undermines trust.
At RP, the conversation has moved from critique to curiosity: could avatars, when designed purposefully, actually enhance disclosure in certain contexts?
Technology will continue to evolve, but the real opportunity lies in how we apply it. The goal is not to make avatars indistinguishable from humans; it is to make them feel relatable enough to be trusted.
If the future of patient insight includes digital moderators, their design brief must start from psychological insight – ideally involving direct patient input. As researchers, we have a role to play in pushing technology providers to offer avatar options that reflect the real diversity of patient populations. Taking the time to consider realism and representation is not only ethical but also a strategy to build trust and generate insights that genuinely reflect the lived experience.
Practical steps include “demographic tuning”: adapting avatar characteristics to match participant profiles to increase engagement and self-disclosure in sensitive contexts (a simple version is sketched below). Another promising direction is to acknowledge individual preferences by giving participants some control over the avatar moderator they engage with. Just as users can choose a preferred voice in virtual assistants, platforms could offer simple customization options such as age, gender, and accent. In either scenario, testing emotional resonance (how participants respond, engage, or withdraw) should become a core part of piloting to ensure more inclusive, honest, and ultimately impactful patient research.
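To make “demographic tuning” concrete, here is a minimal Python sketch of the idea. The profile fields, attribute choices, and function names are entirely hypothetical illustrations, not any vendor’s actual API: a platform scores its available avatar profiles against a participant’s profile, suggests the closest match, and leaves the final choice with the participant.

```python
# A minimal, hypothetical sketch of "demographic tuning": score available
# avatar profiles against a participant profile and suggest the closest
# match. Field names and attribute values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Profile:
    age_band: str   # e.g. "55-64"
    body_type: str  # e.g. "larger", "average"
    accent: str     # e.g. "UK-regional"


def similarity(participant: Profile, avatar: Profile) -> int:
    """Count matching demographic attributes (a deliberately crude proxy)."""
    return sum([
        participant.age_band == avatar.age_band,
        participant.body_type == avatar.body_type,
        participant.accent == avatar.accent,
    ])


def suggest_avatar(participant: Profile, catalogue: list[Profile]) -> Profile:
    """Return the catalogue avatar most similar to the participant.

    In practice this suggestion should sit behind a participant-choice
    step, so people can override the default before the interview starts.
    """
    return max(catalogue, key=lambda avatar: similarity(participant, avatar))


if __name__ == "__main__":
    catalogue = [
        Profile("25-34", "average", "US-general"),
        Profile("55-64", "larger", "UK-regional"),
    ]
    participant = Profile("55-64", "larger", "UK-regional")
    print(suggest_avatar(participant, catalogue))  # -> the 55-64 profile
```

A real system would need richer attributes and a more careful notion of similarity, but even this simple default-plus-override pattern captures the two levers discussed above: alignment with the participant, and the participant’s own say in the matter.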
Curious how you can make your research design more inclusive and patient-centric? Get in contact.