Synthetic Users and AI in Research: Allies or Noise in Understanding Reality?
Keywords: AI in research, synthetic users, research and AI, participant recruitment, AI-assisted recruitment, human-centred research, insight validation.
In recent months, concepts like “synthetic users”, “AI-generated people” or “journey simulations” have gained momentum as ways to quickly test products, messages or experiences. The promise is tempting: getting “insights” in minutes, without having to recruit or coordinate sessions with real people.
At quantica, we see AI as a powerful tool to prepare, speed up and complement research, but not as a replacement for the human side. The key is understanding what it can do well and where it starts to distort reality.
1. What “synthetic users” are and what they’re for.
When we talk about synthetic users, we mean simulated representations of users, for example:
Personas created from historical data and then “filled in” by AI.
Journey simulations (what a “typical” person would do across different screens or decisions).
Conversations with generative models “acting” as a specific customer profile.
They’re useful as a sparring partner to:
Explore hypotheses before going into the field.
Refine research guides and materials.
Simulate extreme scenarios that we later want to contrast with real people.
Generate quick variations of messages or claims to test afterwards.
In short: they help us think better and faster, but they do not replace reality.
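To make the "conversations with generative models" idea concrete: in practice, a synthetic-user session usually starts from a structured persona that is turned into a prompt asking the model to stay in character. Below is a minimal sketch of that step; the persona fields and the `build_persona_prompt` helper are illustrative assumptions, not part of any specific tool or API.

```python
# Minimal sketch: turning a structured persona into a prompt for a
# generative model "acting" as a specific customer profile.
# The persona fields and this helper are illustrative, not a real API.

def build_persona_prompt(persona: dict) -> str:
    """Compose a system-style prompt asking the model to answer as the persona."""
    traits = "; ".join(f"{k}: {v}" for k, v in persona["traits"].items())
    return (
        f"You are {persona['name']}, a {persona['role']}.\n"
        f"Background: {traits}.\n"
        "Answer interview questions in first person, staying in character.\n"
        "Say 'I don't know' rather than inventing details."
    )

persona = {
    "name": "Marta",
    "role": "freelance graphic designer",
    "traits": {
        "age": 34,
        "usage": "uses the product weekly on mobile",
        "pain point": "finds invoicing confusing",
    },
}

prompt = build_persona_prompt(persona)
print(prompt)
```

Whatever the model answers to a prompt like this is a hypothesis to explore, not field data: it still needs to be contrasted with real people.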
2. Risks: when AI stops helping.
The problems start when we treat what a model says as if it were field data:
Oversimplification: AI tends to produce neat, coherent answers; real life is much messier and more contradictory.
Amplified biases: if the training data is biased (and it is), synthetic users will be biased too.
False validation: “people prefer X” based only on simulations is a dangerous trap.
Speed ≠ certainty: just because something is fast doesn’t mean it’s better for making important decisions.
3. AI in recruitment: support, not autopilot.
Beyond simulations, AI can also add value in ENGAGE, our recruitment pillar:
Designing and reviewing screeners
Proposing first drafts of recruitment questionnaires.
Simplifying language and spotting ambiguities.
Supporting segmentation
Analysing internal data to identify usage patterns.
Suggesting quota criteria based on behaviour, not just demographics.
Optimising operations
Drafting variants of emails and invitation messages.
Suggesting channel and timing combinations based on past performance.
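As an illustration of "quota criteria based on behaviour, not just demographics", here is a minimal sketch of filling recruitment quotas by a behavioural segment (sessions per week). The segment labels, thresholds and quota sizes are assumptions made up for the example, not a recommended scheme.

```python
# Minimal sketch: filling recruitment quotas by behavioural segment
# (sessions per week) rather than demographics alone.
# Segment labels, thresholds and quota sizes are illustrative assumptions.
from collections import defaultdict

def behaviour_segment(sessions_per_week: int) -> str:
    """Map raw usage behaviour to a coarse segment label."""
    if sessions_per_week == 0:
        return "lapsed"
    if sessions_per_week < 3:
        return "occasional"
    return "frequent"

def fill_quotas(candidates, quotas):
    """Select candidate IDs per segment until each quota is full."""
    selected = defaultdict(list)
    for c in candidates:
        seg = behaviour_segment(c["sessions_per_week"])
        if len(selected[seg]) < quotas.get(seg, 0):
            selected[seg].append(c["id"])
    return dict(selected)

candidates = [
    {"id": "u1", "sessions_per_week": 5},
    {"id": "u2", "sessions_per_week": 0},
    {"id": "u3", "sessions_per_week": 2},
    {"id": "u4", "sessions_per_week": 7},
    {"id": "u5", "sessions_per_week": 1},
]
quotas = {"frequent": 1, "occasional": 2, "lapsed": 1}

print(fill_quotas(candidates, quotas))
# → {'frequent': ['u1'], 'lapsed': ['u2'], 'occasional': ['u3', 'u5']}
```

Note that a script like this only executes quotas: deciding which behaviours matter, how to balance diversity, and where the ethical boundaries lie remains a human call.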
But recruitment that is “over-optimised” by AI can easily close the door to profiles that don’t fit the learned pattern and reinforce existing biases. That’s why, for us, AI in recruitment is an assistant, not the decision-maker: decisions about who we look for, how we balance diversity and which ethical boundaries we set remain human.
4. How we use it at quantica.
At quantica, we start from a simple idea: research with people is the foundation, and AI is one more tool to think, explore and prepare better.
We use it to:
Design better studies and recruitment processes.
Refine questions, materials and messages.
Explore angles before going into the field.
But validation, deep understanding and contextual interpretation still come from conversations, observations and data with real people. Because in the end, we don’t design for synthetic users. We design for people. And that requires technology, yes, but also listening, judgement and a human perspective.