How bias sneaks into the whole research process


Keywords: bias in research / research bias / sampling bias / recruitment bias / analysis bias / rigorous research / impartial research

 

When we talk about bias in research, we almost always think about the sample: “we’re missing seniors”, “we didn’t include people from X region”, “this is very digital-heavy”. And yes, the sample matters. A lot. But bias doesn’t start or end there: it sneaks into the brief, into how we ask questions, into how we moderate… and into how we later tell the story.


At quantica, we don’t pretend research is ever completely “neutral”. We start from something more honest: bias exists; what matters is making it visible and designing around it.

 

1. ENGAGE · Bias in recruitment: who gets in… and who never shows up.

 

The first layer of bias appears when we decide who we’re looking for and how we’re going to find them. Some classics:

  • Convenience bias: Always recruiting from the same panels, cities or channels. Result: we keep seeing a very specific slice of reality, over and over again.

  • Affinity bias: Being more likely to accept people who “fit” how we talk, think or imagine the typical user.

  • Access bias: Designing studies that only people with a good internet connection, flexible schedules or a certain level of digital literacy can realistically join.

Reducing these biases isn’t just about “adding more quotas”; it also means:

  • Reviewing recruitment channels.

  • Asking ourselves who never shows up in our studies, and why.

  • Designing recruitment processes that don’t unintentionally filter people out (especially those with less time, fewer resources or lower digital access).

 

2. FRAME · Bias in how we define the study.

 

Even if the sample is flawless, we can still bias the study from the design itself. Some examples:


  • Confirmation bias: Setting objectives to prove what someone already believes: “we want to validate that the new feature is easy to use.”
    If the study is only built to confirm, it’s unlikely to challenge anything.

  • Bias in question wording: Leading questions (“did you find it easy…?”), judging questions (“why didn’t you use…?”) or questions that oversimplify.

  • Focus bias: Looking only at what matters to one team (e.g. product) and ignoring impacts on other areas (operations, people, retail, support).

What helps here:

  • Co-creating objectives with more than one stakeholder.

  • Writing alternative hypotheses (what if what we expect doesn’t happen?).

  • Having someone who isn’t so close to the product review the discussion guides.

 
 

3. Bias in moderation: the influence of the person asking.

 

What people share in a session doesn’t depend only on what they think, but also on how they feel in that conversation. Some common biases:

  • Social desirability: Participants trying to say “the right thing”, meaning what they think we want to hear or what sounds better. Even more so if someone from the brand is present.

  • Halo / hierarchy effect: If the participant perceives “power” on the other side (big brand, senior profile, “expert”), they may soften criticism or exaggerate praise.

  • Moderator reaction: Gestures, silences, smiles, changing the subject… all of that shapes what gets shared and what doesn’t.

That’s why it’s key to:

  • Create an explicit safety frame (“you’re not here to look good, you’re here to help this improve”). 

  • Normalise everyday behaviours (“lots of people do exactly this, tell me how it looks for you”).

  • Train moderators in listening, handling silence and being aware of their own position.

 
 

4. SENSE · Bias in analysis and in how we tell the story.

 

Even with strong recruitment, design and moderation, there’s still the final layer: how we interpret and communicate what we’ve learned. Here we see:

  • Confirmation bias (again): Selecting quotes and findings that support the narrative the team already had in mind.

  • Overweighting anecdotes: A striking, emotional or extreme case ends up standing in for a whole group it doesn’t actually represent.

  • Simplification bias: Turning nuance into overly definite statements: “users want…”, “people aren’t ready for…”.


Good practices we use at quantica:

  • Triangulation: combining qual, quant and business data when available.

  • Cross-review inside the team (someone who wasn’t in fieldwork reviews the conclusions).

  • Always including the limits of the study and the things we don’t know (yet).

 

5. The quantica view: making bias visible to make better decisions.

 

For us, working with bias is not about blame; it’s about professionalising research:

  • Recognising that completely neutral research doesn’t exist.

  • Designing ENGAGE–FRAME–SENSE with awareness of where bias can show up.

  • Explaining to client teams not only what we’ve learned, but also from where we’ve learned it.

Because in the end, bias doesn’t disappear if we ignore it. But when we put it on the table, we can make decisions that are more honest, more transparent, and much more useful for the business and for the people it affects.

 