Recruitment Is Not Logistics. It's Design.
Bad participant selection doesn't look like failure — it looks like confident, well-documented research pointing in the wrong direction. A practical framework for inclusion, exclusion, and diversity criteria in UX research.

I remember a project where we spent three weeks designing a user test. The questions were good. The tasks were realistic. The prototype was solid. We even had a discussion guide that made the team proud.
Then we ran the sessions.
Five participants. All longtime power users of the client's platform. Two had worked in adjacent industries. One had actually helped write the original product brief.
The feedback was meticulous, detailed, and almost completely useless for the people who would actually buy the product. We hadn't tested users. We had tested insiders. And it took three more weeks of research to undo the assumptions that first round had baked in.
The mistake that hides best
The NNGroup has called participant selection one of the most common and most expensive mistakes in UX research. I'd go further: it's the mistake that hides best.
When your questions are bad, you feel it during the session. When your analysis is sloppy, you see it in the presentation. But when your participants are wrong, the results look perfectly fine. They're coherent. They're quotable. They generate confidence.
That confidence is the problem. You walk out of five sessions with a strong sense of direction and a set of product decisions shaped by a sample that contains exactly zero of your actual future users. The research didn't fail. It succeeded at measuring the wrong people.
Three questions before you write a screener
Good recruitment starts before you touch a screener. It starts with three questions, each filtering a different kind of noise.
Who should be in? These are your inclusion criteria. Not demographics — behaviors. Past behavior is a stronger predictor of future decisions than age, gender, or job title. If you're testing a project management tool, "works across multiple teams simultaneously" tells you far more than "35–45, mid-level professional." Behavior anchors the participant to the actual product context. Demographics anchor them to a marketing segment.
Who should stay out? Exclusion criteria are often treated as formalities. They shouldn't be. Industry insiders, extreme power users, people who have seen the product before, people with adjacent professional interests — they all bring assumptions that distort what you're measuring. If you're testing something designed for first-time users, anyone who has already formed mental models of how it "should" work will pollute the signal. Their feedback won't be wrong. It will be irrelevant, dressed up as expertise.
Who keeps the picture honest? Diversity criteria exist not for representation in the abstract, but for realism. Your future user base is not a single persona. It's a range of contexts, comfort levels, and approaches. If your participants all share the same prior experience or the same relationship to technology, you're not measuring your product in the real world. You're measuring it in a convenient approximation of it.
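If it helps to make those three lists concrete, here is a minimal sketch of how they might be written down as a structured recruitment plan for the project management example above. Every field name, criterion, and quota is a hypothetical illustration, not a standard template.

```python
# Hypothetical recruitment plan for a project management tool study.
# The structure mirrors the three questions: who's in, who's out,
# and what mix keeps the sample honest. All values are illustrative.

recruitment_plan = {
    "inclusion": [  # behaviors, not demographics
        "created a new project from scratch in the last month",
        "coordinates work across at least two teams",
    ],
    "exclusion": [  # signal polluters
        "works, or has worked, in the client's industry",
        "has seen the product or prototype before",
        "helped shape the product brief or roadmap",
    ],
    "diversity": {  # quotas across contexts, not a single persona
        "tool_comfort": {"low": 2, "medium": 2, "high": 1},
        "team_size": {"small": 2, "large": 3},
    },
}
```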
Where the screener breaks
Most recruitment failures happen in the screener itself.
The most common mistake: screening for demographics when you need to screen for behavior.
"Do you use digital tools for project management?" tells you almost nothing. Everyone says yes.
"In the last month, how many times did you create a new project from scratch, rather than duplicating an existing one?" — that tells you something. It filters by actual behavior, not self-reported identity.
The second most common mistake: leading questions that let candidates reverse-engineer the right answer. "How important is ease of use to you?" is not a screener question. It's an invitation for self-selection. A well-written screener is specific, behavioral, and slightly opaque about what you're looking for. It should feel more like a genuine survey than a filter.
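As a rough illustration of that difference, here is a small Python sketch of screener logic that qualifies candidates on reported behavior rather than self-description. The question keys, thresholds, and exclusion flags are assumptions made up for this example, not part of any real screener.

```python
# Hypothetical screener logic: qualify on what candidates actually did,
# not on a yes/no identity question that nearly everyone passes.

def qualifies(answers: dict) -> bool:
    # Exclusion checks come first: insiders and prior exposure end the
    # screening no matter how good the behavioral answers look.
    if answers.get("works_in_client_industry") or answers.get("seen_prototype"):
        return False

    # Inclusion: at least two projects genuinely created from scratch
    # in the last month, matching the behavioral question above.
    return answers.get("new_projects_last_month", 0) >= 2


candidate = {
    "new_projects_last_month": 3,
    "works_in_client_industry": False,
    "seen_prototype": False,
}
print(qualifies(candidate))  # True
```

The point of the sketch is the ordering and the thresholds: a binary "do you use digital tools?" field would pass almost everyone, while a frequency question gives you something to filter on.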
When a client says "just ask our existing users"
If you do UX work with clients, you've heard this. Sometimes it comes from budget pressure. Sometimes from genuine belief that loyal users are representative users.
They rarely are.
Existing users have adapted to the product. They've built workarounds. They've forgotten what it felt like to be confused by it. Their feedback is valuable — but not for every question. Treating them as a stand-in for the broader audience leads to products that improve for the 10% while staying opaque to the 90%.
Inclusion, exclusion, and diversity criteria give you the professional language to have this conversation. Not as a methodological objection, but as a business argument: the cost of bad recruitment shows up downstream, in launch results, support tickets, and churn — not in the research session itself.
The quiet failure mode
There's an unspoken belief in a lot of UX research: that getting people in the room is the logistical part, and figuring out the right questions is where the real thinking happens.
I held this belief for longer than I'd like to admit.
But participant selection is the foundational design decision. The questions you ask are shaped by who can meaningfully answer them. The insights you draw depend entirely on whose experience you were measuring in the first place.
When the wrong people sit across from you, the research doesn't break visibly. It breaks quietly. The data looks fine. The presentation goes well. The team feels confident. And somewhere further down the line — in a launch that underperforms, in feedback that contradicts your findings — you discover that the users you built for were never quite real.
Recruitment isn't logistics. It's the first design decision in every research project. Treat it that way, and everything downstream becomes more reliable.