Personas are not useless. They are just wildly insufficient—and often actively misleading when teams treat them as research.
A persona named “Operations Olivia, 34, likes efficiency” can help designers remember that multiple user types exist. It cannot tell you why Olivia chose your competitor, what workaround she already built in spreadsheets, or what she will actually pay for. At worst, personas become corporate fan fiction: plausible, tidy, and disconnected from behavior.
In the last lesson we defined PM work as building the right thing. “Right” is not a matter of taste. It is a bet about human behavior under constraints. This lesson is about how to form those bets without fooling yourself.
Personas are a shorthand, not a source of truth
Treat personas as communication tools, not discoveries. They summarize what you already learned elsewhere. If the underlying learning is weak, the persona is wallpaper.
Good teams refresh or retire personas when reality diverges. Bad teams defend personas because workshops produced them. Your allegiance is to the user in the wild—not to the slide deck.
When someone says “our persona would want this,” ask: Which observation supports that? If the answer is silence or a single anecdote, you do not have user understanding. You have a hypothesis dressed up as identity.
Jobs-to-be-done forces you to talk about progress, not demographics
Jobs-to-be-done (JTBD) is often summarized as “people do not want a quarter-inch drill; they want a quarter-inch hole.” The useful version for PMs is sharper: people “hire” products to make progress in a circumstance. The job is the progress, not the persona label.
A parent buying meal kits may be “hiring” the product to reduce decision fatigue on Wednesday at 6 p.m., not to “eat healthy” in the abstract. A finance manager adopting a new analytics tool may be hiring it to survive a board question next month, not to “modernize the stack.”
JTBD questions that actually change your thinking sound like:
- What were you doing the last time this problem hurt enough to act?
- What did you try before us, including spreadsheets and manual processes?
- If we disappeared tomorrow, what would you do instead?
Notice what those questions have in common: they anchor on situations and substitutes, not attitudes.
Behavioral observation beats self-report for many product questions
People are unreliable narrators of their own habits—not because they lie, but because memory and identity get in the way. They round up frequency, downplay workarounds, and describe the person they wish they were.
So pair interviews with observation whenever you can: session replay (where ethical and permitted), site visits, shadowing support calls, watching onboarding attempts, reading verbatim tickets. Look for friction that users no longer notice because they have acclimated to pain.
Example: Users may say checkout is “fine” while consistently abandoning at the same step. The behavior is the truth; the word “fine” is a compression artifact.
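If you have event data, the “fine” claim is easy to check against behavior. A minimal sketch, assuming a hypothetical event log of (user_id, step) rows and a fixed step order:

```python
# Where do users actually abandon the funnel? A sketch over a
# hypothetical event log; in practice, query your analytics store.
from collections import Counter

FUNNEL = ["cart", "shipping", "payment", "confirm"]  # ordered steps

events = [  # (user_id, step) pairs, invented for illustration
    ("u1", "cart"), ("u1", "shipping"), ("u1", "payment"),
    ("u2", "cart"), ("u2", "shipping"),
    ("u3", "cart"), ("u3", "shipping"), ("u3", "payment"), ("u3", "confirm"),
]

users_at_step = Counter(step for _, step in set(events))  # dedupe repeat visits

for prev, nxt in zip(FUNNEL, FUNNEL[1:]):
    reached, advanced = users_at_step[prev], users_at_step[nxt]
    drop = 1 - advanced / reached if reached else 0.0
    print(f"{prev} -> {nxt}: {advanced}/{reached} advanced ({drop:.0%} drop-off)")
```

Put the step-level numbers next to the interview quotes about that step; the divergence is where the insight lives.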
This does not mean surveys and interviews are worthless. It means claims require corroboration. The strongest product insights often come from triangulation: what people say, what they do, and what breaks when they try.
In B2B, add workflow context: who approves spend, who owns the metric, who inherits the tool after rollout. A delighted end user blocked by procurement is not the same insight as a champion with budget. In B2C, look for substitution: free apps, offline habits, and “good enough” alternatives compete harder than your competitor list suggests.
When you cannot observe directly, approximate observation with artifacts: exports, screenshots of spreadsheets, photos of whiteboards, redacted emails. If users hesitate to share, that hesitation is also data—often about trust or political sensitivity.
User interviews are a skill—most teams do them badly
A bad interview is a sales call in disguise. You pitch, you steer, you ask leading questions, and you leave with validation theater.
A useful interview is closer to journalism. You want specifics: stories, sequences, exceptions, and concrete numbers (“How often?” “Last time it happened?”).
Ask about past behavior, not hypothetical futures. Humans are bad at predicting what they will do. They are better at recounting what they did—still imperfect, but better.
Useful prompts:
- Walk me through the last time that happened.
- What was the hardest step—and why?
- What workarounds did you build?
- Who else had to get involved?
- What would have to be true for you to switch?
Watch for social desirability bias. People want to seem competent and kind. They may downplay how much they ignore alerts, how often they break policy, or how price-sensitive they are. Neutral tone and nonjudgmental follow-ups help. So does asking for examples rather than values.
Listen more than you talk. If you are speaking more than twenty or thirty percent of the time, you are probably not learning—you are performing.
Recruit for specificity, not celebrity. The best participants recently experienced the situation you care about. “Power users” can be useful, but they can also be outliers who tolerate broken flows. Balance your sample intentionally rather than interviewing whoever is easiest.
Take notes that preserve verifiable detail: numbers, tools named, sequence of steps, exact phrases. Summaries like “they want simplicity” are not actionable. “They copy IDs into Slack because search cannot find partial matches” is.
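One lightweight way to enforce that discipline is a note structure whose fields demand specifics. A minimal sketch, with field names that are illustrative rather than any standard:

```python
# A note template that forces verifiable detail over vague summaries
# (field names and the example entry are invented for illustration).
from dataclasses import dataclass, field

@dataclass
class InterviewNote:
    participant: str                  # anonymized ID, e.g. "P07"
    situation: str                    # the last concrete occurrence
    exact_quote: str                  # verbatim, not paraphrased
    tools_named: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)        # in order
    numbers: dict[str, float] = field(default_factory=dict)

note = InterviewNote(
    participant="P07",
    situation="Reconciling invoices at month-end close",
    exact_quote="I copy IDs into Slack because search can't find partial matches",
    tools_named=["Slack", "Excel"],
    steps=["export report", "copy IDs", "paste into Slack", "ask a teammate"],
    numbers={"hours_per_close": 6},
)
```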
Avoid leading with your solution even when you are excited. The moment you anchor on a feature, users will politely riff on your frame. If you must test a concept, show it late, after you have captured their independent story.
What users say and what users do diverge—and that divergence is data
When statements and behavior conflict, do not automatically assume users are lying. Often they believe what they say. The gap is information: incentives in the room, missing context, or a question framed poorly.
Your job is to document both: the quote and the metric. Then decide which one your decision depends on. For adoption and retention, behavior usually wins. For positioning and language, what people say still matters—because words shape perception even when behavior lags.
A classic pattern: users request more features while only using a narrow subset. That is not hypocrisy; it is aspirational self-image colliding with real constraints like time and attention. Product strategy has to serve actual usage, while marketing may speak to the aspiration. Mix those up and you get bloat plus disappointment.
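The gap is measurable. A minimal sketch, assuming hypothetical per-feature event counts, shows how concentrated real usage tends to be:

```python
# How much of total usage do the top features cover?
# (feature names and counts are invented for illustration)
usage = {"export": 4200, "search": 3100, "dashboards": 900,
         "api_keys": 120, "webhooks": 40, "audit_log": 15}

total = sum(usage.values())
running = 0
for rank, (feature, count) in enumerate(
        sorted(usage.items(), key=lambda kv: kv[1], reverse=True), 1):
    running += count
    print(f"top {rank} ({feature}): {running / total:.0%} of all usage")
```

If two features carry almost ninety percent of activity, the roadmap question is not “which requested feature next?” but “what progress are those two features hired for?”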
Empathy without evidence is just assumption with a halo
Empathy is not a feeling you have about users. It is accurate modeling of their constraints: time, skill, risk, organizational politics, switching costs, and competing priorities.
Warmth helps build rapport in interviews; it does not replace rigor. If your “empathy” makes you reluctant to probe, you will leave interviews with polite fluff.
Strong researchers stay curious and kind while still asking: How do you know? Can you show me? When specifically? That is not cold. That is respect for reality.
Sample size: “enough” depends on what you are trying to learn
There is no magic number that works for every question. Rules of thumb:
- Early discovery about a messy problem: a handful of deeply contextual conversations (often five to eight) can surface major themes. You are not estimating a percentage of the population; you are mapping possibilities and language.
- Evaluative research (usability, comprehension): smaller samples can still find big issues; iteration beats one giant study.
- Quantitative estimation (conversion lift, segment sizing): you need statistical thinking and proper sampling—not twelve interviews pretending to be data science. A quick power calculation, sketched after this list, shows why.
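To make the order of magnitude concrete, here is a minimal sketch of a standard two-proportion sample-size estimate, assuming hypothetical numbers: a 5% baseline conversion rate, a hoped-for lift to 6%, and the conventional 95% confidence and 80% power.

```python
# Rough sample size per arm to detect a 5% -> 6% conversion lift
# (hypothetical numbers; a sketch, not a substitute for a statistician).
from math import sqrt, ceil
from statistics import NormalDist

p1, p2 = 0.05, 0.06          # baseline and hoped-for conversion rates
alpha, power = 0.05, 0.80    # conventional significance level and power

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_power = NormalDist().inv_cdf(power)           # ~0.84
p_bar = (p1 + p2) / 2                           # pooled rate under the null

n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
      + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)
print(f"~{ceil(n):,} users per arm")            # on the order of 8,000
```

The exact formula matters less than the scale: detecting a one-point lift takes thousands of users per arm, which is why interviews and experiments answer different questions.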
The failure mode is using “we talked to ten people” to justify a precise forecast. The other failure mode is doing zero conversations because you are waiting for a perfect study. Match your learning method to the decision at hand.
Turn interviews into decisions, not theater
After interviews, produce artifacts that compel action: problem statements tied to evidence, clips or quotes with context, a short list of surprises versus assumptions. If nothing changed in your roadmap or your questions, the research was expensive socializing.
Synthesis is a skill separate from interviewing. Themes are not “things we heard twice” by accident—they are patterns that survive scrutiny. Ask a cynical teammate to challenge your top three takeaways. If they cannot, you might be narrating instead of analyzing.
Share enough raw material that others can disagree constructively: short clips (with consent), anonymized notes, or a highlight reel with timestamps. Transparency beats a polished slide that hides the messy middle.
You are also allowed to conclude, “We still do not know.” That is a valid output if it comes with the next experiment—what you will try, what signal you need, and what it would take to change your mind.
Finally, close the loop with participants when you can. Users who spent time with you deserve to know what changed—not as marketing, but as respect. That practice also keeps your organization honest: promises made in interviews should not evaporate quietly.
This connects directly to what comes next
User understanding is the foundation, but it is not the same as problem discovery. You can understand people deeply and still chase the wrong problem—or a real problem that is not worth your company’s bet.
Next lesson: how to validate that a problem is real, important, and solvable before you fall in love with a solution.