Most product failures are not engineering failures. They are elegant solutions to low-value problems—or to problems that do not exist outside the conference room.
Teams skip problem discovery because solutions feel productive. Demos impress. Roadmaps comfort executives. Problem work is squishy until you get good at it, so organizations reward motion over clarity.
Your job is to slow the right moments down: before major build commitments, before platform bets, before you lock a narrative the sales team will sell regardless of truth.
Jumping to solutions is the default failure mode
The pattern is familiar. Someone sees a competitor ship a feature. A large customer threatens to churn. A leader returns from a conference energized. The team brainstorms, attaches a metric, and starts building.
Sometimes that works. More often, the team discovers late that:
- The “problem” was a one-off request disguised as strategy.
- Users work around the issue cheaply and will not switch behavior.
- The pain is real but owned by a buyer who never uses the product.
- The org cannot deliver the follow-on work that makes the fix stick.
Problem discovery is the discipline of asking, before we invest, whether the problem is real, frequent, severe, and aligned with what we can win at.
A real problem is not the same as a loud complaint
Complaints are data, but they are filtered through who speaks loudest, who your sales team visits most, and which customers have enterprise leverage.
A real problem, for product purposes, shows up across multiple independent signals: interviews with specifics, support themes (not one ticket), usage friction, funnel drop-offs, churn reasons, competitive losses you trust, and sometimes economic willingness to pay.
If only one VP mentioned it, treat it as a clue—not a mandate.
Severity and frequency are the boring axes that save you
A problem can be real but rare. It can be frequent but low impact. Your prioritization later will need numbers or structured judgment; problem discovery supplies the raw material.
- Frequency: How often does this situation occur for a meaningful segment? Daily, weekly, quarterly?
- Severity: What happens when it fails? Lost revenue? Legal risk? Hours of manual work? Public embarrassment? Minor annoyance?
- Willingness to pay (money, time, data, or organizational change): Would users adopt a new workflow, pay more, or tolerate migration pain? If the answer is theoretically yes but practically no, you do not have a business problem—you have a polite audience.
Be suspicious when severity is described in adjectives (“huge,” “critical”) without consequences. Ask: What broke? What did it cost? Who felt it?
A quick sanity check you can run in a meeting: If we magically solved this tonight, what measurable line moves? If nobody can name a plausible metric movement within a quarter, you may still pursue the work—but you should know you are betting on faith, not impact.
Another useful question: Who stops what they are doing when this fails? If the answer is “nobody,” you are often looking at an annoyance, not a problem that commands build time.
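When several candidate problems compete, forcing these axes into a rough score can surface disagreement early. Here is a minimal sketch, assuming an illustrative 1–5 scale on each axis; the numbers and example problems are made up, and the output is a conversation starter, not a verdict.

```python
from dataclasses import dataclass

@dataclass
class CandidateProblem:
    """One candidate problem, scored on the three axes (illustrative 1-5 scale)."""
    name: str
    frequency: int              # 5 = daily for a meaningful segment, 1 = rare
    severity: int               # 5 = lost revenue or legal risk, 1 = minor annoyance
    willingness_to_change: int  # 5 = will pay or migrate, 1 = polite audience

    def score(self) -> int:
        # Multiplicative on purpose: a 1 on any axis should drag the whole bet down.
        return self.frequency * self.severity * self.willingness_to_change

candidates = [
    CandidateProblem("Monday reconciliation across three systems", 4, 4, 3),
    CandidateProblem("More dashboards requested by one VP", 3, 2, 1),
]

# Highest score first; a low willingness-to-change score sinks even frequent pain.
for p in sorted(candidates, key=lambda p: p.score(), reverse=True):
    print(f"{p.score():>3}  {p.name}")
```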
Symptoms masquerade as problems—and seduce you into local fixes
A symptom is what people name on the surface. The problem is the underlying cause—or the job not getting done.
Example: “We need more dashboards” sounds like a problem. Often it is a symptom of distrust in the underlying data, unclear definitions, or a political need for visibility in a negotiation. Shipping more dashboards can increase clutter without increasing confidence.
Example: “Users do not finish onboarding” is a symptom. The problem might be value too late, unclear prerequisites, fear of importing data, or a mismatch between marketing promise and product reality.
If you solve symptoms without understanding structure, you accumulate product complexity without reducing human pain.
Opportunity-solution trees keep you honest about the shape of the bet
Teresa Torres popularized opportunity-solution trees for a reason: they make the hierarchy explicit. At the top: an outcome you want. Below: opportunities (problems or needs) that might drive that outcome. Below that: solutions—many possible—each testable.
The value is not the diagram. It is the conversation: Which opportunity are we actually pursuing? If your roadmap is only a list of solutions, you cannot explain why those solutions exist. If you cannot explain why, you cannot defend tradeoffs when resources tighten.
Use the tree to prevent the common failure mode: a single cherished solution searching for a justification. Instead, keep multiple solution ideas attached to the same opportunity until evidence narrows the field.
Walk through a concrete slice. Outcome: improve paid conversion among teams that start a trial but stall before inviting colleagues. Opportunities might include: value arrives too late, permissions scare admins, pricing is unclear at the moment of intent, or the product punishes incomplete data. Under “value arrives too late,” solutions could be a guided first success path, template libraries, or human onboarding—different bets on the same opportunity. The tree does not pick the winner. It stops the team from debating three unrelated features as if they were one argument.
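To make that hierarchy concrete, here is a minimal sketch of the same slice as a data structure; nothing about the method requires code, but rendering outcome, opportunities, and solutions as explicit parent-child links shows what the diagram is actually asserting.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in an opportunity-solution tree: an outcome, opportunity, or solution."""
    label: str
    children: list["Node"] = field(default_factory=list)

def render(node: Node, depth: int = 0) -> None:
    """Print the tree with indentation reflecting the hierarchy."""
    print("  " * depth + node.label)
    for child in node.children:
        render(child, depth + 1)

tree = Node("Outcome: improve paid conversion for stalled trials", [
    Node("Opportunity: value arrives too late", [
        Node("Solution: guided first-success path"),
        Node("Solution: template libraries"),
        Node("Solution: human onboarding"),
    ]),
    Node("Opportunity: permissions scare admins"),
    Node("Opportunity: pricing unclear at the moment of intent"),
    Node("Opportunity: product punishes incomplete data"),
])

render(tree)
```

Note that three solutions stay attached to a single opportunity. That is the discipline the tree enforces: solutions compete within an opportunity, not against unrelated features.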
Assumption mapping tells you what must be true—and what to test first
Before you build, list the beliefs that would have to be true for this bet to work. Examples:
- This segment experiences the pain at least weekly.
- They cannot solve it adequately with existing tools.
- They will trust us with the data required.
- We can deliver the core experience in one quarter.
Not all assumptions are equal. Some are fatal if wrong; others are painful but reversible. Map them by importance and evidence strength. Test the fatal, weakly supported beliefs first—the cheapest experiments that buy the most risk reduction.
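A minimal sketch of that mapping, using the assumptions above and an illustrative 1–5 scale for both importance and evidence strength; sorting puts the fatal, weakly supported beliefs at the top of the test queue.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    claim: str
    importance: int  # 5 = fatal if wrong, 1 = painful but reversible
    evidence: int    # 5 = well supported, 1 = pure belief

assumptions = [
    Assumption("This segment experiences the pain at least weekly", 5, 2),
    Assumption("They cannot solve it adequately with existing tools", 4, 3),
    Assumption("They will trust us with the data required", 5, 1),
    Assumption("We can deliver the core experience in one quarter", 3, 4),
]

# Test order: highest importance first, weakest evidence first within a tier.
for a in sorted(assumptions, key=lambda a: (-a.importance, a.evidence)):
    print(f"importance={a.importance} evidence={a.evidence}  test: {a.claim}")
```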
This is where “lean” thinking is genuinely useful: not as a religion of MVPs, but as sequenced learning.
Problem discovery experiments do not have to be code. They can be concierge tests (you manually deliver the outcome), Wizard of Oz prototypes (human behind a thin UI), pricing conversations with real numbers, or smoke tests that measure click-through on a credible landing page. Choose the cheapest instrument that falsifies the riskiest belief.
Beware the trap of building an elaborate experiment to avoid talking to users. Sometimes the right next step is ten conversations, not a new microservice.
When to trust your gut—and when that is an excuse
Experienced PMs develop intuition. Pattern recognition is real. It is also contaminated by ego, recent anecdotes, and organizational politics.
Use your gut to generate hypotheses, not to end debate. When stakes are high, novelty is high, or the failure mode is expensive, require evidence. When stakes are low and reversal is cheap, a reasoned gut call can be faster than research theater.
A practical rule: the more irreversible the decision, the more problem validation you owe. Platform choices, pricing architecture, major policy shifts—these deserve heavy upstream clarity. Small UI experiments can lean on faster loops.
If every decision is “trust me,” you are not a PM; you are a gambler with a calendar.
Problem discovery is a team sport—PMs coordinate, they do not monopolize
The best discovery environments let engineers and designers hear pain directly. That reduces translation loss and builds shared ownership.
Your role is often to protect discovery time from the sprint machinery, to synthesize across conversations, and to prevent the loudest voice in sales from becoming the only voice in product.
That does not mean ignoring sales or support. It means converting their input into testable claims. “Customer X needs this” becomes: which segment does X represent, what outcome are they chasing, and what would we observe if this problem were widespread?
If you cannot state the problem, you are not ready to prioritize solutions
Write the problem as a single paragraph a smart stranger could understand, without mentioning your feature idea. Include who, when, consequence, and current workaround. If you cannot, you are still foggy.
Compare:
- Weak: “We need AI-powered insights.”
- Stronger: “Mid-market ops managers spend two hours every Monday reconciling three systems because exports disagree; they trust spreadsheets more than our app.”
The second statement invites measurement and solution diversity. The first invites hype.
Keep a problem brief attached to major bets: evidence list (with dates), affected segments, known workarounds, and “what we would need to see to declare this solved.” When leadership changes or memory fades, that brief is how you prevent zombie initiatives from resurrecting without scrutiny.
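There is no standard format for the brief, but keeping it structured makes rot visible. A minimal sketch of one possible shape, with illustrative entries; the field names are not a convention, just the elements listed above.

```python
problem_brief = {
    "statement": (
        "Mid-market ops managers spend two hours every Monday reconciling "
        "three systems because exports disagree; they trust spreadsheets "
        "more than our app."
    ),
    # Every evidence item carries a date so stale claims are easy to spot.
    "evidence": [
        {"date": "2025-03-04", "source": "support themes",
         "note": "recurring reconciliation tickets"},
        {"date": "2025-03-18", "source": "interviews",
         "note": "managers described the spreadsheet workaround unprompted"},
    ],
    "affected_segments": ["mid-market operations teams"],
    "known_workarounds": ["manual Monday spreadsheet reconciliation"],
    "solved_when": "weekly reconciliation time drops below 15 minutes for the segment",
}
```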
A compact checklist before you fund the build
You do not need a perfect scorecard. You do need to have wrestled with the basics:
- Existence: Do we have multiple independent signals—not one loud customer?
- Importance: Does it hurt enough that people will change behavior?
- Fit: Are we credible owners of this problem, or are we kidding ourselves?
- Solvability: Is a solution plausible in our constraints, including adoption cost?
- Differentiation: If we solve it, do we win anything—or does parity merely remove an objection?
If you cannot answer one of these, that is your next discovery task—not another design sprint on a solution sketch.
Connect forward: validated problems still compete for scarce capacity
Problem discovery narrows the field from “everything” to “worth discussing.” It does not absolve you of prioritization. Multiple real problems will still collide in one roadmap, and stakeholders will still disagree about urgency.
Next lesson: how to prioritize with frameworks—without pretending a spreadsheet removes the need for judgment.