Discovery methods

The real work of understanding users happens between humans. Nobody learns what a person actually struggles with by chatting to an AI. You learn it in interviews, field visits, observations, and immersions. What the Business Analysis skill can do is notice when your understanding has a gap and suggest the method that will close it fastest.

Every card on this page is a method the BA agent will bring up during your session. You take the method into the field, run it with real users, and come back with findings the agent can turn into an Exploration board entry.

Index

Interviews and conversations

Observation and immersion

Synthesis and mapping

Qualitative interview

An open conversation with one user built around narrative and specifics. Done right, it gives you the user's real mental model in their own words.

When to reach for it. You cannot describe the user concretely. You can name a user but have no idea what they really struggle with. Your own answers to "why is that a problem" feel like guesses instead of evidence.

How to run it.

  1. Two people per session. One moderates, one takes notes. Do not try to do both alone; you will lose half the data.
  2. Open with an invitation to narrate. "Walk me through a typical day" or "Tell me about the last time this happened to you."
  3. Ask open questions only. No yes/no. No leading questions.
  4. When the user gives a generic answer, ask for specifics. "Can you show me how you did that?" "When was the last time this happened?"
  5. Ask about emotion. "How did that feel?" "What was the most frustrating part?"
  6. Transcribe. Do not interpret during the interview. The quotes are what matters, not your conclusions about them.
  7. Within 30 minutes of finishing, write down four or five raw observations, not yet cleaned up into insights.

Team and time. Two people per session. 60 to 90 minutes per interview. 30 minutes of synthesis straight after.

Things that go wrong.

  • Explaining the product you are building. That poisons every answer afterwards.
  • Filling every silence. Three seconds of quiet usually produces more follow-up than any clever next question.
  • Sticking to your script when the user opens a more interesting thread. Follow the thread.
  • Writing down your interpretations instead of the exact words the user used.

5-Why probe. When a surface answer appears, ask "why is that a problem?" until you hit something you did not expect. You do not always get to five, but you almost always get to something deeper than the first answer.

What to bring back. Four or five raw observations per interview plus two or three direct quotes you found striking.

Explorative interviews

Short, topic-spanning conversations with users you do not already have a relationship with. Use them to map the problem space before committing to deeper research.

When to reach for it. You have a fuzzy problem statement and no clear idea which user segment to focus on. You suspect several user groups might be affected but cannot rank them. You want to verify the problem exists at all before investing in long interviews.

How to run it.

  1. Write 5 to 8 broad questions that cover the whole problem space.
  2. Find users quickly. LinkedIn, user forums, your personal network, or intercept interviews in a place where the user naturally shows up.
  3. Run 7 to 10 short conversations, 20 to 30 minutes each. Do not try to be exhaustive.
  4. After each interview, write down the three things that most surprised you.
  5. After the batch, cluster the surprises. The clusters tell you where to go deeper next.

Team and time. One or two people. 20 to 30 minutes per interview. Plan the whole batch across one or two weeks.

Things that go wrong.

  • Stopping after three interviews because patterns already seem to appear. Run four more to confirm.
  • Interviewing only people you already know. The bias is too strong.
  • Writing the questions so stiffly that they kill the conversation. Read them aloud before the first interview, rewrite anything that sounds awkward.

What to bring back. The strongest insight clusters from the batch and a short note on which user segment felt most promising.

Extreme users

Interviews with people at the edges of the user population. Power users who do the thing far more than average, and refusers who will not do it at all. Their motivations and objections bracket the real design space.

When to reach for it. Your average-user interviews keep producing generic answers. The user group feels too uniform to design for. You cannot find a real emotional driver behind the problem.

How to run it.

  1. Pick the axis of extreme. Frequency, intensity, refusal, expertise.
  2. Find two to four power users, people who use the product or process ten times more than average.
  3. Find two to four non-users, people who tried it and quit, or never started.
  4. Interview each with the qualitative-interview rhythm. Add one probe: "Why you and not everyone else?"
  5. Synthesise. The power-user motivations and the non-user objections together show you the real span of the problem.

Team and time. Two people. 60 to 90 minutes per interview. Four to eight interviews in total usually cover it.

Things that go wrong.

  • Dismissing weird habits as outliers. That is exactly where the unmet need lives.
  • Underestimating non-user recruitment. It is harder than finding power users and worth the extra effort.
  • Letting power users sell you on features. Separate what they want from what they do.

What to bring back. A summary of the two edge groups, the shared motivations, and the shared objections.

Expert conversations

A focused interview with one domain expert. Used to validate feasibility, surface regulatory or technical constraints, and correct assumptions you cannot check yourself.

When to reach for it. The problem has constraints you cannot evaluate on your own (legal, medical, safety, regulatory, deep technical). You suspect a proposed solution might be infeasible. Internal stakeholders disagree on what is possible.

How to run it.

  1. Pick the expert for a specific question, not a general brain-pick.
  2. Prepare 3 to 5 concrete questions. "What do you think?" is not one of them.
  3. Book 30 minutes. Experts are busy; be precise.
  4. Open with your current understanding so the expert can correct you fast.
  5. Ask the single thing you are most uncertain about. "What would you be worried about if we built this?"
  6. Write down the exact phrases the expert uses. Vocabulary matters in domain work.

Team and time. One or two people. 30 to 45 minutes per expert. Two or three experts is usually enough.

Things that go wrong.

  • Going external before going internal. Internal experts give you the vocabulary you need to talk to external ones without wasting their time.
  • Asking "what do you think" and letting the expert wander. You will not get useful data out of a 30-minute ramble.
  • Not writing the answers down verbatim. You will remember the paraphrase and forget the constraint.

What to bring back. A one-page constraint summary sorted into technical, organisational, financial, and a short list of "things I was wrong about" from your own assumptions.

Fly on the wall

Silent, non-intrusive observation of real user behaviour in its natural context. Users do things they would never describe in an interview, because they consider them too obvious or too embarrassing to mention.

When to reach for it. Users describe a workflow that contradicts what the data shows. Interviews produce the ideal answer instead of the real one. You need to understand a routine so normal the user cannot describe it.

How to run it.

  1. Pick one real context. A shop floor, an office, a kitchen, a service counter.
  2. Get permission to observe. Be visible but silent.
  3. Sit for at least 60 minutes. The first 15 are warm-up. The real behaviour appears later.
  4. Note three columns: what they do, what they work around, where they hesitate.
  5. Never interrupt. Never ask "why did you do that?" during the observation.
  6. Debrief with the user after the session using two or three clarifying questions.

Team and time. One or two observers. At least 60 minutes per session. Two or three sessions before the patterns stabilise.

Things that go wrong.

  • Bringing a laptop. It makes you intrusive and it tempts you to write essays instead of observations.
  • Helping. The moment you help, you stop observing.
  • Treating the workaround as a minor detail. The workaround is the insight.

What to bring back. A list of observed workarounds and a short description of the hesitation points you saw most often.

Self-test

Step into the user's process yourself. Walk the journey, use the product, call the hotline, work an afternoon in their role. Your own body and emotions teach you what interview transcripts cannot.

When to reach for it. You have never personally experienced the problem and your interview questions sound generic. The domain is unfamiliar and you need vocabulary fast. You want to stress-test an assumption about how easy or how hard something really is.

How to run it.

  1. Pick a concrete situation. One trip, one purchase, one call, one shift.
  2. Do it for real, not as a simulation.
  3. Write down every friction point as it happens. Phone notes are fine.
  4. Finish the whole process. Do not stop at the first failure.
  5. Debrief yourself within 24 hours. Surprises, frustrations, points where you wanted to give up.
  6. Verify your conclusions with real users afterwards. Your own experience is one data point, not the truth.

Team and time. One person. Between one hour and several days depending on the situation.

Things that go wrong.

  • Bringing a colleague. You will comment on the experience instead of having it.
  • Using the self-test as a replacement for real user research. It only works as a way to get better at interviewing users.
  • Ignoring the emotional data. "This was humiliating" is more useful than "step 4 was slow."

What to bring back. A frustration log with timestamps and a short list of assumptions you now believe were wrong.

Cultural probes

A small kit (diary, camera prompts, a few tasks) that users take home for days or weeks to document their own experience. It reaches moments you would never see in an interview.

When to reach for it. The behaviour you care about happens in private, at home, in bed, in the bathroom. The relevant moments are spread across days or weeks and cannot be captured in one session. You are designing for emotional or identity-related behaviour that users will self-censor in a live interview.

How to run it.

  1. Design a light kit. Three to five prompts, a disposable or phone camera, a short diary.
  2. Recruit four to six users willing to spend one or two weeks on it.
  3. Give clear, simple instructions. "Photograph one thing a day that made you feel X."
  4. Check in mid-week so participants stay engaged.
  5. Collect the kits and run a 45-minute debrief interview with each participant about their own artifacts.
  6. Cluster the artifacts across participants to find shared themes.

Team and time. One or two researchers. One to two weeks of user participation. 45 minutes debrief per user.

Things that go wrong.

  • Making the kit too heavy. Ten prompts produce less data than three, because participants quit.
  • Skipping the debrief. The real insights come from the conversation about the artifacts, not the artifacts themselves.
  • Forcing participants to share everything. Self-censorship is also data.

What to bring back. The strongest themes from the cluster and the specific artifacts that support each one.

User motivation analysis

A synthesis step. You take a pile of raw interview and observation data and pull out the functional needs, the emotional needs, and the obstacles that block them.

When to reach for it. You have interview notes but cannot tell the agent what the user actually wants. Every need you can name sounds the same. You cannot rank which need is most pressing.

How to run it.

  1. Cluster every raw insight on a wall or in a shared doc without judging.
  2. Split each cluster into three piles. Functional (what they try to do), emotional (how they want to feel), obstacle (what blocks them).
  3. For each functional need, write the emotional need that lives underneath. Every functional need has one.
  4. Rank needs by frequency times intensity. The top of that list drives the next phase.
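The ranking in step 4 can be sketched in a few lines of Python. The needs and their scores below are invented for illustration; in practice, frequency (how many interviews mentioned the need) and intensity (how much pain it caused, say on a 1 to 5 scale) both come straight from your interview notes.

```python
def rank_needs(needs):
    """Sort needs by frequency times intensity, most pressing first."""
    return sorted(needs, key=lambda n: n["frequency"] * n["intensity"], reverse=True)

# Hypothetical output of step 2/3, one entry per functional need.
needs = [
    {"name": "export data quickly", "frequency": 7, "intensity": 2},
    {"name": "trust the numbers",   "frequency": 3, "intensity": 5},
    {"name": "avoid double entry",  "frequency": 5, "intensity": 4},
]

for n in rank_needs(needs):
    print(n["name"], n["frequency"] * n["intensity"])
```

Note how the product changes the order: "export data quickly" was mentioned most often, but the lower-frequency, higher-intensity needs outrank it. That is the point of weighting by intensity, not popularity.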

Team and time. Two to four people. Two or three hours after an interview batch.

Things that go wrong.

  • Doing the synthesis weeks later. The tacit context evaporates fast; do it within a day or two.
  • Treating every stated obstacle as a real blocker. Users often invent reasons to justify not solving a problem they do not really have.
  • Ranking by popularity instead of intensity. One user in severe pain is more informative than ten users in mild annoyance.

What to bring back. The top functional needs with their paired emotional needs, and the biggest obstacles.

Persona synthesis cluster

Affinity-cluster all interview insights into groups, then draft one preliminary persona per group. A way to get useful persona candidates while research is still ongoing.

When to reach for it. You have too much interview data to force into a single persona. The users you interviewed clearly fall into several groups. You are not ready to commit to final personas yet.

How to run it.

  1. Put every interview note on a shared canvas, one sticky per raw insight.
  2. Cluster by similarity of behaviour, goal, or context. Not by demographic.
  3. Name each cluster with one short phrase. "The reluctant adopter." "The power user with no time."
  4. For each cluster, draft a persona seed of three or four lines describing the shared pattern.
  5. Decide which clusters deserve a full persona. Drop or merge the rest.

Team and time. Two to four people. Two or three hours.

Things that go wrong.

  • Clustering by job title instead of behaviour. "Senior engineers who distrust AI tools" is useful. "Engineers" is not.
  • Stopping at the first pass. The second pass is where the real structure usually appears.
  • Ending with seven clusters. You are splitting too fine. Merge until you have three or four.

What to bring back. The cluster seeds with names and one-paragraph descriptions.

Persona

A named, concrete description of a typical user built from interview evidence. A reference point every team member can check decisions against.

When to reach for it. You have enough interview data for a cluster seed to exist. Decisions keep drifting because the user is too abstract. Team members hold different mental models of the same user.

How to run it.

  1. Pick one persona seed from the synthesis cluster.
  2. Give the persona a name. Add a photo or a rough sketch, even a bad one.
  3. Fill in the relevant parts. Goals, motivations, frustrations, a typical day in the context of the problem. Skip demographics unless they matter.
  4. Add one direct quote from the interviews that captures the persona's voice.
  5. Write the single pain and the single need that are most relevant to the project.
  6. Pin the persona visibly so every design decision can be checked against it.

Team and time. One to six people per persona. 30 to 60 minutes per persona after the synthesis cluster.

Things that go wrong.

  • Inventing facts. Every line needs to come from an interview quote or observation.
  • Writing a biography. A persona longer than one page stops being a design tool.
  • Leaning on demographic clichés. Age, gender, and job title are the least predictive dimensions. Use them only where they matter.
  • Skipping the quote. Without a voice the persona never becomes real.

What to bring back. The finished persona one-pager.

Value proposition chain

A map of the actors between your direct user and the final beneficiary. Use it when value passes through several hands and you suspect it leaks somewhere along the way.

When to reach for it. The project is B2B and the buyer is not the end user. Value flows through several intermediaries (supplier, platform, reseller, customer, end user). You suspect the wrong actor is getting paid for the value.

How to run it.

  1. Draw every actor in the chain on one line. Who hands what to whom.
  2. For each hand-off, label the value that passes, the friction, and who captures the value.
  3. Mark every point where value is lost (broken hand-off, missing information, a bad experience).
  4. Identify the actor whose problem you are actually solving. That actor becomes your primary persona.

Team and time. Two to four people. One or two hours.

Things that go wrong.

  • Assuming the primary persona is the one who pays. Very often the most underserved actor in the chain drives the decision if you reach them directly.
  • Mixing friction with value loss on the same sticky. Friction is a design problem. Value loss is a business-model problem. Keep them separate.
  • Drawing a branching graph instead of a line. The line is where the story lives.

What to bring back. The chain diagram, the leak points, and a short note on which actor you propose as the primary persona.

Research mind map

A radial map that decomposes a fuzzy initial question into research fields, sub-questions, and data sources. Used to decide what to research first.

When to reach for it. The problem is too broad to interview anyone about. You keep getting lost in tangential questions. Several researchers are splitting the work and need alignment on scope.

How to run it.

  1. Write the central question in the middle of the map.
  2. Branch into research fields. Usually four to six (users, market, technology, regulation, ecosystem, history).
  3. For each field, branch into two to four sub-questions. Make them interview-ready.
  4. Prioritise the fields. Not everything gets researched. Pick a few to actually work on.
  5. Assign each field to a person or a pair.

Team and time. Two to four people. 60 to 90 minutes at the start of a research phase.

Things that go wrong.

  • Drawing the branches before writing the questions. The structure has to follow the content.
  • Writing questions nobody can answer. If it cannot be answered, it is not a question; it is a problem statement.
  • Treating the map as a one-time artifact. Revisit it after every interview batch. The shape changes.

What to bring back. The prioritised field list and the two or three sub-questions you are going into the field with first.

Stakeholder map

A visual map of everyone who is affected by, has influence on, or has information about the project. It tells you who to interview, who to align with, and who can quietly kill the project if you ignore them.

When to reach for it. The project touches several departments or external organisations. You are about to start interviews and do not know who to talk to. You sense political friction that could kill the project later.

How to run it.

  1. Brainstorm every actor on stickies. Users, customers, internal teams, suppliers, regulators, competitors, managers.
  2. Place each sticky on an influence-by-interest matrix (2x2).
  3. Draw the relationships between stickies and label each relationship with one word.
  4. Prioritise. High influence and high interest means "interview first and align early."
  5. Look for blind spots. A cell with too few stickies usually hides a missing perspective.
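The prioritisation logic of steps 2 and 4 can be sketched as a small function. The names, ratings, and the 1 to 5 scale below are invented for the example; the quadrant actions follow the matrix described above.

```python
def quadrant(influence, interest, threshold=3):
    """Map 1-5 influence and interest ratings to a 2x2 cell and an action."""
    hi_inf, hi_int = influence >= threshold, interest >= threshold
    if hi_inf and hi_int:
        return "interview first and align early"
    if hi_inf and not hi_int:
        return "keep satisfied -- the dangerous cell"
    if not hi_inf and hi_int:
        return "keep informed"
    return "monitor"

# Hypothetical stakeholders as (influence, interest) ratings.
stakeholders = {
    "Head of Operations": (5, 4),
    "Compliance officer": (4, 1),
    "Support team lead":  (2, 5),
}

for name, (inf, intr) in stakeholders.items():
    print(f"{name}: {quadrant(inf, intr)}")
```

The point of writing it down this explicitly, even on paper, is that every stakeholder lands in exactly one cell with exactly one default action attached.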

Team and time. The core project team plus one person from the customer side. 45 to 60 minutes.

Things that go wrong.

  • Ignoring the high-influence low-interest cell. That is the most dangerous cell. Those people can kill the project with a single sentence, often because nobody thought to involve them.
  • Never updating the map. Stakeholders change as the project evolves. Redraw it after every milestone.
  • Writing role names instead of actual names. A role without an owner is not a stakeholder.

What to bring back. The matrix with names, influence/interest ratings, and a short note on the stakeholders you need to interview first.

Market and trend analysis

A clustering of megatrends, technology trends, consumer trends, competitors, and adjacent markets. Used to place the project in a wider context and spot opportunity fields.

When to reach for it. You cannot name the competitors in the space. The project might be obsolete before it ships because of an underlying trend. You want to find adjacent opportunities beyond the obvious problem.

How to run it.

  1. Desk research. Collect 20 to 30 trend items (articles, reports, press releases, product launches).
  2. Cluster them into four to six themes.
  3. Place competitors and partners inside the themes.
  4. Identify potential fields where trends intersect with your users' needs.
  5. Write one sentence per potential field describing the opportunity it implies.

Team and time. One or two people. One day of desk research plus two hours of clustering.

Things that go wrong.

  • Using the clustering as a collection exercise instead of a filter. If a trend does not fit any theme, it is probably not relevant to your project right now.
  • Staring only at direct competitors. The real threat usually comes from a different category. Spend at least as much time on adjacent markets.
  • Forgetting to timestamp the sources. Trends go stale fast.

What to bring back. The trend clusters with one potential field each.

User journey

A stage-by-stage map of what the user thinks, does, feels, and touches throughout an interaction with the product or service. It surfaces pain points and opportunity moments.

When to reach for it. The user's experience spans several touchpoints or channels. You suspect the problem lives in one phase of usage rather than in the product itself. You need to communicate the end-to-end experience to the team.

How to run it.

  1. Define phases. Often "before, during, after" is enough. A complex journey may need five to seven stages.
  2. For each phase, fill four lanes. Actions, thoughts, emotions, touchpoints.
  3. Mark pain points where the experience breaks, slows down, or causes emotional friction.
  4. Mark opportunity moments where a small improvement has a large emotional payoff.
  5. Focus design energy on the emotionally intense phases, not the functionally long ones.
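The emotion lane from step 2 and the pain-point marking from step 3 can be sketched as a tiny data structure. The phases and scores below are invented; a score runs from -2 (very frustrated) to +2 (delighted), and the helper flags the dips where design energy should go.

```python
# One (phase, emotion) pair per journey stage, in order.
journey = [
    ("discover",    +1),
    ("sign up",     -2),
    ("first use",    0),
    ("invite team", -1),
    ("routine use", +2),
]

def pain_points(journey, threshold=0):
    """Return the phases whose emotion score dips below the threshold."""
    return [phase for phase, emotion in journey if emotion < threshold]

# The emotionally intense phases the team should intervene in first.
print(pain_points(journey))
```

A sticky-note journey wall encodes exactly this: the emotion line is the lane, and the dips below the neutral line are the phases worth a design sprint.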

Team and time. Two to four people. One to two hours per persona.

Things that go wrong.

  • Drawing the journey without an emotion line. The dips in the emotion line tell you where to intervene.
  • Mixing several personas in one journey. The pain points become fuzzy.
  • Filling in phases without interview quotes. Without evidence the journey turns into wishful thinking.

What to bring back. The journey diagram, the top pain points, and the opportunity moments worth exploring.

Next steps