
Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value - Teresa Torres


About the book

[Book cover image]

Personal summary

Mindsets

  1. Outcome Oriented, think in outcomes rather than outputs
  2. Customer Centric: create customer value as well as business value
  3. Collaborative: leverage the expertise and knowledge of product trio to make decisions
  4. Visual: draw, externalize your thinking, and map what you know
  5. Experimental: identify assumptions and gather evidence
  6. Continuous: infuse discovery continuously throughout your development process

Actions

  • build product trio: product manager, product designer, software engineer
  • build the keystone habit: continuous interviewing weekly
  • work backward to find
    • product outcome: "If our customers had this solution, what would it do for them?"
    • business outcome: "If we shipped this feature, what value would it create for our business?"
  • show the work using visualization: boxes and arrows, opportunity solution tree, story maps, interview snapshot
  • focus on problems before jumping to solutions, prioritize "leap of faith" assumptions
  • start small, interview often, test fast experiments and prototypes before building, iterate
  • measure impact, combine qualitative and quantitative data for decisions

Quotes

Part I: What Is Continuous Discovery?

Chapter 1: The What and Why of Continuous Discovery

The Prerequisite Mindsets

1. Outcome-oriented: The first mindset is both a mindset and a habit. You'll learn more about the habit in the coming chapters, but the mindset requires that you start thinking in outcomes rather than outputs. That means rather than defining your success by the code that you ship (your output), you define success as the value that code creates for your customers and for your business (the outcomes). Rather than measuring value in features and bells and whistles, we measure success in impact: the impact we have had on our customers' lives and the impact we have had on the sustainability and growth of our business.

2. Customer-centric: The second mindset places the customer at the center of our world. It requires that we not lose sight of the fact (even though many companies have) that the purpose of business is to create and serve a customer. We elevate customer needs to be on par with business needs and focus on creating customer value as well as business value.

3. Collaborative: The third mindset requires that you embrace the cross-functional nature of digital product work and reject the siloed model, where we hand off deliverables through stage gates. Rather than the product manager decides, the designer designs, and the engineer codes, we embrace a model where we make team decisions while leveraging the expertise and knowledge that we each bring to those decisions.

4. Visual: The fourth mindset encourages us to step beyond the comfort of spoken and written language and to tap into our immense power as spatial thinkers. The habits in this book will encourage you to draw, to externalize your thinking, and to map what you know. Cognitive psychologists have shown in study after study that human beings have an immense capacity for spatial reasoning. The habits in this book will help you tap into that capacity.

5. Experimental: The fifth mindset encourages you to put on your scientific-thinking hat. Many of us may not have scientific training, but, to do discovery well, we need to learn to think like scientists: identifying assumptions and gathering evidence. The habits in this book will help you develop and hone an experimental mindset.

6. Continuous: And finally, these habits will help you evolve from a project mindset to a continuous mindset. Rather than thinking about discovery as something that we do at the beginning of a project, you will learn to infuse discovery continuously throughout your development process. This will ensure that you are always able to get fast answers to your discovery questions, helping to ensure that you are building something that your customers want and will enjoy.

Chapter 2: A Common Framework For Continuous Discovery

Begin With the End in Mind

[Product trio diagram]

Product Trio

As our product-discovery methods evolve, we are shifting from an output mindset to an outcome mindset. Rather than obsessing about features (outputs), we are shifting our focus to the impact those features have on both our customers and our business (outcomes). Starting with outcomes, rather than outputs, is what lays the foundation for product success.

When a product trio is tasked with delivering an outcome, the business is clearly communicating what value the team can create for the business. And when the business leaves it up to the team to explore the best outputs that might drive that outcome, they are giving the team the latitude they need to create value for the customer.

When a product trio is tasked with an outcome, they have a choice. They can choose to engage with customers, do the work required to truly understand their customers' context, and focus on creating value for their customers. Or they can take shortcuts: they can focus on creating business value at the cost of customers. The organizational context in which the product trio works will have a big impact on which choice the product trio will make. Some teams, however, choose to take shortcuts because they simply don't know another way of working. The framework in this chapter and the habits described in this book will help you resolve the tension between business needs and customer needs so that you can create value for your customers and your business.

The Challenge of Driving Outcomes

Most product trios don't have a lot of experience with driving outcomes. They grew up in a world where they were told what to build. Or they were asked to generate outputs, with little thought for what impact those outputs had. So, when we shift from an output mindset to an outcome mindset, we have to relearn how to do our jobs.

Unfortunately, it's not as simple as talking to customers every week. That's a good start. But we also need to consider the rest of our continuous-discovery definition:

  • At a minimum, weekly touchpoints with customers
  • By the team building the product
  • Where they conduct small research activities
  • In pursuit of a desired outcome

If product trios tasked with delivering a desired outcome want to pursue business value by creating customer value, they'll need to work to frame the problem in a customer-centric way. They'll need to discover the customer needs, pain points, and desires that, if addressed, would drive their business outcome.

To reach their desired outcome, a product trio must discover and explore the opportunity space. The opportunity space, however, is infinite. This is precisely what makes reaching our desired outcome an ill-structured problem. How the team defines and structures the opportunity space is exactly how they give structure to the ill-structured problem of reaching their desired outcome.

The implication for product trios is that two of the most important steps for reaching our desired outcome are first, how we map out and structure the opportunity space, and second, how we select which opportunities to pursue. Unfortunately, many product trios skip these steps altogether. They start with an outcome and simply start generating ideas. We do have to get to solutions: shipping code is how we ship value to our customers and create value for our business. But the right problem framing will help to ensure that we explore and ultimately ship better solutions.

The Underlying Structure of Discovery

[Opportunity solution tree diagram]

Opportunity Solution Trees

Opportunity Solution Trees (OST) have a number of benefits. They help product trios:

Resolve The Tension Between Business Needs And Customer Needs

You start by prioritizing your business need: creating value for your business is what ensures that your team can serve your customer over time. Next, the team should explore the customer needs, pain points, and desires that, if addressed, would drive that outcome. The key here is that the team is filtering the opportunity space by considering only the opportunities that have the potential to drive the business need. By mapping the opportunity space, the team is adopting a customer-centric framing for how they might reach their outcome.

Build And Maintain A Shared Understanding Across Your Trio

For most of us, when we encounter a problem, we simply want to solve it. This desire comes from a place of good intent. We like to help people. However, this instinct often gets us into trouble. We don't always remember to question the framing of the problem. We tend to fall in love with our first solution. We forget to ask, "How else might we solve this problem?"

These problems get compounded when working in teams. When we hear a problem, we each individually jump to a fast solution. When we disagree, we engage in fruitless opinion battles. These opinion battles encourage us to fall back on our organizational roles and claim decision authority (e.g., the product manager has the final say), instead of collaborating as a cross-functional team.

When a team takes the time to visualize their options, they build a shared understanding of how they might reach their desired outcome. If they maintain this visual as they learn week over week, they maintain that shared understanding, allowing them to collaborate over time. We know this collaboration is critical to product success.

Adopt A Continuous Mindset

Shifting from a project mindset to a continuous mindset is hard. We tend to take our six-month-long waterfall project, carve it up into a series of two-week sprints, and call it "Agile." But this isn't Agile. Nor is it continuous. A continuous mindset requires that we deliver value every sprint. We create customer value by addressing unmet needs, resolving pain points, and satisfying desires.

The opportunity solution tree helps teams take large, project-sized opportunities and break them down into a series of smaller opportunities. As you work your way vertically down the tree, opportunities get smaller and smaller. Teams can then focus on solving one opportunity at a time. With time, as they address a series of smaller opportunities, these solutions start to address the bigger opportunity. The team learns to solve project-sized opportunities by solving smaller opportunities continuously.

Unlock Better Decision-Making

Instead of framing our decisions as "whether or not" decisions, this book will teach you to develop a "compare and contrast" mindset. Instead of asking, "Should we solve this customer need?" we'll ask, "Which of these customer needs is most important for us to address right now?" We'll compare and contrast our options. Instead of falling in love with our first idea, we'll ask, "What else could we build?" or "How else might we address this opportunity?" Visualizing your options on an opportunity solution tree will help you catch when you are asking a "whether or not" question and will encourage you, instead, to shift to a compare-and-contrast question.

Even with this decision-making framework in hand, you'll still need to guard against overconfidence (the fourth villain of decision-making). It's easy to think that, when you've done discovery well, you can't fail, but that's simply not true (as we'll see in a few stories throughout this book). Good discovery doesn't prevent us from failing; it simply reduces the chance of failure. Failures will still happen. However, we can't be afraid of failure. Product trios need to move forward and act on what they know today, while also being prepared to be wrong. The habits in this book will help you balance having confidence in what you know with doubting what you know, so that you can take action while still recognizing when you are on a risky path.

And finally, we can't talk about decision-making without tackling the dreaded problem of analysis paralysis. Many of the decisions we make in discovery feel like big strategic decisions. That's because they often are. Deciding what to build has a big impact on our company strategy, on our success as a product team, and on our customers' lives. However, most of the decisions that we make in discovery are reversible decisions. If we do the necessary work to test our decisions, we can quickly correct course when we find that we made the wrong decision. This gives us the luxury of moving quickly, rather than falling prey to analysis paralysis.

Unlock Faster Learning Cycles

Many organizations try to define clear boundaries between the roles in a product trio. As a result, some have come to believe that product managers own defining the problem and that designers and software engineers own defining the solution. This sounds nice in theory, but it quickly falls apart in practice.

The best designers evolve the problem space and the solution space together. As they explore potential solutions, they learn more about the problem, and, as they learn more about the problem, new solutions become possible. These two activities are intrinsically intertwined. The problem space and the solution space evolve together.

When we learn through testing that an idea won't work, it's not enough to move on to the next idea. We need to take time to reflect. We want to ask: "Based on my current understanding of my customer, I thought this solution would work. It didn't. What did I misunderstand about my customer?" We then need to revise our understanding of the opportunity space before moving on to new solutions. When we do this, our next set of solutions gets better. When we skip this step, we are simply guessing again, hoping that we'll strike gold.

Build Confidence In Knowing What To Do Next

While many teams work top-down, starting by defining a clear desired outcome, then mapping out the opportunity space, then considering solutions, and finally running assumption tests to evaluate those solutions, the best teams also work bottom-up. They use their assumption tests to help them evaluate their solutions and evolve the opportunity space. As they learn more about the opportunity space, their understanding of how they might reach their outcome (and how to best measure that outcome) will evolve. These teams work continuously, evolving the entire tree at once.

They interview week over week, continuing to explore the opportunity space, even after they've selected a target opportunity. They consider multiple solutions for their target opportunity, setting up good "compare and contrast" decisions. They run assumption tests across their solution set, in parallel, so that they don't overcommit to less-than-optimal solutions. All along, they visualize their work on their opportunity solution tree, so that they can best assess what to do next.

Unlock Simpler Stakeholder Management

When it comes to sharing work with stakeholders, product trios tend to make two common mistakes. First, they share too much information (entire interview recordings or pages and pages of notes without any synthesis), expecting stakeholders to do the discovery work with them. Or second, they share too little of what they are learning, only highlighting their conclusions, often cherry-picking the research that best supports those conclusions. In the first instance, we are asking our stakeholders to do too much, and, in the second, we aren't asking enough of them. The key to bringing stakeholders along is to show your work. You want to summarize what you are learning in a way that is easy to understand, that highlights your key decision points and the options that you considered, and that creates space for them to give constructive feedback. A well-constructed opportunity solution tree does exactly this.

When sharing your discovery work with stakeholders, you can use your tree to first remind them of your desired outcome. Next, you can share what you've learned about your customer, by walking them through the opportunity space. The tree structure makes it easy to communicate the big picture while also diving into the details when needed. Your tree should visually show what solutions you are considering and what tests you are running to evaluate those solutions. Instead of communicating your conclusions (e.g., "We should build these solutions"), you are showing the thinking and learning that got you there. This allows your stakeholders to truly evaluate your work and to weigh in with information you may not have.

Part II: Continuous Discovery Habits

Chapter 3: Focusing On Outcomes Over Outputs

"An outcome is a change in human behavior that drives business results." - Josh Seiden, Outcomes Over Outputs

"Too often we have many competing goals that all seem equally important." - Christina Wodtke, Radical Focus

When we manage by outcomes, we give our teams the autonomy, responsibility, and ownership to chart their own path. Instead of asking them to deliver a fixed roadmap full of features by a specific date in time, we are asking them to solve a customer problem or to address a business need. The key distinction with this strategy over traditional roadmaps is that we are giving the team the autonomy to find the best solution. If they are truly a continuous-discovery team, the product trio has a depth of customer and technology knowledge, giving them an advantage when it comes to making decisions about how to solve specific problems.

Additionally, this strategy leaves room for doubt. A fixed roadmap communicates false certainty. It says we know these are the right features to build, even though we know from experience their impact will likely fall short. An outcome communicates uncertainty. It says, We know we need this problem solved, but we don't know the best way to solve it. It gives the product trio the latitude they need to explore and pivot when needed. If the product trio misses the mark with their initial solution, they can quickly shift to a new idea, often trying several before they ultimately find what will drive the desired outcome.

Finally, managing by outcomes communicates to the team how they should be measuring success. A clear outcome helps a team align around the work they should be prioritizing, it helps them choose the right customer opportunities to address, and it helps them measure the impact of their experiments. Without a clear outcome, discovery work can be never-ending, fruitless, and frustrating.

Exploring Different Types of Outcomes

Managing by outcomes is only as effective as the outcomes themselves. If we choose the wrong outcomes, we'll still get the wrong results. When considering outcomes for specific teams, it helps to distinguish between business outcomes, product outcomes, and traction metrics. A business outcome measures how well the business is progressing. A product outcome measures how well the product is moving the business forward. A traction metric measures usage of a specific feature or workflow in the product.

  • Business outcome: measures business value (e.g., retention)
  • Product outcome: measures how the product drives business value (e.g., dogs who like the food)
  • Traction metric: tracks usage of specific features (e.g., owners who use the transition calendar)

Business outcomes start with financial metrics (e.g., grow revenue, reduce costs), but they can also represent strategic initiatives (e.g., grow market share in a specific region, increase sales to a new customer segment). Many business outcomes, however, are lagging indicators. They measure something after it has happened. It's hard for lagging indicators to guide a team's work because it puts them in react mode, rather than empowering them to proactively drive results. For Sonja's team, 90-day retention was a lagging indicator of customer satisfaction with the service. By the time the team was able to measure the impact of their product changes, customers had already churned. Therefore, we want to identify leading indicators that predict the direction of the lagging indicator. Sonja's team believed that increasing the perceived value of tailor-made dog food and increasing the number of dogs who liked the food were leading indicators of customer retention. Assigning a team a leading indicator is always better than assigning a lagging indicator.

As a general rule, product trios will make more progress on a product outcome rather than a business outcome. Remember, product outcomes measure how well the product moves the business forward. By definition, a product outcome is within the product trio's span of control. Business outcomes, on the other hand, often require coordination across many business functions.

Assigning product outcomes to product trios increases a sense of responsibility and ownership. If a product team is assigned a business outcome, it's easy for the trio to blame the marketing or customer-support team for not hitting their goal. However, if they are assigned a product outcome, they alone are responsible for driving results. When multiple teams are assigned the same outcome, it's easy to shift blame for lack of progress.

Finally, when setting product outcomes, we want to make sure that we are giving the product trio enough latitude to explore. This is where the distinction between product outcomes and traction metrics can be helpful. It's also a key delineation between an outcome mindset and an output mindset.

When we assign traction metrics to product trios, we run the risk of painting them into a corner by limiting the types of decisions that they can make. Product outcomes, generally, give product trios far more latitude to explore and will enable them to make the decisions they need to ultimately drive business outcomes. However, there are two instances in which it is appropriate to assign traction metrics to your team.

First, assign traction metrics to more junior product trios. Improving a traction metric is more of an optimization challenge than a wide-open discovery challenge and is a great way for a junior team to get some experience with discovery methods before giving them more responsibility. For your more mature teams, however, stick with product outcomes.

Second, if you have a mature product and you have a traction metric that you know is critical to your company's success, it makes sense to assign this traction metric to an optimization team. For example, Sonja's team may already know that customers want to use the transition calendar (perhaps they use it every day), but the recommended schedule isn't as effective as they hoped it would be. In this case, it might make sense to have a team focused on optimizing the schedule. If the broader discovery questions have already been answered, then it's perfectly fine to assign a traction metric to a team. The key is to use traction metrics only when you are optimizing a solution and not when the intent is to discover new solutions. In those instances, a product outcome is a better fit.

Outcomes Are the Result of a Two-Way Negotiation

Setting a team's outcome should be a two-way negotiation between the product leader (e.g., Chief Product Officer, Vice President of Product, etc.) and the product trio.

The product leader brings the across-the-business view of the organization and should communicate what's most important for the business at this moment in time. But to be clear, the product leader should not be dictating solutions. Instead, the leader should be identifying an appropriate product outcome for the trio to focus on. Outcomes are a good way for the leader to communicate strategic intent.

The product trio brings customer and technology knowledge to the conversation and should communicate how much the team can move the metric in the designated period of time (usually one calendar quarter). The trio should not be required to communicate what solutions they will build at this time, as this should emerge from discovery.

If the business needs the team to have a bigger impact on the outcome, the trio will need to adjust their strategy to be more ambitious, and the product leader will need to understand that more ambitious outcomes carry more risk. The team will need to make bigger bets to increase their chance of success, but these bigger bets typically come with a higher chance of failure. Similarly, the product leader and product trio can negotiate resources (e.g., adding engineers to the team) and/or remove competing tasks from the team's backlog, giving them more time to focus on delivering their outcome.

Encouraging a two-way negotiation between the product leader and the product trio ensures that the right organizational knowledge is captured during the selection of the outcome. It, however, has another benefit. Bianca Green, business faculty at University of Twente (in the Netherlands), and her colleagues found that teams who participated in the setting of their own outcomes took more initiative and thus performed better than colleagues who were not involved in setting their outcomes. This is an area where the research supports industry best practice.

A Guide for Product Trios

Product trios tend to fall into four categories when it comes to setting outcomes:

1. they are asked to deliver outputs and don't work toward outcomes

This is, by far, the most common scenario. When your product leader assigns a new initiative to your product trio, ask your leader to share more of the business context with you. Explore these questions:

  • Who is the target customer for this initiative?
  • What business outcome are we trying to drive with this initiative?
  • Why do we think this initiative will drive that outcome? (Be careful with Why? questions. They can put some leaders on the defensive. Use your best judgment, based on your knowledge of your specific leader.)

Try to connect the dots between the business outcome and potential product outcomes. Can you clearly define how this new initiative will impact a product outcome? Is that product outcome a leading indicator of the lagging business outcome?

2. their product leader sets their outcome with little input from the team

If your product leader is asking you to deliver an outcome with no input from your team, try these tips to shift to a two-way negotiation:

  • If you are being asked to deliver a business outcome, try mapping out which product outcomes might drive that business outcome, and get feedback from your leader.
  • If you are being asked to deliver a product outcome, ask your leader for more of the business context. Try asking, "What business outcomes are we trying to drive with this product outcome?"
  • In either case, clearly communicate how far you think you can get in the allotted time.

3. the product trio sets their own outcomes with little input from their product leader

If your team is setting their own outcome with no input from the product leader, try these tips to shift to a two-way negotiation:

  • Before you set your own outcome, ask your product leader for more business context. Try these questions:
    • What's most important to the business right now? Try to frame this conversation in terms of business outcomes.
    • Is there a customer segment that is more important than other customer segments?
    • Are there strategic initiatives we should know about?

Use the information you gain to map out the most important business outcomes and what product outcomes might drive those business outcomes. Get feedback from your leader.

Choose a product outcome that your team has the most influence over.

4. the product trio is negotiating their outcomes with their leaders as described in this chapter

If your product trio is already negotiating outcomes with your product leader, congratulations! However, remember to keep these tips in mind as you set outcomes with your leader:

  • Is your team being tasked with a product outcome and not a business outcome or a traction metric?
  • If you are being tasked with a traction metric, is the metric well known? Have you already confirmed that your customers want to exhibit the behavior being tracked?
  • If it's the first time you are working on a new metric, are you starting with a learning goal (e.g., discover the relevant opportunities) before committing to a challenging performance goal?
  • If you have experience with the metric, have you set a specific and challenging goal?

Avoid These Common Anti-Patterns

Pursuing too many outcomes at once

Most of us are overly optimistic about what we can achieve in a short period of time. No matter how hard we work, our companies will always ask more of us. Put these two together, and we often see product trios pursuing multiple outcomes at once. What happens when we do this is that we spread ourselves too thin. We make incremental progress (at best) on some of our outcomes but rarely have a big impact on any of our outcomes. Most teams will have more of an impact by focusing on one outcome at a time.

Ping-ponging from one outcome to another

Because many businesses have developed fire-fighting cultures-where every customer complaint is treated like a crisis-it's common for product trios to ping-pong from one outcome to the next, quarter to quarter. However, you've already learned that it takes time to learn how to impact a new outcome. When we ping-pong from outcome to outcome, we never reap the benefits of this learning curve. Instead, set an outcome for your team, and focus on it for a few quarters. You'll be amazed at how much impact you have in the second and third quarters after you've had some time to learn and explore.

Setting individual outcomes instead of product-trio outcomes

Because product managers, designers, and software engineers typically report up to their respective departments, it's not uncommon for a product trio to get pulled in three different directions, with each member tasked with a different goal. Perhaps the product manager is tasked with a business outcome, the designer is tasked with a usability outcome, and the engineer is tasked with a technical-performance outcome. This is most common at companies that tie outcomes to compensation. However, it has a detrimental effect. The goal is for the product trio to collaborate to achieve product outcomes that drive business outcomes. This isn't possible if each member is focused on their own goal. Instead of setting individual outcomes, set team outcomes.

Choosing an output as an outcome

Shifting to an outcome mindset is harder than it looks. We spend most of our time talking about outputs. So, it's not surprising that we tend to confuse the two. Even when teams intend to choose an outcome, they often fall into the trap of selecting an output. I see teams set their outcome as "Launch an Android app" instead of "Increase mobile engagement" or "Get to feature parity on the new tech stack" instead of "Transition customers to the new tech stack." A good place to start is to make sure your outcome represents a number, even if you aren't sure yet how to measure it. But even then, outputs can creep in. I worked with a team that helped students choose university courses; they set their outcome as "Increase the number of course reviews on our platform." When I asked them what the impact of more reviews was, they answered, "More students would see courses with reviews." That's not necessarily true. The team could have increased the number of reviews on their platform, but if they all clustered around a small number of courses, or if they were all on courses that students didn't view, they wouldn't have an impact. A better outcome is "Increase the number of course views that include reviews." To shift your outcome from less of an output to more of an outcome, question the impact it will have.

Focusing on one outcome to the detriment of all else

Like we saw in the Wells Fargo story, focusing on one metric at the cost of all else can quickly derail a team and company. In addition to your primary outcome, a team needs to monitor health metrics to ensure they aren't causing detrimental effects elsewhere. For example, customer-acquisition goals are often paired with customer-satisfaction metrics to ensure that we aren't acquiring unhappy customers. To be clear, this doesn't mean one team is focused on both acquisition and satisfaction at the same time. It means their goal is to increase acquisition without negatively impacting satisfaction.

Chapter 4: Visualizing What You Know

"Whether actual or virtual, an external representation creates common ground..." - Barbara Tversky, Mind in Motion

"If we give each other time to explain ourselves using words and pictures, we build shared understanding." - Jeff Patton, User Story Mapping

When working with an outcome for the first time, it can feel overwhelming to know where to start. It helps to first map out your customers' experience as it exists today. This trio started by mapping out what they thought was preventing their customers from submitting their applications. But they didn't do so by getting together in a room to discuss what they knew. Instead, they started out with each product-trio member mapping out their own perspective. This was uncomfortable at first. The designer had little context for what might be going wrong. The engineer had a lot of technical knowledge but had little firsthand contact with customers. The product manager had some hunches as to what was going wrong but didn't have any analytics to confirm those hunches. They each did the best they could.

Once they had each created their individual map, they took the time to explore each other's perspectives. The product manager had the best grasp of the "known" challenges-the customer complaints that made their way to their call center and through support tickets. The designer missed a few steps in the process but did a great job of capturing the confusion and insecurity that the customer might be feeling in the process. Because he was new to the company, he was able to view the application process from an outsider's perspective. The engineer's map accurately captured the process and added detail about how one step informed another step. This uncovered insights into how a customer might get derailed if an earlier step had been completed incorrectly.

Each map represented a unique perspective-together they represented a much richer understanding of the opportunity space they intended to explore. The trio quickly worked to merge their unique perspectives into a shared experience map that better reflected what they collectively knew. Their map wasn't set in stone. They knew that it contained hunches and possibilities, not truth. But it gave them a clear starting point. They had made explicit what they thought they knew, where they had open questions, and what they needed to vet in their upcoming customer interviews.

Set the Scope of Your Experience Map

To get started, you'll want to first set the scope of your experience map. If you start jotting down everything you know about your customer, you'll quickly get overwhelmed. Instead, start with your desired outcome. The trio in the opening story was trying to increase application submissions, so they mapped out what they thought their customers' experience was as they filled out the application. They specifically focused on this question: "What's preventing our customers from completing their application today?" Their outcome constrained what they tried to capture.

Think strategically about how broad or narrow to set the scope. When a team is focused on an optimization outcome, like increasing application submissions, it's fine to define the scope narrowly. However, when working on a more open-ended outcome, you'll want to expand the scope of your experience map.

Start Individually to Avoid Groupthink

It's easy when working in a team to experience groupthink. Groupthink occurs when a group of individuals underperform due to the dynamics of the group. There are a number of reasons for this. When working in a group, it's common for some members to put in more effort than others; some group members may hesitate or even refrain from speaking up, and groups tend to perform at the level of the least-capable member. In order to leverage the knowledge and expertise in our trios, we need to actively work to counter groupthink.

To prevent groupthink, it's critical that each member of the trio start by developing their own perspective before the trio works together to develop a shared perspective. This is counterintuitive. It's going to feel inefficient. We are used to dividing and conquering, not duplicating work. But in instances where it's important that we explore multiple perspectives, the easiest way to get there is for each product-trio member to do the work individually.

Experience Maps Are Visual, Not Verbal

Many of us stopped drawing sometime in elementary school. As a result, we have the drawing skills of a child. This makes drawing uncomfortable. Regardless of how well you draw, drawing is a critical thinking aid that you will want to tap into. Drawing allows us to externalize our thinking, which, in turn, helps us examine that thinking. When we draw an experience map, rather than verbalize it, it's easier to see gaps in our thinking, to catch what's missing, and to correct what's not quite right.

As you get started, you are going to be tempted to describe this context with words. Don't. Language is vague. It's easy for two people to think they are in agreement over the course of a conversation, but, still, each might walk away with a different perspective. Drawing is more specific. It forces you to be concrete. You can't draw something specific if you haven't taken the time to get clear on what those specifics are. Your goal during this exercise is to do the work to understand what you know, not to generalize vague thoughts about your customer. So set aside some time, grab a pen and paper, and start drawing. Push through the discomfort of being a beginner, and you'll be reaping the benefits in no time.

Explore the Diverse Perspectives on Your Team

Take turns sharing your drawings among your trio. As you explore your teammates' perspectives, ask questions to make sure you fully understand their point of view. Give them time and space to clarify what they think and why they think it. Don't worry about what they got right or wrong (from your perspective). Instead, pay particular attention to the differences. Be curious.

When it's your turn to share, don't advocate for your drawing. Simply share your point of view, answer questions, and clarify your thinking.

Remember, everyone's perspective can and should contribute to the team's shared understanding. We saw in our opening story that the trio's shared map was stronger because they synthesized the unique perspectives on the team into a richer experience map than any of them could have individually created.

Co-Create a Shared Experience Map

  1. Start by turning each of your individual maps into a collection of nodes and links. A node is a distinct moment in time, an action, or an event, while links are what connect nodes together. Links help show relationships between the nodes. Links can show the movement through the nodes.
  2. Create a new map that includes all of your individual nodes. Arrange the nodes from all of your individual maps into a new, comprehensive map.
  3. Collapse similar nodes together. Many of your individual maps will include overlapping nodes. Feel free to collapse similar nodes together. However, be careful. Make sure you are collapsing like items and not generalizing so much that you lose key detail.
  4. Determine the links between each node. Use arrows to show the flow through the nodes. Don't just map out the happy path. Remember to capture where steps need to be redone, where people might give up out of frustration, or where steps might loop back on themselves.
  5. Add context. Once you have a map that represents the nodes and links of your customer's journey, add context to each step. What are they thinking, feeling, and doing at each step of the journey? Try to capture this context visually. It will help the team (and your stakeholders) synthesize what you know, and it will be easier to build on this shared understanding.
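The node-and-link structure in the steps above can be sketched in a few lines of code. This is my own illustration, not something from the book; the application-flow steps and context notes are hypothetical examples.

```python
# Nodes: distinct moments or actions, each annotated with context (step 5)
nodes = {
    "start":  "unsure which documents are needed",
    "upload": "frustrated by unclear file-format rules",
    "review": "anxious about having made a mistake",
    "submit": "relieved",
}

# Links: arrows between nodes, not just the happy path (step 4)
links = [
    ("start", "upload"),
    ("upload", "review"),
    ("review", "upload"),   # loops back when an earlier step was done wrong
    ("review", "submit"),
]

# Flag links that loop backwards -- the redo/give-up moments worth capturing
order = ["start", "upload", "review", "submit"]
loop_backs = [(a, b) for (a, b) in links if order.index(b) <= order.index(a)]
print(loop_backs)  # [('review', 'upload')]
```

Even this toy version makes the point of step 4 concrete: the interesting parts of the map are often the backward arrows, not the forward ones.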

Avoid Common Anti-Patterns

  1. Getting bogged down in endless debate. If you find yourself debating minute details, try to draw out your differences instead of debating them. We often debate details when we already agree. We just don't realize we already agree. When you are forced to draw an idea, you have to get specific enough to define what it is. This often helps to quickly clear up the disagreement or to pinpoint exactly where the disagreement occurs. Drawing really is a magic tool in your toolbox. Use it often.
  2. Using words instead of visuals. Because many of us are uncomfortable with our drawing skills, we tend to revert back to words and sentences. Instead, use boxes and arrows. Remember, you don't have to create a piece of art. Stick figures and smiley faces are perfectly okay. But drawing engages a different part of your brain than language does. It helps us see patterns that are hard to detect in words and sentences. The more you draw, the more you'll realize drawing is a superpower.
  3. Moving forward as if your map is true. One of the drawbacks of documenting a customer-experience map is that it can start to feel like truth. Remember, this is your first draft, intended to capture what you think you know about your customer. We'll test this understanding in our customer interviews and again when we start to explore solutions.
  4. Forgetting to refine and evolve your map as you learn more. It can be easy to think of this activity as a one-time event. However, as you discover more about your customer, you'll want to make sure that you continue to hone and refine this map as a team. Otherwise, you'll find that your individual perspectives will quickly start to diverge even when you are working with the same set of source data. Each person will take away different points from the same customer interview or the same assumption test. You'll want to continuously synthesize what you collectively know so that you maintain a shared understanding of your customer context.

Chapter 5: Continuous Interviewing

Some people say, "Give the customers what they want." But that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, "If I'd asked customers what they wanted, they would have told me, 'A faster horse!'" People don't know what they want until you show it to them. That's why I never rely on market research. Our task is to read things that are not yet on the page. - Steve Jobs, CEO of Apple, in Walter Isaacson's Steve Jobs

"Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true." - Daniel Kahneman, Thinking, Fast and Slow

Steve Jobs, the co-founder and former CEO of Apple, often discounted market research. He argued, "People don't know what they want until you show it to them." Jobs was right. Customers don't always know what they want. Most aren't well-versed in technology. Nor do they have time to dream up what's possible. That's our job. That's what Jobs meant when he said, "Our task is to read things that are not yet on the page." We are the inventors, not our customers.

The purpose of customer interviewing is not to ask your customers what you should build. Instead, the purpose of an interview is to discover and explore opportunities. Remember, opportunities are customer needs, pain points, and desires. They are opportunities to intervene in your customers' lives in a positive way.

Steve Jobs knew the importance of discovering opportunities better than most. He and the rest of the Apple team were masters at uncovering unmet needs. When the first iPhone was released in 2007, it wasn't the first smartphone on the market. People resisted the idea of an on-screen keyboard. There were no third-party apps. But even though Apple wasn't first to the market and they launched with a limited feature set, the first iPhone solved several customer needs that other smartphones didn't.

The Challenges With Asking People What They Need

During a workshop, I asked a woman what factors she considered when buying a new pair of jeans. She didn't hesitate to answer. She said, "Fit is my number-one factor." I then asked her to tell me about the last time she bought a pair of jeans. She said, "I bought them on Amazon." I asked, "How did you know they would fit?" She replied, "I didn't, but they were a brand I liked, and they were on sale."

What's the difference between her two responses? Her first response tells me how she thinks she buys a pair of jeans. Her second response tells me how she actually bought a pair of jeans. This is a crucial difference. She thinks she buys a pair of jeans based on fit, but brand loyalty, the convenience of online shopping, and price (or getting a good deal) were more important when it came time to make a purchase.

This story isn't unique. I've asked people these same two questions countless times in workshops. The purchasing factors often vary, but there is always a gap between the first answer and the second. These participants aren't lying. We just aren't very good at understanding our own behavior.

This is exactly why in Thinking, Fast and Slow, behavioral economist Daniel Kahneman claimed, "A remarkable aspect of your mental life is that you are rarely stumped." Your brain will gladly give you an answer. That answer, however, may not be grounded in reality. In fact, Kahneman outlines dozens of ways our brains get it wrong. It's also why Kahneman argues confidence isn't a good indicator of truth or reality. He writes, "Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it." Not necessarily the truth.

As long as your brain can summon a compelling reason, it will feel like the truth-even if it isn't. The participants in neuroscientist Michael Gazzaniga's split-brain experiments thought they knew why they selected the card that they did. The left-brain interpreter filled in the missing details, creating a coherent story. The participant was confident-and, unfortunately, wrong.

Our failure wasn't due to a lack of research. It was because we asked our customers the wrong questions. We built a product based on a coherent story told by both the thought leaders in our space and by our customers themselves. But it wasn't a story that was based in reality. If you want to build a successful product, you need to understand your customers' actual behavior-their reality-not the story they tell themselves.

Too often in customer interviews, we ask direct questions. We ask, "What criteria do you use when purchasing a pair of jeans?" Or we ask, "How often do you go to the gym?" But these types of questions invoke our ideal selves, and they encourage our brains to generate coherent but not necessarily reliable responses. In the coming pages, you'll learn a far more reliable method for learning about your customers' actual behavior.

Distinguish Research Questions From Interview Questions

In any given interview, you'll want to balance broadly exploring the needs, pain points, and desires that matter most to that particular customer and diving deep on the specific opportunities that are most relevant to you. Every customer is unique, and, no matter how well you recruit, you may find that your customer doesn't care about the opportunity you most need to learn about. We don't want to spend time exploring a specific opportunity with a customer if that opportunity isn't important to them. Our primary research question in any interview should be: What needs, pain points, and desires matter most to this customer?

Since we can't ask our customers direct questions about their behavior, the best way to learn about their needs, pain points, and desires is to ask them to share specific stories about their experience. You'll need to translate your research questions into interview questions that elicit these stories.

Instead of asking, "What criteria do you use when purchasing a pair of jeans?" - a direct question that encourages our participant to speculate about their behavior - we want to ask, "Tell me about the last time you purchased a pair of jeans." The story will help us uncover what criteria our participant used when purchasing a pair of jeans, but because the answer is situated in a specific instance (an actual time when they bought jeans), it will reflect their actual behavior, not their perceived behavior.

You'll want to tailor the scope of the question based on what you need to learn at that moment in time. A narrow scope will help you optimize your existing product. Broader questions will help you uncover new opportunities. The broadest questions might help you uncover new markets. The appropriate scope will depend on the scope you set when creating your experience map.

Excavate the Story

You'll notice, as you excavate the story, that your participant will bounce back and forth between the story they are telling and generalizing about their behavior. You might ask, "What challenges did you face?" and they may respond with, "I usually..." or, "In general, I have this challenge..." You'll want to gently guide them back to telling you about this specific instance. You might say, "In this specific example, did you face that challenge?"

Keep the interview grounded in specific stories to ensure that you collect data about your participants' actual behavior, not their perceived behavior. And remember, like most of the habits in this book, it takes practice. Don't get discouraged. Keep at it. You will get better with time.

You Won't Always Get What You Want

With story-based interviewing, you won't always collect the story that you want. That's okay. The golden rule of interviewing is to let the participant talk about what they care about most. You can steer the conversation in two ways.

First, you decide which type of story to collect. You can ask a more open question like: "Tell me about the last time you watched streaming entertainment." Or you can ask for a more specific story: "Tell me about the last time you watched streaming entertainment on a mobile device."

Second, you can use your story prompts to dig deeper into different parts of the story. If you are primarily concerned with how they chose what to watch, dig into that part of the story. If you aren't particularly interested in what device they watch on, don't ask for that detail if they leave it out of their story. Let your research questions guide your story prompts.

However, even so, you might encounter some participants who simply don't cooperate. They might not have a relevant story. They might be motivated to tell you about a different part of the story. They might not want to tell you a story at all. They might give one-sentence answers. Or they might want to share their feature ideas or gripe about how your product works.

In these instances, you'll want to do the best you can to capture the value the participant is willing to share, but don't force it. You always want to respect what the participant cares about most. Remember, with continuous interviewing, you'll be interviewing another customer soon enough. When we rarely interview, a disappointing interview can feel painful. When we interview continuously, a disappointing interview is easily forgotten.

Synthesize as You Go

Interview Snapshot

An interview snapshot is a one-pager designed to help you synthesize what you learned in a single interview. It's how you are going to turn your copious notes into actionable insights. Your collection of snapshots will act as a reference or index to the customer knowledge bank you are building through continuous interviewing.

After you've conducted even a handful of interviews, let alone the dozens you will conduct each year, interviews will start to blur together. You don't want to rely on your memory to keep your research straight. That's the job of an interview snapshot. Snapshots are designed to help you remember specific stories. They help you identify opportunities and insights from each and every interview.

The cliché "A picture is worth a thousand words" is true. The more visual your snapshot, the easier it will be for the team to remember the stories you collected-even weeks or months later. With permission, include a photo of the participant. Grab one from a social-media profile, grab a screenshot from a video call, or snap a photo during an in-person interview. If your corporate guidelines require that you anonymize your interview data, or if you are interviewing participants about sensitive topics, skip the photo, and replace it with a visual that will help you remember their specific story. This could be a workplace logo, the car they drive, or even a cat meme that represents their story. The photo should help you put that interview snapshot into context. It should help you remember the stories that you heard.

At the top of the snapshot, include a quote that represents a memorable moment from their story. This might be an emotional quote or a distinct behavior that stood out. Like the photo, the quote acts as a key for unlocking your memory of the specific stories that they told. I can still remember memorable quotes from interviews that I did years ago. A couple of my favorites are "I've worked here for three years. But they feel like dog years." And "I'm old school. Agile doesn't work for me." When a participant uses vivid language, be sure to capture their exact words.

To help put a specific interview into context, you'll want to capture some quick facts about the customer. The quick facts will change from company to company, but they should help you identify what type of customer you were talking to. For example, a service that matches job candidates with companies might segment their employer customers by size (e.g., SMB, enterprise) or they might list average annual contract size. A streaming-entertainment service might list the customer's sign-up date and average hours watched each week. If they segment further, they might even include behavioral traits like binge-watcher or active referrer. The goal of the quick-facts section is to help you understand how the stories you heard in this interview may be similar to or different from those you heard from other customers.

The photo and the memorable quote will act as keys that help you to unlock your memory of the stories you heard. The quick facts help you situate those stories in the right context. Now you want to capture the heart of what you learned. You'll do this by identifying the insights and opportunities that you heard in the interview.

An opportunity represents a need, a pain point, or a desire that was expressed during the interview. Be sure to represent opportunities as needs and not solutions. If the participant requests a specific feature or solution, ask about why they need that, and capture the opportunity (rather than the solution). A good way to do this is to ask, "If you had that feature, what would that do for you?" For example, if an interviewee says, "I wish I could just say the name of the movie I'm searching for," that's a feature request. If you ask, "What would that do for you?" they might respond, "I don't want to have to type out a long movie title." That's the underlying need. The benefit of capturing the need and not just the solution is that the need opens up more of the solution space. We could add voice search to address this need, but we also could auto-complete movie titles as they type.

Opportunities don't need to be exact quotes, but you should frame them using your customer's words. This will help ensure that you are capturing the opportunity from your customer's perspective and not from your company's perspective.

Throughout the interview, you might hear interesting insights that don't represent needs, pain points, or desires. Perhaps the participant shares some unique behavior that you want to capture, but you aren't sure yet what to do with this information. Capture these insights on your interview snapshot. Over time, insights often turn into opportunities.

The goal with the snapshot is to capture as much of what you heard in each interview as possible. It's easy to discount a behavior as unique to a particular participant, but you should still capture what you heard on the interview snapshot. Be as thorough as possible. You'll be surprised how often an opportunity that seems unique to one customer becomes a common pattern heard in several interviews.
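To keep snapshots consistent from one interview to the next, it can help to treat the one-pager as a simple record. Here is a minimal sketch whose field names mirror the sections described above; the book doesn't prescribe any data format, and the sample values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewSnapshot:
    """One-pager capturing a single interview."""
    participant: str        # name, or an anonymized label
    visual: str             # photo, or a memorable stand-in image
    memorable_quote: str    # key for unlocking memory of the stories
    quick_facts: dict       # segment context, e.g. sign-up date, usage
    opportunities: list = field(default_factory=list)  # needs/pains/desires, in the customer's words
    insights: list = field(default_factory=list)       # notable behavior, not yet an opportunity

snap = InterviewSnapshot(
    participant="P-017",
    visual="cat-meme.png",
    memorable_quote="I don't want to have to type out a long movie title.",
    quick_facts={"signup": "2021-03", "avg_hours_watched": 6},
)
# Capture the underlying need, not the voice-search feature request
snap.opportunities.append("I don't want to type long movie titles")
```

Keeping opportunities and insights as separate lists mirrors the distinction above: an insight may turn into an opportunity later, once you hear it from more customers.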

Interview Every Week

Weekly interviewing is foundational to a strong discovery practice. Interviewing helps us explore an ever-evolving opportunity space. Customer needs change. New products disrupt markets. Competitors change the landscape. As our products and services evolve, new needs, pain points, and desires arise. A digital product is never done, and the opportunity space is never finite or complete.

From a habit standpoint, it's much easier to maintain a habit than to start and stop a habit. If you interview every week, you'll be more likely to keep interviewing every week. Every week that you don't interview increases the chances that you'll stop interviewing altogether. To nurture your interviewing habit, interview at least one customer every week.

Automate the Recruiting Process

The hardest part about continuous interviews is finding people to talk to. In order to make continuous interviewing sustainable, we need to automate the recruiting process. Your goal is to wake up Monday morning with a weekly interview scheduled without you having to do anything.

When a customer interview is automatically added to your calendar each week, it becomes easier to interview than not to interview. This is your goal.

Recruit Participants While They Are Using Your Product or Service

The most common and easiest way to find interview participants is to recruit them while they are using your product or service. You can integrate a single question into the flow of your product: "Do you have 20 minutes to talk with us about your experience in exchange for $20?" Be sure to customize the copy to reflect the ask-and-offer that works best for your audience. If the visitor answers "Yes," ask for their phone number.

Interview Your Customer Advisory Board

While most product teams worry their customers are too busy to talk with them, for most teams, this won't be true. We dramatically underestimate how much our customers want to help. If you are solving a real need and your product plays an important role in your customers' lives, they will be eager to help make it better. However, there are some audiences that are extremely hard to reach. In these instances, setting up a customer-advisory board will help.

One advantage of interviewing the same customers month over month is that you get to learn about their context in-depth and see how it changes over time. The risk is that you'll design your product for a small subset of customers that might not reflect the broader market. You can pair this recruiting method with one or two of the other methods to avoid this fate.

Interview Together, Act Together

Product trios should interview together. Some teams prefer to let one role, usually the product manager or the designer, be the "voice of the customer." However, our goal as a product trio is to collaborate in a way that leverages everyone's expertise. If one person is the "voice of the customer," that role will trump every other role.

Imagine that a product manager and a designer disagree on how to proceed. The designer has done all the interviewing. It's easy for the designer to argue, "This is what the customer wants." Whether or not that is true, the product manager has no response to that. Designating one person as the "voice of the customer" gives that person too much power in a team decision-making model. The goal is for all team members to be the voice of the customer.

Additionally, the more diverse your interviewing team, the more value you will get from each interview. What we hear in an interview will be influenced by our prior knowledge and experience. A product manager will hear things that an engineer might not pick up on, and vice versa.

Avoid These Common Anti-Patterns

  • Relying on one person to recruit and interview participants. To make sure continuous interviewing is a robust habit, make sure everyone on your team is well-versed in recruiting and interviewing.
  • Asking who, what, why, how, and when questions. Instead, generate a list of research questions (what you need to learn), and identify one or two story-based interview questions (what you'll ask). Remember, a story-based interview question starts with, "Tell me about a specific time when..."
  • Interviewing only when you think you need it. Remember, it's much easier to continue a weekly habit than to start and stop a periodic behavior. Continuous interviewing ensures that you stay close to your customers. Just as importantly, continuous interviewing will help to ensure that you can get fast answers to your daily questions.
  • Sharing what you learned by sending out pages of notes and/or sharing a recording. Instead, use your interview snapshots to share what you are learning with the rest of the organization.
  • Stopping to synthesize a set of interviews. Instead, we interview every week. Rather than synthesizing a batch of interviews, synthesize as you go, using interview snapshots.

Chapter 6: Mapping The Opportunity Space

The Power of Opportunity Mapping

As you collect customers' stories, you are going to hear about countless needs, pain points, and desires. Our customers' stories are rife with gaps between what they expect and how the world works. Each gap represents an opportunity to serve your customer. However, it's easy to get overwhelmed and not know where to start. Even if you worked tirelessly in addressing opportunity after opportunity for the rest of your career, you would never fully satisfy your customers' desires. This is why digital products are never complete. How do we decide which opportunities are more important than others? How do we know which should be addressed now and which can be pushed to tomorrow? It's hard to answer either of these questions if we don't first take an inventory of the opportunity space.

Our goal should be to address the customer opportunities that will have the biggest impact on our outcome first. To do this, we need to start by taking an inventory of the possibilities. We should compare and contrast the impact of addressing one opportunity against the impact of addressing another opportunity. We want to be deliberate and systematic in our search for the highest-impact opportunity.

As the opportunity space grows and evolves, we'll have to give structure to it again and again. As we continue to learn from our customers, we'll reframe known opportunities to better match what we are hearing. As we better understand how our customers think about their world, we'll move opportunities from one branch of the tree to another. We'll rephrase opportunities that aren't specific enough. We'll group similar opportunities together. These tasks will require rigorous critical thinking, but the effort will help to ensure that we are always addressing the most impactful opportunity.

Just Enough Structure

One of the biggest challenges with opportunity mapping is that it looks deceptively simple. However, it does require quite a bit of critical thinking. You'll want to examine each opportunity to ensure it is properly framed, that you know what it means, and that it has the potential to drive your desired outcome. If you do your first tree in 30 minutes and think you are done, you are probably not thinking hard enough. However, I also see teams make the opposite mistake. They churn for hours trying to create the perfect tree. We don't want to do that, either.

The key is to find the sweet spot between giving you enough structure to see the big picture, but not so much that you are overwhelmed with detail. Unfortunately, it will take some experience to get this right, as it's a "You'll know it when you see it" type of situation.

"Structure is done, undone, and redone." You should be revisiting your opportunity solution tree often. You'll continue to reframe opportunities as you learn more about what they really mean. Seemingly simple opportunities will subdivide into myriad sub-opportunities as you start exploring them in your interviews. This is normal. You don't have to do all of this work in your first draft. Do just enough to capture what you currently know, and trust that it will continue to grow and evolve over time.

Avoid Common Anti-Patterns

  • Opportunities framed from your company's perspective. Product teams think about their product and business all day, every day. It's easy to get stuck thinking from your company's perspective rather than your customers' perspective. However, if we want to be truly human-centered, solving customer needs while creating value for the business, we need to frame opportunities from our customers' perspective. No customer would ever say, "I wish I had more streaming-entertainment subscriptions." But they might say, "I want access to more compelling content." Review each opportunity on your tree and ask, "Have we heard this in interviews?" If you had to add opportunities to support the structure of your tree, ask, "Can I imagine a customer saying this, or am I just wishing a customer would say this?"
  • Vertical opportunities. Vertical parent-child chains tend to arise in two situations. One: you heard similar opportunities in several interviews, and each is really saying the same thing in different words. In this case, simply reframe one opportunity to encompass the broader need, and remove the rest. Two: you're missing sibling opportunities. If each sub-opportunity only partially solves the parent, identify which sibling opportunities are missing, and fill them in. If you aren't sure what the missing opportunities are, explore the parent opportunity in your upcoming interviews.
  • Opportunities have multiple parent opportunities. If your top-level opportunities represent distinct moments in time, then no opportunity should have two parents. If you are finding that an opportunity should ladder up to more than one parent, it's framed too broadly. Get more specific. Define one opportunity for each moment in time in which that need, pain point, or desire occurs.
  • Opportunities are not specific. Opportunities that represent themes, design guidelines, or even sentiment aren't specific enough. "I wish this was easy to use," "This is too hard," and "I want to do everything on the go" are not good opportunities. However, if we make them more specific, they can become good opportunities: "I wish finding a show to watch was easier," "Entering a movie title using the remote is hard," and "I want to watch shows on my train commute" are great opportunities.
  • Opportunities are solutions in disguise. Often in an interview, your customer will ask for solutions. Sometimes they will even sound like opportunities. For example, you might hear a customer say, "I wish I could fast-forward through commercials." You might be tempted to capture this as an opportunity. However, this is really a solution request. The easiest way to distinguish between an opportunity and a solution is to ask, "Is there more than one way to address this opportunity?" In this example, the only way to allow people to fast-forward through commercials is to offer a fast-forward solution. This isn't an opportunity at all. Instead, we want to uncover the implied opportunity. Maybe it's as simple as, "I don't like commercials." Why does this reframing help? If we then ask, "How might we address 'I don't like commercials'?" we can generate several options. We can create more entertaining commercials, like those we see during the Super Bowl. We can allow you to fast-forward through commercials, like the customer suggested. Or we can offer a commercial-free subscription. An opportunity should have more than one potential solution. Otherwise, it's simply a solution in disguise.

  • Capturing feelings as opportunities. When a customer expresses emotion in an interview, it's usually a strong signal that an opportunity is lurking nearby. However, don't capture the feeling itself as the opportunity. Instead, look for the cause of the feeling. When we capture opportunities like "I'm frustrated" or "I'm overwhelmed," we limit how we can help. We can't fix feelings. But if we capture the cause of those feelings ("I hate typing in my password every time I purchase a show" or "I'm way behind on this show"), we can often identify solutions that address the underlying cause. So, do note when a customer expresses a feeling, but consider it a signpost, and remember to let it direct you to the underlying opportunity.

Chapter 7: Prioritizing Opportunities, Not Solutions

For too long, product teams have defined their work as shipping the next release. When we engage with stakeholders, we talk about our roadmaps and our backlogs. During our performance reviews, we highlight all the great features we implemented. The vast majority of our conversations take place in the solution space. We assume that success comes from launching features. This is what product thought leader Melissa Perri calls "the build trap."

This obsession with producing outputs is strangling us. It's why we spend countless hours prioritizing features, grooming backlogs, and micro-managing releases. The hard reality is that product strategy doesn't happen in the solution space. Our customers don't care about the majority of our feature releases. A solution-first mindset is good at producing output, but it rarely produces outcomes.

Instead, our customers care about solving their needs, pain points, and desires. Product strategy happens in the opportunity space. Strategy emerges from the decisions we make about which outcomes to pursue, customers to serve, and opportunities to address. Sadly, the vast majority of product teams rush past these decisions and jump straight to prioritizing features. We obsess about the competition instead of about our customers. Our strategy consists of playing catch-up, and, no matter how hard we work, we always seem to fall further and further behind.

Focus on One Target Opportunity at a Time

In the Opportunity Mapping chapter (Chapter 6), you learned that, as you work vertically down the opportunity solution tree, you are deconstructing large, intractable opportunities into a series of smaller, more solvable sub-opportunities. The benefit of this work is that it helps us adopt an Agile mindset, working iteratively, delivering value over time, rather than delivering a large project after an extended period of time.

By addressing only one opportunity at a time, we unlock the ability to deliver value iteratively over time. If we spread ourselves too thin across many opportunities, we'll find ourselves right back in the waterfall mindset of taking too long to deliver too much all at once. Instead, we want to solve one opportunity before moving on to the next.

Assessing a Set of Opportunities

1. Opportunity sizing helps us answer the questions: How many customers are affected, and how often? However, we don't need to size each opportunity precisely. This can quickly turn into a never-ending data-gathering mission. Instead, we want to size a set of siblings against each other. For each set that we are considering, we want to ask, "Which of these opportunities affects the most customers?" and "Which affects them most often?" We can and should make rough estimates here.

2. Market factors help us evaluate how addressing each opportunity might affect our position in the market. Depending on the competitive landscape, some opportunities might be table stakes, while others might be strategic differentiators. Choosing one over the other will depend on your current position in the market. A missing table stake could torpedo sales, while a strategic differentiator could open up new customer segments. The key is to consider how addressing each opportunity positions you against your competitors.

3. Company factors help us evaluate the strategic impact of each opportunity for our company, business group, or team. Each organizational context is unique. Google might choose to address an opportunity that Apple would never touch. We need to consider our organizational context when assessing and prioritizing opportunities. We want to prioritize opportunities that support our company vision, mission, and strategic objectives over opportunities that don't. We want to de-prioritize opportunities that conflict with our company values.

Across all three levels (company, business group, and team), you'll also want to consider strengths and weaknesses. Some companies will be better positioned to tackle some opportunities over others. Some teams may have unique skills that give them an unfair advantage when tackling a specific opportunity. We want to take all of this into account when assessing and prioritizing opportunities.

4. Customer factors help us evaluate how important each opportunity is to our customers. If we interviewed and opportunity-mapped well, every opportunity on our tree will represent a real customer need, pain point, or desire. However, not all opportunities are equally important to customers. We'll want to assess how important each opportunity is to our customers and how satisfied they are with existing solutions. We want to prioritize important opportunities where satisfaction with existing solutions is low over opportunities that are less important or where satisfaction with existing solutions is already high.
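One lightweight way to keep all four lenses in view for each sibling opportunity is to record a short qualitative note per lens, for team discussion. This is my own hypothetical sketch (the function, field names, and example notes are mine, not the book's), and it deliberately does not compute a score, since the book cautions against collapsing this judgment call into a formula:

```python
# Hypothetical: capture one qualitative note per lens for each sibling
# opportunity under consideration. No weighted total on purpose -- the
# comparison stays a team discussion, not a math exercise.
LENSES = ["sizing", "market", "company", "customer"]

def assess(opportunity, **notes):
    """Return an assessment row; every lens must get a note."""
    missing = [lens for lens in LENSES if lens not in notes]
    if missing:
        raise ValueError(f"missing lenses for {opportunity!r}: {missing}")
    return {"opportunity": opportunity, **notes}

row = assess(
    "Finding a show to watch is hard",
    sizing="affects most subscribers, several times a week",
    market="table stakes; every competitor offers browse and search",
    company="supports our engagement-first strategy",
    customer="high importance, low satisfaction with current search",
)
print(sorted(k for k in row if k != "opportunity"))
# ['company', 'customer', 'market', 'sizing']
```

The `ValueError` guard is there to catch the anti-pattern discussed later in the chapter: over-relying on one set of factors and silently ignoring the others.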

Embrace the Messiness

When we turn a subjective, messy decision into a quantitative math formula, we are treating an ill-structured problem as if it were a well-structured problem. The problem with this strategy is that it will lead us to believe that there is one true, right answer. And there isn't. Once we mathematize this process, we'll stop thinking and go strictly by the numbers. We don't want to do this.

Instead, we want to leave room for doubt. As Karl Weick, an educational psychologist at the University of Michigan, advises in the second opening quote, wisdom is finding the right balance between having confidence in what you know and leaving enough room for doubt in case you are wrong.

Avoid These Common Anti-Patterns

Delaying a decision until there is more data. We do want to be data-informed. But we also want to move forward. We'll learn more from testing our decisions than we will from trying to make perfect decisions. The best way to prevent this type of analysis paralysis is to time-box your decision. Give yourself an hour or two or, at most, a day or two. Then decide, based on what you know today, and move on. Trust that you'll course-correct if you get data down the road that tells you that you made a less-than-optimal decision.

Over-relying on one set of factors at the cost of the others. Some teams are all about opportunity sizing. Others focus exclusively on what's most important to their customers. Many teams forget to consider company factors and choose opportunities that will never get organizational buy-in. The four sets of factors (opportunity sizing, market factors, company factors, and customer factors) are designed to be lenses to give you a different perspective on the decision. Use them all.

Working backwards from your expected conclusion. Some teams go into this exercise with a conclusion in mind. As a result, they don't use the lenses to explore the possibilities and instead use them to justify their foregone conclusion. This is a waste of time. Go into this exercise with an open mind. You'll be surprised by how often you come away from it with a new perspective.

Chapter 8: Supercharged Ideation

"Creative teams know that quantity is the best predictor of quality."- Leigh Thompson, Making the Team

"You'll never stumble upon the unexpected if you stick only to the familiar." - Ed Catmull, Creativity, Inc.

Quantity Leads to Quality

For many of us, brainstorming seems unnecessary. We hear about a customer problem or need, and our brain immediately jumps to a solution. It's human nature. We are good at closing the loop: we hear about a problem, and our brain wants to solve it. However, creativity research tells us that our first idea is rarely our best idea. Researchers measure creativity using three primary criteria: fluency (the number of ideas we generate), flexibility (how diverse the ideas are), and originality (how novel an idea is).

Similar research shows that fluency is correlated with both flexibility and originality. In other words, as we generate more ideas, the diversity and novelty of those ideas increases. Additionally, the most original ideas tend to be generated toward the end of the ideation session. They weren't the first ideas we came up with. So even though our brain is very good at generating fast solutions, we want to learn to keep the loop open longer. We want to learn to push beyond our first mediocre and obvious ideas, and delve into the realm of more diverse, original ideas.

Now, not all opportunities need an innovative solution. You don't need to reinvent the "forgot password" workflow (but you should still test it; more on that in Chapter 11). But for the strategic opportunities where you want to differentiate from your competitors, you'll want to take the time to generate several ideas to ensure that you uncover the best ones.

The Problem With Brainstorming

Alex Osborn outlined four rules for brainstorming. One, focus on quantity. In other words, generate as many ideas as you can. Two, defer judgment, and separate idea generation from idea evaluation. Three, welcome unusual ideas. And four, combine and improve ideas. He suggested that groups come together in real time, face-to-face (remember, he predated the digital, asynchronous communication tools we have today), to generate ideas together using these four rules. For those of you who have participated in brainstorming sessions, these rules aren't new. They are still in common use today.

So, if we stick to what the research tells us, individuals generating ideas on their own do outperform brainstorming groups. Even if we think brainstorming is effective, we are probably falling prey to the illusion of group productivity. Working in a group feels easier, so we think we must be performing at a higher level. Now, if you are wondering if there is a way to get the benefit of reducing cognitive failures that we see in groups with the creative output of individuals brainstorming on their own, it turns out there is.

The key difference here is that individuals still generated ideas on their own. Participants started by ideating on their own. Then they shared their ideas with the group. Then they went back to ideating on their own. They never ideated as a group, but they received the benefit of hearing each other's ideas. You'll see in the methods described below that we'll be putting this same pattern into practice.

Getting Unstuck

First, don't try to spend an hour generating ideas. Take frequent breaks. Spread it out throughout your day. Try to generate ideas in the few minutes you have between meetings. After lunch, go for a walk, and daydream about what you might build. A change of scenery can often inspire new ideas. Try generating ideas at different times of the day. Some of us will be better first thing in the morning, when we have a lot of mental energy, others might find the late afternoon to be the optimal idea-generation time. Experiment. Find what works best for you.

In addition to ideating at different times and in different places, take advantage of incubation. Incubation occurs when your brain continues to consider a problem even after you've stopped consciously thinking about it. You've probably experienced this often in your life. After working on what seems like an unsolvable challenge all day, you finally go home and take a break. After a good night's sleep, you come to work, returning to the problem. Instantly, you identify a solution. In fact, it's hard to imagine why it seemed like such a hard problem yesterday. That's incubation. And it works. Incubation can be particularly helpful after hearing other people's ideas. You may not think of new ideas right away, but odds are, your brain is still working on it in the background. So, if you get stuck, sleep on it. Tomorrow will likely bring fresh ideas.

When you get stuck, start with your competition. But then look wider. Ask yourself: What other industries have solved similar problems? They don't need to be similar or even be in an adjacent industry. You are looking for similarities in the target opportunity. For example, if you work for a job board and you are helping recruiters evaluate job candidates, you can look at other job boards, but you can also look at how online shopping sites help shoppers evaluate products, you can look at how travel aggregators help travelers choose hotels, and you can look at how insurance companies present different policies. These industries are unrelated to each other, but they are each solving analogous problems.

Additionally, when you are stuck, you can start to consider what your extreme users might need. What would a power user want? What does the first-time user need? What about people with different disabilities? How about people who live in remote locations or bustling cities? Young people? Senior citizens? Your extreme users will vary by product, but thinking about the needs of different types of users as they relate to your target opportunity can help you generate more ideas that may work for everyone.

And finally, don't be afraid to consider wild ideas. Some people don't like this suggestion, because wild ideas are rarely pursued. But wild ideas can improve more feasible ideas. This is where the power of mixing and matching different solutions to identify even-better ideas comes into play. So, when ideating, pretend you have a magic wand: anything is possible.

Putting It All Into Practice

  1. Review your target opportunity. Make sure that everyone on your team knows what it means and is familiar with the necessary context. Make sure it's distinct from the other opportunities that you've discussed and that it's an appropriately sized leaf-node opportunity. If you need to revisit any of these concepts, review the prior chapter before continuing.
  2. Generate ideas alone. Take some time to jot down as many ideas as you can. When you get stuck, take a break, and come back to it. If you are still stuck, try to find inspiration from your competitors and analogous products. For analogous products, think broadly. You aren't limited to just the other players in your space.
  3. Share ideas across your team. You can do this in a face-to-face, real-time meeting, or you can do it asynchronously in a digital chat channel (e.g., Slack or Microsoft Teams). The key is to take the time to describe each of your ideas, allow people to ask questions, and riff on the ideas.
  4. Repeat steps 2 and 3. Remember, the benefit of sharing your ideas is that hearing other people's ideas will inspire even more ideas. So, don't skip repeating step 2 to ensure that you reap the rewards. Repeat until you've generated between 15 and 20 ideas for your target opportunity. Remember, research shows that your first ideas are rarely your best ideas. The goal is to push your creative output to find the more diverse and more original ideas.

Evaluating Your Ideas

Research shows that while we are better at generating ideas individually, we are better at evaluating ideas as a group. To dot-vote, allot three votes per member. As you vote, the only criterion you should consider is how well the idea addresses the target opportunity. You aren't voting for the coolest, shiniest ideas. Nor are you ruling out the hardest or even impossible ideas. We'll deal with that down the road. Each person can assign their votes however they please. They can place all three votes on one idea or spread them across three ideas. Once everyone has placed their votes, you'll select the three ideas that garnered the most votes.

It may take a few rounds of dot-voting to get to your top three. For example, if after round one of voting, several ideas have one or two votes, but no ideas have three or more votes, take some time to discuss your votes. Don't let this turn into a long debate or argument. Instead, let each person pitch the ideas they voted for. During your pitch, be sure to highlight why each idea best addresses the target opportunity. Then vote again.
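The tallying mechanics described above are simple enough to sketch. A hypothetical illustration (the idea names and ballots are made up):

```python
# Hypothetical dot-vote tally: each member gets three dots and may stack
# them on one idea or spread them across ideas; the top three ideas win.
from collections import Counter

def tally(ballots, top_n=3):
    """Sum dot votes across members and return the top_n ideas."""
    counts = Counter()
    for member_votes in ballots:
        assert len(member_votes) == 3, "each member gets exactly three dots"
        counts.update(member_votes)
    return [idea for idea, _ in counts.most_common(top_n)]

ballots = [
    ["skip intros", "skip intros", "watch party"],   # member 1 stacks two dots
    ["watch party", "offline mode", "skip intros"],  # member 2 spreads dots
    ["offline mode", "skip intros", "watch party"],  # member 3 spreads dots
]
print(tally(ballots))  # ['skip intros', 'watch party', 'offline mode']
```

If the vote is fragmented (many ideas with one or two dots each and no clear top three), that's the signal, per the text above, to pitch and re-vote rather than debate.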

Avoid These Common Anti-Patterns

Not including diverse perspectives. Most of the exercises in this book are designed for product trios to do together. However, ideation is best done with the entire team. You want to make sure everyone has a chance to contribute their ideas. You also might consider inviting key stakeholders who bring a different perspective. The more diversity in the group, the more diverse ideas you'll generate. However, make sure that you set the context for ideation by sharing your target opportunity and the customer context in which that opportunity occurs.

Generating too many variations of the same idea. Instead, deliberately work to identify categorically different ideas. If you get stuck, look for inspiration from analogous products. Analogous products don't need to be from your industry. In fact, the further away they are from your industry, the more likely you'll uncover diverse ideas. So, ask, "Who else has to solve a problem like this?" and then investigate how they solve it.

Limiting ideation to one session. Instead, give ideation the time it deserves. Let your ideators consider ideas over time. Take advantage of the brain's innate ability to incubate a problem.

Selecting ideas that don't address the target opportunity. When ideating, you want to encourage your participants to defer judgment. As a result, it's not uncommon to end up with solutions that don't address your target opportunity. Before dot-voting, remove any ideas that don't address your target opportunity. Otherwise, it can be easy to get distracted by a shiny idea that might be a good idea for some day in the future but isn't a good idea right now. In the previous chapter, you made a strategic decision when you chose a target opportunity. Don't undo that work now.

Chapter 9: Identifying Hidden Assumptions

Types of Assumptions

Desirability assumptions: Does anyone want it? Will our customers get value from it? As we create solutions, we assume that our customers will want to use our solution, that they will be willing to do the things that we need them to do, and that they'll trust us to provide those solutions. All of these types of assumptions fall into the desirability category.

Viability assumptions: Should we build it? There are many ideas that will work for our customers but won't work for our business. If we want to continue to serve customers over time, we need to make sure that our solutions are viable-that they create a return for our business. This typically means that the idea will generate more revenue than it will cost to build, service, and maintain. However, some ideas are designed to be loss leaders and instead contribute to another business goal besides revenue. But somewhere down the line, the idea must create enough value for the business to be worth the effort to create and maintain.

Feasibility assumptions: Can we build it? We primarily think about feasibility as technical feasibility. Is it technically possible? Feasibility assumptions, however, can also include, "What's feasible for our business?" For example, will our legal or security team allow for it? Will our culture support it? Does it comply with regulations?

Usability assumptions: Is it usable? Can customers find what they need? Will they understand how to use it or what they need to do? Are they able to do what we need them to do? Is it accessible?

Ethical assumptions: Is there any potential harm in building this idea? This is an area that is grossly underdeveloped for many product trios. As an industry, we need to do a better job of asking questions like: What data are we collecting? How are we storing it? How are we using it? If our customers had full transparency to those answers, would they be okay with it?

Story Map to Get Clarity

Start by assuming the solution already exists. You aren't story mapping what it will take to implement an idea. Instead, you are mapping what end-users will do to get value from the solution once it exists in the world.

Identify the key actors. Who needs to interact with whom for the idea to work? Some products, like Slack or Facebook, require that two or more end-users interact with each other for anyone to get value from the product. If this is the case, you'll want to represent multiple end-users in your story map. In two-sided marketplaces, you might have different types of end-users (e.g., buyers and sellers). In some products or services, the interface or software itself may be an actor in your map (e.g., for an end-user conversing with a chatbot, the chatbot should be listed as an actor in the story map).

Map out the steps each actor has to take for anyone to get value from the solution. Be specific. What does each actor need to do in order for someone to get value from the solution? For example, an actor has to engage with a chatbot by asking a question or making a request; the chatbot then needs to respond; and so on.

Sequence the steps horizontally over time. Lay out the steps horizontally one after the other. Sequence them in the order they need to happen. You may need to jump back and forth between players if they need to take turns taking actions. It's okay if some steps are optional. List them in the map where they might occur. If an end-user can choose multiple paths, map out the successful path. If there are multiple successful paths, map them out sequentially.
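The actor/step structure above can be captured as an ordered list of (actor, step) pairs, sequenced over time. A hypothetical chatbot example of my own, loosely following the book's:

```python
# Hypothetical story map for a support chatbot, assuming the solution
# already exists: each entry is (actor, step), ordered left to right in
# the sequence the steps must happen, alternating actors as they take turns.
story_map = [
    ("end-user", "opens the chat window"),
    ("end-user", "types a question"),
    ("chatbot", "interprets the question"),
    ("chatbot", "responds with an answer"),
    ("end-user", "confirms the answer solved the problem"),
]

actors = {actor for actor, _ in story_map}
print(sorted(actors))  # ['chatbot', 'end-user']
```

Keeping the map this flat makes the turn-taking between actors visible at a glance, which is exactly where hidden assumptions (does the chatbot actually understand the question?) tend to surface in the next chapter.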

Conduct a Pre-Mortem

Story maps aren't the only way to help us see our own assumptions. Gary Klein, a cognitive psychologist and author of several books on decision-making, flipped the idea of a post-mortem on its head. Post-mortems are after-project reviews where participants assess what went wrong and what could have gone better. Sprint retrospectives are a type of post-mortem. Pre-mortems, on the other hand, happen at the start of the project and are designed to suss out what could go wrong in the future.

Pre-mortems are a great way to generate assumptions. They leverage prospective hindsight-a technique where you imagine what might happen in the future. A pre-mortem encourages you to ask, "Imagine it's six months in the future; your product or initiative launched, and it was a complete failure. What went wrong?"

Prioritizing Assumptions

Avoid These Common Anti-Patterns

Not generating enough assumptions. Generating assumptions, like ideating, is intended to be a divergent exercise. The goal is to identify as many "gotchas" as you can to increase the chance that you surface the riskiest ones. However, many teams dramatically underestimate how many assumptions underlie their ideas. When I do these exercises, I often generate 20-30 assumptions for even a simple idea. If that sounds overwhelming, remember, you won't need to test all of these assumptions. Most of them will be harmless. You'll use the assumption-mapping exercise to quickly find the riskiest ones. However, the mapping exercise can only sort assumptions you've actually uncovered; if you never generate the riskiest ones, mapping won't surface them. Use the five assumption categories and the exercises in this chapter to help you generate as many assumptions as you can.

Phrasing your assumptions such that you need them to be false. Generating assumptions can be a bit of a devil's-advocate exercise. You are looking for what might go wrong with your ideas. As a result, you might be tempted to phrase your assumptions negatively. For example, if you need your users to log in to your service, you might phrase your assumption as, "Customers won't remember their password." However, this is backwards. You need customers to remember their password for your idea to work. When you are generating assumptions, always phrase your assumptions such that you need them to be true: "Customers will remember their passwords." For many assumptions, you'll find that this positive framing will make them easier to test.

Not being specific enough. I see many teams generate assumptions like this: "Customers will have time," "Customers will know what to do," and "Our engineers can build something like this." These assumptions are not specific enough to test. What will customers have time for? What do you need them to know how to do? What do engineers need to build? Be specific. These assumptions are much better: "Customers will take the time to browse all the options on our getting-started page," "Customers will know how to select the right option based on their situation," and "Our engineers can identify the right subset of options to show the customer based on the customer's profile data."

Favoring one category at the cost of other categories. Most teams have a bias toward one or two categories at the cost of the other categories. Some teams conflate desirability and usability and forget that just because a product is usable doesn't mean it's desirable. For products with challenging feasibility issues, it can be hard to remember to first test and see if customers even want the solution. Most teams forget about ethical assumptions altogether.

Chapter 10: Testing Assumptions, Not Ideas

"Good tests kill flawed theories; we remain alive to guess again." - Karl Popper

"Each answer a team collects-positive or negative-is a unit of progress" - Jeff Gothelf and Josh Seiden, Sense & Respond

Simulate an Experience, Evaluate Behavior

By defining these criteria upfront, you are doing two things. First, you are aligning as a team around what success looks like so that you all know how to interpret the results. This will help to ensure that your assumption tests are actionable. And second, you are helping to guard against confirmation bias. Remember, confirmation bias makes us more likely to see the evidence that our idea will succeed than the evidence that it might not succeed. If we don't define our success criteria upfront, when we try to interpret the results, our brains will actively look for evidence that supports the assumption, and we'll likely miss the evidence that could refute it. To avoid this, we want to define what success looks like upfront (before we see the results).

So how do we choose the numbers? This is a subjective decision. Your goal is to find the right balance between speed of testing and what aligns your team around an actionable outcome. You want to test your assumption with as few people as possible (as it will be faster) but with enough people to give your team the information they need to act on the data. Now remember, you aren't trying to prove that this assumption is true. That burden of proof is too high. You are simply trying to reduce risk. Keep your assumption map in mind. Your goal is to move the assumption from right to left. How many people would it take to convince you that this assumption is more known than unknown? That's the negotiation you are having as a team.

If your simulation is less than optimal, as we saw with the above examples, you'll need to modify your numbers to accommodate for these shortcomings. If someone raises the concern that some sports fans might want to watch sports on our platform, but in the moment we ask them, they might be more likely to choose a comedy, then you might lower your threshold for success to account for that. If someone else is worried that choosing from three subscriptions biases the results in favor of your subscription (because the reality is people have, on average, five subscription services), then you could either decide to change your mockup to show five services or raise your threshold for your success criteria.

The key outcome with this exercise is to agree as a team on the smallest assumption test you can design that still gets you results that the team will feel comfortable acting on.

Early Signals vs. Large-Scale Experiments

Inevitably, someone on your team is going to raise a concern with making decisions based on small numbers. How can we have confidence in the data if we talk to only five customers? You might be tempted to test with larger pools of people to help get buy-in. But this strategy comes at a cost-it takes more time. We don't want to invest the time, energy, and effort into an experiment if we don't even have an early signal that we are on the right track.

Rather than starting with a large-scale experiment (e.g., surveying hundreds of customers, launching a production-quality A/B test, worrying about representative samples), we want to start small. You'll be pleasantly surprised by how much you can learn from getting feedback from a handful of customers.

With assumption testing, most of our learning comes from failed tests. That's when we learn that something we thought was true might not be. Small tests give us a chance to fail sooner. Failing faster is what allows us to quickly move on to the next assumption, idea, or opportunity. Karl Popper, a renowned 20th-century philosopher of science, in the opening quote argues, "Good tests kill flawed theories," preventing us from investing where there is little reward, and "we remain alive to guess again," giving us another chance to get it right.

As we test assumptions, we want to start small and iterate our way to bigger, more reliable, more sound tests, only after each previous round provides an indicator that continuing to invest is worth our effort. We stop testing when we've removed enough risk and/or the effort to run the next test is so great that it makes more sense to simply build the idea.

Understanding False Positives and False Negatives

Now, this method isn't flawless. When working with small numbers, we will encounter false positives and false negatives. Let's explore the impact of these errors on our work.

When our experiment fails, even though our larger population exhibits the behavior that we want to see, we call this a "false negative." Our test is providing data that indicates our assumption is faulty when it may not be.

A false positive is when our test gives us data suggesting that our assumption is true, when it isn't. This sounds far riskier than a false negative, but, in practice, it's not. Suppose we run our small test, and we learn that everyone wants to watch sports, so we call our test a success, and we move forward. Remember, we aren't making a go/no-go decision based on one assumption test. We are either moving on to test another assumption related to the same idea, or we are running a bigger, more reliable test on the same assumption. If our idea really is faulty, odds are that our next round of assumption testing will catch it. False positives usually get surfaced in successive rounds of testing. The cost of a false positive in a small test is usually the time and effort required to run the next-bigger test. That's not trivial, but we still avoid the far-bigger cost of building the wrong solutions.

There is a cost to false negatives and false positives, and we should be aware that these costs exist. But the cost is not so great that we should be starting with large-scale, quantitative experiments every time. If we did that, we would never ship any software. Our tests would simply take too long. The vast majority of the time, you will learn plenty from your small-scale tests.
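These trade-offs can be made concrete with a quick simulation (a hypothetical sketch, not from the book). Assume the behavior we're testing for truly occurs in 60% of the population and our pre-agreed bar is 7 out of 10 participants. The sketch estimates how often a small test fails anyway (a false negative), and how often it passes when the true rate is only 40% (a false positive):

```python
import random

def run_trials(true_rate, n_participants, pass_threshold, trials=10_000, seed=42):
    """Simulate repeated small assumption tests against a known true rate.

    Returns the fraction of simulated tests that meet the pass threshold.
    """
    rng = random.Random(seed)
    passes = 0
    for _ in range(trials):
        successes = sum(rng.random() < true_rate for _ in range(n_participants))
        if successes >= pass_threshold:
            passes += 1
    return passes / trials

# Assumption actually holds (60% of the population exhibits the behavior),
# yet the 7-out-of-10 bar often isn't met -> false negatives.
false_negative_rate = 1 - run_trials(true_rate=0.60, n_participants=10, pass_threshold=7)

# Assumption is actually faulty (only 40% exhibit the behavior),
# yet some small tests still pass -> false positives.
false_positive_rate = run_trials(true_rate=0.40, n_participants=10, pass_threshold=7)

print(f"false negatives: {false_negative_rate:.0%}")
print(f"false positives: {false_positive_rate:.0%}")
```

Under these assumed numbers, a true 60% behavior fails the 7-of-10 bar more often than not, which illustrates why a single failed small test shouldn't be treated as definitive before moving on.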

A Quick Word on Science

So, while we want to adopt a scientific mindset and we want to think about the reliability and the validity of the data that we collect, we are not running scientific experiments. While we need to be thoughtful about our research methods, we also need to be aware that we are not validating or invalidating anything. It's important that we recognize that our research findings are not truths-they are merely confirming or disconfirming evidence that either supports or refutes our point of view. Our goal as a product team is not to seek truth but to mitigate risk. We need to do just enough research to mitigate the risk that our companies cannot bear and no more.

Running Assumption Tests

There are two tools that should be in every product team's toolbox-unmoderated user testing and one-question surveys. Unmoderated user-testing services allow you to post a stimulus (e.g., a prototype) and define tasks to complete and questions to answer. Participants then complete the tasks and answer the questions on their own time. You get a video of their work. These types of tools are game changers. Instead of having to recruit 10 participants and run the sessions yourself, you can post your task, go home for the night, and come back the next day to a set of videos ready for you to watch.

However, unmoderated testing and one-question surveys aren't the only ways to test assumptions. Sometimes we already have the data we need in our own database. For example, we might look at how many of our current subscribers have searched for sports on our platform and use this as an indicator of interest in sports. Before you dive into the data, be sure to define your evaluation criteria upfront. How many search queries will you sample? How many need to be related to sports? How will you determine "related to sports"? Remember, aligning around success criteria upfront guards against confirmation bias and ensures that your team agrees on what the results mean.
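A minimal sketch of such a data-mining check, with the sample size, the pass threshold, and the "related to sports" definition all agreed upfront (every name and number here is hypothetical):

```python
import random

# Pre-agreed evaluation criteria, defined BEFORE looking at the data:
SAMPLE_SIZE = 200          # how many search queries we will sample
MIN_SPORTS_QUERIES = 10    # how many must be sports-related to pass
SPORTS_TERMS = {"nfl", "nba", "soccer", "football", "basketball", "tennis"}

def is_sports_related(query):
    """Crude keyword check; agree on this definition upfront, too."""
    return any(term in query.lower() for term in SPORTS_TERMS)

def evaluate_search_interest(queries, seed=0):
    """Sample queries and test them against the pre-agreed threshold."""
    rng = random.Random(seed)
    sample = rng.sample(queries, min(SAMPLE_SIZE, len(queries)))
    hits = sum(is_sports_related(q) for q in sample)
    return hits, hits >= MIN_SPORTS_QUERIES
```

Because the threshold and the keyword definition are fixed before anyone queries the database, the team can't unconsciously redraw the target around whatever the data happens to show.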

Product teams can typically test most of their assumptions with a combination of prototype tests (either unmoderated or in person), one-question surveys, or through data-mining. However, there are dozens of experiment types. If you want to do a deep dive on qualitative tests, pick up a copy of Laura Klein's UX for Lean Startups. She does a good job of surveying a wide breadth of methods. Another great reference is David Bland's Testing Business Ideas. The last third of David's book is an encyclopedia of experiment types. However, don't get overwhelmed with having to master all of these experiment types. If you keep the simple assumption-simulate-evaluate framework in mind, you'll be well on your way to becoming a strong assumption tester.

Avoid These Common Anti-Patterns

Overly complex simulations. Some teams spend countless hours, days, or even weeks trying to design and develop the perfect simulation. It's easy to lose sight of the goal. In your first round of testing, you are looking to design fast tests that will help you gather quick signals. Design your tests to be completed in a day or two, or a week, at most. This will ensure that you can keep your discovery iterations high.

Using percentages instead of specific numbers when defining evaluation criteria. Many teams equate 70% and 7 out of 10. So instead of defining their evaluation criteria as 7 out of 10, they tend to favor the percentage. These sound equivalent, but they aren't. First, when testing with small numbers, we can't conclude that 7 out of 10 will continue to mean 70% as our participant size grows. We want to make sure that we don't draw too strong a conclusion from our small signals. Second, and more importantly, "70%" is ambiguous. If we test with 10 people and only 6 exhibit our desired behavior, some of us might conclude that the test failed. Others might argue that we need to test with more people. Be explicit from the get-go about how many people you will test with when defining your success criteria.

Not defining enough evaluation criteria. It's easy to forget important evaluation criteria. At a minimum, you need to define how many people to test with and how many will exhibit the desired behavior. But for some tests, defining the desired behavior may involve more than one number. For example, if your test involves sending an email, you might need to define how many people will receive the email, how long you'll give them to open the email, and whether your success criteria is "opens" or "clicks." Pay particular attention to the success threshold. Complex actions may require multiple measurements (e.g., opens the email, clicks on the link, takes an action).
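As a sketch of the two anti-patterns above, the evaluation criteria for the hypothetical email test might be written down as explicit counts, one per measurement, before the test runs (all names and numbers here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmailTestCriteria:
    # Explicit counts, not percentages, agreed on before the test runs.
    recipients: int      # how many people receive the email
    window_days: int     # how long we wait before evaluating
    min_opens: int       # success threshold: opens the email
    min_clicks: int      # success threshold: clicks on the link
    min_actions: int     # success threshold: takes the follow-up action

criteria = EmailTestCriteria(recipients=100, window_days=7,
                             min_opens=40, min_clicks=15, min_actions=7)

def evaluate(opens, clicks, actions, c=criteria):
    """Pass only if every pre-agreed threshold is met."""
    return opens >= c.min_opens and clicks >= c.min_clicks and actions >= c.min_actions

print(evaluate(opens=42, clicks=18, actions=9))   # every threshold met
```

Writing the criteria down as a frozen record makes the "defined upfront" agreement tangible: nobody can quietly reinterpret the bar after the results come in.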

Testing with the wrong audience. Make sure that you are testing with the right people. If you are testing solutions for a specific target opportunity, make sure that your participants experience the need, pain point, or desire represented by that target opportunity. Remember to recruit for variation. Don't just test with the easiest audience to reach or the most vocal audience.

Designing for less than the best-case scenario. When testing with small numbers, design your assumption tests such that they are likely to pass. If your assumption test passes with the most likely audience, then you can expand your reach to tougher audiences. This might feel like cheating, but you'll be surprised how often your assumption tests still fail. If you fail in the best-case scenario, your results will be less ambiguous. If your test fails with a less-than-ideal audience, someone on the team is going to argue you tested with the wrong audience, and you'll have to run the test again. Remember, we want to design our tests to learn as much as we can from failures.

Chapter 11: Measuring Impact

"Your delusions, no matter how convincing, will wither under the harsh light of data." - Alistair Croll and Benjamin Yoskovitz, Lean Analytics

First, it's easy to get caught up in successful assumption tests. The world is full of good ideas that will succeed on some level. However, an outcome-focused product trio needs to stay focused on the end goal-driving the desired outcome. We need to remember to measure not just what we need to evaluate our assumption tests, but also what we need to measure impact on our outcome.

Second, this story also highlights the iterative nature of discovery and delivery. Many teams ask, "When are we done with discovery? When do we get to send our ideas to delivery?" The answer to the first question is simple. You are never done with discovery. Remember, this book is about continuous discovery. There is always more to learn and to discover.

This is why we say discovery feeds delivery and delivery feeds discovery. They aren't two distinct phases. You can't have one without the other. In Chapter 10, you learned to iteratively invest in experiments, to start small, and to grow your investment over time. Inevitably, as your experiments grow, you are going to need to test with a real audience, in a real context, with real data. Testing in your production environment is a natural progression for your discovery work. It's also where your delivery work begins. If you instrument your delivery work, discovery will not only feed delivery, but delivery will feed discovery.

Don't Measure Everything

It's counterintuitive, but when instrumenting your product, don't try to measure everything from the start. You'll quickly get overwhelmed. You'll spend weeks debating what events to track, how to name your events, and who is responsible for what before you even get started. This is a waste of time. There is no way to know from the outset how you should set everything up. No matter how much planning you do, you'll make mistakes. You'll measure something that you thought meant one thing and discover later that it really meant something else. You'll develop a naming schema only to later discover that you forgot about a key part of the product. You'll find the perfect way to measure a key action only to learn months later that you had a bug that caused that event to trigger ten times more often than it should have. It happens to all of us. Trust that you'll learn as you go.

Instead of trying to plan everything upfront, start small, and experiment your way to the best instrumentation.

Instrument Your Evaluation Criteria

Start by instrumenting what you need to collect to evaluate your assumption tests. As you build your live prototypes, consider what you need to measure to support your evaluation criteria. Don't worry about measuring too much beyond that.

Notice, however, that we did not start by measuring everything. We didn't track every click on every page. We started with our assumptions, and we measured exactly what we needed to test our assumptions.
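As an illustration of that restraint, instrumentation can start as nothing more than a whitelist of the events the current assumption tests need (a hypothetical sketch; the event names echo the search and job metrics discussed in this chapter):

```python
# Track only the events tied to this week's evaluation criteria,
# not every click on every page (event names are illustrative).
TRACKED_EVENTS = {"search_start", "job_view", "job_application"}

event_log = []

def track(event_name, **properties):
    """Record an event only if it supports the current assumption tests."""
    if event_name not in TRACKED_EVENTS:
        return  # deliberately ignore everything else for now
    event_log.append({"event": event_name, **properties})

track("search_start", user="s1")
track("page_scroll", user="s1")   # not part of the criteria; dropped
track("job_application", user="s1", job_id=42)
print(len(event_log))
```

Growing the whitelist one assumption test at a time avoids the weeks-long naming-schema debates the chapter warns against.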

Measure Impact on Your Desired Outcome

In addition to instrumenting what you need to evaluate your assumption tests, you also want to measure what you need to evaluate your progress toward your desired outcome.

Our outcome at AfterCollege was to increase the number of students who found jobs on our platform. For our assumption tests, we were measuring search starts, job views, and job applications, but these metrics were only leading indicators of our desired outcome.

Some people in the company argued that we should measure our success by job applications. After all, we had no control over who a company hired or how a student interviewed. But the number of job applications was an easy metric to game. It would be easy to encourage students to apply to many jobs, but this wouldn't necessarily increase their chances of finding a job. If we wanted to measure the value we created for our customers, we knew we needed to measure when a student got a job. We couldn't be afraid to measure hard things.

Since most college students have little to no interviewing experience, nor do they know how to negotiate offers, we decided that we could use this lack of knowledge to help us measure what happens after they completed an application. 21 days after a student applied for a job, we sent the student an email and asked them what happened. The email gave them four options:

  1. "I never heard back." If they selected this option, we encouraged them to find new jobs to apply to.
  2. "I got an interview." If they selected this option, we gave them tips for how to prepare for their interview.
  3. "I got an offer." If they selected this option, we gave them tips on how to evaluate and negotiate their offer.
  4. "I got the job." If they selected this option, we congratulated them.
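The four-option follow-up above can be sketched as a simple lookup that doubles as an outcome measurement (a hypothetical illustration, not AfterCollege's actual implementation):

```python
from collections import Counter

# Each reply maps to a helpful next step for the student.
FOLLOW_UPS = {
    "no_response": "Here are new jobs you might apply to.",
    "interview":   "Tips to prepare for your interview.",
    "offer":       "Tips to evaluate and negotiate your offer.",
    "hired":       "Congratulations!",
}

def follow_up(status):
    """Pick the next-step message for a student's reply."""
    return FOLLOW_UPS[status]

def hire_rate(responses):
    """The same replies, aggregated, measure the desired outcome."""
    counts = Counter(responses)
    return counts["hired"] / len(responses)
```

The design choice worth noting is the dual use: the email helps the student at every branch, and the replies give the team an outcome metric that no on-platform event could capture.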

Here's the key lesson. Just because the hire wasn't happening on our platform didn't mean it wasn't valuable for us to measure it. We knew it was what would create value for our students, our employees, and ultimately our own business. So, we chipped away at it. We weren't afraid to measure hard things and you shouldn't be, either.

Avoid These Common Anti-Patterns

Getting stuck trying to measure everything. By far the most common mistake teams make when instrumenting their product is that they turn it into a massive waterfall project, in which they think they can define all of their needs upfront. Instead, start small. Instrument what you need to evaluate this week's assumption tests. From there, work toward measuring the impact of your product changes on your product outcome. And with time, work to strengthen the connection between your product outcome and your business outcome.

Hyper-focusing on your assumption tests and forgetting to walk the lines of your opportunity solution tree. It's exhilarating when our solutions start to work. It feels good when customers engage with what we build. But sadly, satisfying a customer need is not our only job. We need to remember that our goal is to satisfy customer needs while creating value for our business. We are constrained by driving our desired outcome. This is what allows us to create viable products, and viable products allow us to continue to serve our customers. So, when you find a compelling solution, remember to walk the lines of your opportunity solution tree. Desirability isn't enough. Viability is the key to long-term success.

Forgetting to test the connection between your product outcome and your business outcome. Unfortunately, it's not enough to drive product outcomes. The connection between our product outcome and our business outcome is a theory that needs to be tested. As you build a history of driving a product outcome, you need to remember to evaluate if driving the product outcome is, in turn, driving the business outcome. It's what keeps our businesses thriving, allowing us to continue to serve our customers.

Chapter 12: Managing the Cycles

Read the books for the stories

Avoid These Common Anti-Patterns

Overcommitting to an opportunity. Throughout your discovery, you will uncover opportunities that are important to your customers that you won't be able to deliver on. One of the hardest challenges with opportunity selection is identifying the right opportunity for right now. However, a round of assumption tests should help you assess fit quickly.

Avoiding hard opportunities. Some teams interpret continuous delivery to mean continuous delivery of easy solutions. Quick wins have a time and a place in our work. If we can deliver impact this week, we should. However, many of the opportunities we uncover will take time to address adequately. Don't confuse quick testing and iterative delivery with easy solutions. Before we invested months into building a robust machine-learning solution, we started with a crude approximation that we could prototype in a few days.

Drawing conclusions from shallow learnings. As you learned in Chapter 2, discovery requires strong critical-thinking skills. Otherwise, it's easy to draw fast conclusions from shallow learnings.

Giving up before small changes have time to add up. While you do want to measure the impact of your product changes, don't expect to see large step-function results from every change. Oftentimes it takes a series of changes to move the needle on our outcome.

Chapter 13: Show Your Work

"The more leaders can understand where teams are, the more they will step back and let teams execute." - Melissa Perri, Escaping the Build Trap

Don't Jump Straight to Your Conclusions

When preparing for a meeting with stakeholders, we tend to focus on our conclusions-our roadmap, our release plan, our prioritized backlog. More often than not, this is exactly what our stakeholders are asking us to share. Even in companies that espouse a focus on outcomes, we still tend to spend most of our time talking about outputs.

The challenge with this approach is that our stakeholders often have their own conclusions. It's easy to have an opinion about outputs. We all have our own preferences about how a product or service should work. When we anchor the conversation in the solution space, we encourage our stakeholders to share their own preferences. However, these preferences aren't always grounded in good discovery. After all, it's our job to do discovery, not our stakeholders'.

When you frame the conversation in the solution space, you are framing the conversation to be about your opinion about what to build versus your stakeholders' opinion about what to build. If your stakeholders are more senior to you, odds are their opinion is going to win. This is why we have the dreaded HiPPO acronym (the Highest Paid Person's Opinion) and the saying "The HiPPO always wins." Many product trios complain about the HiPPO but miss the role they play in creating this situation.

When we present our conclusions, we aren't sharing the journey we took to reach those conclusions. Instead, we are inviting our stakeholders to an opinion battle-a battle we have no chance of winning.

Slow Down and Show Your Work

When meeting with stakeholders, don't start with your conclusions. Instead, slow down and show your work. Throughout this book, you've learned to use an opportunity solution tree to help you chart the best path to your desired outcome. This same visual can help you share your work with your stakeholders. Just like it's easy for us to get distracted by shiny new ideas, it's also easy for our stakeholders to get distracted. It's our job to set the context for how product decisions are made. Your opportunity solution tree helps you do exactly that. And just like it helped you and your team build confidence in your decisions, it will do the same for your stakeholders.

When meeting with stakeholders, start at the top of your tree. Remind your stakeholders what your desired outcome is. Ask them if anything has changed since you last agreed to this outcome. This sets the scope for the conversation.

Share how you mapped out the opportunity space. Highlight the top-level opportunities. Drill into the detail only when and where they ask for it. Ask them if you missed anything. Consider that they may have knowledge of opportunities that you might have missed. Capture their suggestions. You can always vet them in your future customer interviews.

Share how you assessed and prioritized the opportunity space. Use the tree structure to walk through each decision you made. Choose the appropriate level of detail based on the stakeholder you are talking to. Ask them if they would have made a different decision at each decision point. Consider their feedback.

Share more context about your target opportunity. Help them fully understand the customer need or pain point you intend to address. Use your interview snapshots to help your stakeholders empathize with your customers. Answer their questions. This step is critical. Your stakeholder needs to fully understand the opportunity you are pursuing before you share solutions with them. This is what sets the context for how to evaluate solutions and moves the conversation away from opinions and preferences.

Share the solutions you generated. Ask them if they have any of their own ideas. Make sure you capture and consider them. Share the set of three solutions you plan to move forward with. Ask them if they would have chosen a different set. Stay open-minded. You may have invested time and energy into your solution set, but remember: Solution ideas are a dime a dozen. The key criteria for your first solution set is diversity. Your stakeholders can often help you generate more diverse ideas than what your team can do on their own. If that's the case, don't be afraid to swap in some of their ideas.

If you've already started assumption testing, share your story maps and your assumption lists. Make sure your stakeholders fully understand how each solution might work. Remember, this is where opinions and preferences might pop up again. Gently remind your stakeholders what your target opportunity is. Ask your stakeholders to add to your assumption lists. This is where their unique knowledge and expertise can be invaluable for helping us catch our own blind spots.

Share your assumption map. Be sure to add any of the assumptions that your stakeholders identified. Ask them if they would have prioritized the assumptions differently. Make adjustments as needed.

Share your assumption tests. If you have data, share the data. Otherwise, share your execution plans. Ask for feedback. Consider and integrate their feedback.

Repeat.

When we show our work, we are inviting our stakeholders to co-create with us. Instead of sharing our conclusions and inviting them to share their preferences, we are sharing our work and inviting them to assess our thinking and to add their own. We are leveraging their expertise and improving our process.

Finally, I described this process as a one-time event. But a good product trio knows to continuously manage stakeholders. Share your work along the way, rather than all at the end. Be thoughtful about when and how to share your work. Some stakeholders will want all the details week over week; others might want the highlights monthly. Adapt to what your stakeholders need. But even if they ask for outputs, take the time to show the work that helped you conclude those were the right outputs.

Generate and Evaluate Options

When we take the time to show our work, using visual artifacts like experience maps, opportunity solution trees, and story maps, we are inviting our stakeholders along for the journey with us. Instead of presenting our conclusion-this is the roadmap, release plan, and backlog that will help us reach our desired outcome-we are presenting the potential paths we might take to get there. We are inviting our stakeholders to help us choose the right path. Instead of presenting a conclusion, we are generating and evaluating options. This allows our stakeholders to be a part of the process.

We are inviting them to co-create with us, which leads to much more buy-in and long-term success.

Common Anti-Patterns

Telling instead of showing. Even though we all know that showing is better than telling, all too often, we fall into the trap of telling instead of showing. We are proud of our work. We are excited about our conclusions. We love our ideas, so, of course, our stakeholders will love our ideas, too. We rush into telling our stakeholders everything we've learned instead of showing our stakeholders so that they can draw their own conclusions.

There's a cognitive bias that is coming into play when we do this. It's called the curse of knowledge. Once we know something (as we do in this situation, with a wealth of discovery work that supports our point of view), it's hard for us to remember what it was like not to have that knowledge. In fact, our conclusions-our roadmaps, our backlogs, our release plans-start to become obvious. We forget that not only are they not obvious to our stakeholders but also that they very likely have their own conclusions that seem obvious to them. The key to avoiding this "curse of knowledge" is to slow down. Start at the beginning. Walk your stakeholder through what you learned and what decisions you made. Give them space to follow your logic, and, most importantly, give them time to reach the same conclusion.

Overwhelming stakeholders with all the messy details. Even though we want to slow down and show our work, we don't want to overwhelm our stakeholders with every last detail of what we've learned. If you are interviewing customers and running several assumption tests every week, everything you are learning will quickly overwhelm a busy stakeholder.

Instead, you need to act as a smart filter. Tailor the detail and context to the stakeholder you are talking to. What does this stakeholder, in particular, need to know? Your boss might enjoy the discovery journey and want week-over-week updates of how things are going. Your marketing manager probably doesn't want that much detail. Instead, monthly updates with just the highlights might be adequate. Your CEO probably needs even less detail.

However, when someone wants less detail, it doesn't mean you aren't showing your work. Even with a busy CEO, you still want to start with the outcome you are driving, highlight the top two or three opportunities, give a quick explanation of why you chose the one you did, highlight your top solutions, and share the results of one or two assumption tests that support your final decision.

Arguing with stakeholders about why their ideas won't work. As you do more and more assumption tests, assumptions become building blocks. You start to learn which building blocks will work and which won't. When you hear a new idea, you are going to be able to quickly assess it based on those building blocks. However, when working with stakeholders, we need to remember that they aren't starting from the same set of building blocks. The fastest way to discourage your stakeholders is to shoot down their ideas. Remember, nobody likes the know-it-all.

Instead of jumping straight to why an idea won't work, use your discovery framework to help the stakeholder see where their idea does fit. For example, is the stakeholder focused on a different outcome from you? If yes, then don't shoot down their idea. Even if you don't like the idea (remember, our preferences don't matter), you can remind your stakeholder that, while their idea might be a good fit for their outcome, it doesn't support your outcome right now. You can follow this same strategy if their solution addresses a different opportunity. You can always say something like, "That idea has promise. We'll consider it when we address that opportunity." You can even capture it on your tree or in your idea backlog (not your development backlog) so that you remember to return to it later.

If your stakeholder is suggesting a solution for your target opportunity, consider it. Should it be in your consideration set? If you can see that it is based on a faulty assumption, don't just call that out. Help your stakeholder reach that conclusion on their own. You can do this by story mapping their idea together. Generate assumptions together. When your stakeholder sees what assumptions their idea is based upon, you can now share what you've learned about those assumptions in your past assumption tests. This helps your stakeholders reach their own conclusions about their own ideas.

Trying to win the ideological battle instead of focusing on the decision at hand. No matter how strong your discovery process is, there will still be times when your stakeholders swoop in and ask you to do things their way. If they are more senior to you in the corporate hierarchy, that's their prerogative. What you can control is how you respond. I strongly recommend that you don't turn the conversation into an ideological battle. In fact, if you ever catch yourself saying, "This is the way it's supposed to be done," take a deep breath, and walk away from the conversation. You aren't going to win the ideological war in one conversation.

Instead, you need to take stock of the decision that needs to be made and focus on the best outcome given what you have to work with. Save the ideological war for later (or never). You aren't going to convince your stakeholder that their worldview is wrong. In fact, this is tied to the "Show, don't tell" advice above. When you are asked to deviate from your discovery process, telling your stakeholders that they are doing it wrong isn't going to get you anywhere. Focus on the opportunities with which you can show the benefit of working this way. Choose your battles. Don't fight the ones you can't win.

Part III: Developing Your Continuous Discovery Habits

Chapter 14: Start Small, and Iterate

Build Your Trio

Don't work alone. The habits in this book are designed to be adopted by a cross-functional trio. Even if your team isn't fully resourced or your company culture doesn't support the trio model, you can start building these relationships yourself. If you are a product manager, find a designer and an engineer to partner with. Consult them on key decisions. Work together to decide what to build.

If your teammates change from project to project, your trio may change with it. That's okay.

If the first person you ask isn't interested, find someone who is. Start small with your ask. Instead of asking them to partner with you on all of your discovery decisions, ask them to weigh in on one small decision. Iterate from there.

If your company doesn't hire designers, find someone who is design-minded. Every company has people who naturally think from a usability perspective. Look for people who are good at simplifying complex concepts, have firsthand experience with your customers, and have an abundance of empathy for your customers' challenges.

Your guiding principle is simple: How can I include all three disciplines in as many discovery decisions as I can? Make next week look better than last week. Repeat.

Once you have your trio in place, you are ready to adopt the keystone habit of continuous discovery.

Start Talking to Customers

If you aren't familiar with the concept of a keystone habit, it comes from Charles Duhigg's book The Power of Habit: Why We Do What We Do in Life and Business. Duhigg argues, "Keystone habits start a process that, over time, transforms everything." They are habits that, once adopted, drive the adoption of other habits.

For most people, exercise is a keystone habit. When we exercise, we have more energy, and thus we are more productive at work. For others, making your bed each morning is a keystone habit. It sets the tone of rigor and discipline from the start of your day. This is why many military leaders advocate for this habit.

To be clear, it's not that exercise makes you eat better or making your bed makes you more disciplined, but doing the former makes the latter easier. The keystone habit builds motivation for the subsequent habits.

When product teams engage with their customers week over week, they don't just get the benefit of interviewing more often; they also start rapid prototyping and experimenting more often. They remember to doubt what they know and to test their assumptions. They do a better job of connecting what they are learning from their research activities with the product decisions they are making.

I believe continuous interviewing is a keystone habit for continuous discovery. Of all the habits in this book, if you are looking for one place to get started, this is it.

No matter your situation, this is the habit to start with.

Work Backward

When you are asked to deliver a specific solution, work backward. Take the time to consider, "If our customers had this solution, what would it do for them?" If you are talking to customers regularly, ask them. Try to uncover the implied opportunity. Even if it's a wild guess, starting to consider customer needs, pain points, and desires will help you deliver a better solution.

You can apply the same question to your business to uncover the implied outcome: "If we shipped this feature, what value would it create for our business?" Refine your answer until you get to a clear metric; that's your outcome. By the way, by asking those two questions, you've also built your first opportunity solution tree.

As you work on requirements for the solutions you were asked to build, remember to story map your ideas. Use your story maps to identify hidden assumptions. Even if you don't have the infrastructure to quickly prototype or test your assumptions, being aware of your assumptions will help you notice the evidence around you that either supports or refutes them. When you uncover a faulty assumption, work with your stakeholders to evolve the idea. Better yet, when a stakeholder brings a solution to you, story map and identify assumptions with them. The idea will improve right then and there.

Work with your stakeholders to identify the impact they expect a given feature to have. Document that conversation. As you implement the feature, be sure to instrument what you need to measure against the expected impact. Start doing post-release impact reviews with your stakeholders. Remind them what impact they expected a feature to have. Share with them the impact the feature actually had. If it falls short, as it inevitably will, share the implied opportunity you uncovered by asking, "Are we trying to solve this customer problem with this feature?" If your stakeholder agrees, ask if you can consider alternative solutions to that same customer need. Or better yet, ideate with your stakeholders. Congratulations! You just built out the first mini-branch of your opportunity solution tree.

The best time to advocate for discovery is when a feature falls short of expectations. You can gently suggest ways that you could have discovered the gap earlier in the process. This is a great time to share what you are learning in your interviews. But be careful. You don't want to come across as a know-it-all or have an "I told you so" attitude. Instead, approach the situation as a collaborative problem solver. Work with your stakeholders to evolve your processes. If they push back, let up. Remember, you don't have to worry about how other people work. You can make great strides yourself, focusing on how you work. But you'll be pleasantly surprised at how receptive folks are to small changes when things don't go as expected. Read the room, and adjust your suggestions accordingly.

Use Your Retrospectives to Reflect and Improve

Meet regularly as a trio to reflect on your discovery process. If you already do Scrum retrospectives, it's easy to add a couple of reflective questions to this meeting to also reflect on your discovery process. I encourage my teams to ask, "What did we learn during this sprint that surprised us?" This could be anything: a feature release that didn't have the expected impact, a new insight we learned in a customer interview, or a feasibility hurdle that required us to redesign a solution. Make a list.

Then, for each item on the list, ask, "How could we have learned that sooner?" The answers to these questions will help you improve your discovery process. If a release didn't have the intended impact, was there a faulty assumption that you neglected to uncover? Did it not get prioritized as one of your "leap of faith" assumptions for testing? If you learned a new insight during a customer interview, was it because you misunderstood a customer need, or did you uncover a new part of the customer experience for the first time? If you ran into a feasibility hurdle, is it because the requirements were misunderstood (maybe you need to revisit your story maps)? Or perhaps feasibility assumptions are a bit of a blind spot for your team.

As you conduct this retrospective, be nice to yourselves. Remember, no matter how good you get at discovery, you'll still run into surprises. Surprises help us improve. Take the time to learn from them.

Avoid These Common Anti-Patterns

Focusing on why a given strategy won't work (AKA "That will never work here"), instead of focusing on what is within your control. After every conference or meetup talk, participants always ask a question that falls into the form of, "That would never work at my company." It's easy to hear or read about what other companies do and think their tactics won't work at your company. It's true that every organization is unique. However, I've worked with teams in a variety of industries (from banking to healthcare to retail to marketing automation to security), at companies of all sizes (from two founders just getting started to global companies with hundreds of thousands of employees), on all types of products and services. The habits in this book have been adopted and worked at all of them. Do they need to be adapted to the unique organizational context? Absolutely! But in every instance, we were able to look at what each team could do, given the context in which they worked, and we found a way. So, I encourage you to consider what you can do and let go of the "That would never work here" mentality that is so easy to fall into.

Being the annoying champion for the "right way" of working. Some people, instead of adopting a "That will never work here" mindset, swing the pendulum too far in the other direction. They want to work using the "one right way" to do discovery. I have news for you. There is no "one right way" to do discovery. All of the habits in this book can and should be adapted to match your team's preferences and needs. This book isn't designed to be recipes that should be followed to a T, but rather templates that should help you get started. Once you have a handle on how they work, you can and should adapt them to better meet your own needs. Especially when you're new to adopting continuous discovery habits, don't let perfect be the enemy of good. Instead, adopt a continuous-improvement mindset. If next week looks better than last week, you are on the right track.

Waiting for permission instead of starting with what is within your control. I've met dozens of teams who have never talked to customers because they believe they aren't allowed to. However, they regularly engage with customers outside of work. They work for a major bank, and most (if not all) of their friends have a bank account. They build sales software, and their best friend's dad works in sales. They work on hospital badge systems, and they have three clinicians in their extended family. Don't let perfect be the enemy of good. Get started by talking to anyone who is like your customers. Iterate from there.