
50 Short Product Lessons

Published: June 29, 2020

Here are fifty short product lessons.

I put these together by transcribing brief talks, so the tone is conversational. I apologize in advance for my tendency to run on.

I hope you find them helpful.

Table of Contents

- Bets
- Opportunities vs. Interventions
- Danger of premature convergence
- Mission vs. Projects
- Starting Together
- Coherence and the messy middle
- Multiple operating models at once
- Data as a trust proxy
- Chronic vs. Acute Issues
- The attraction of short-term thinking
- The time it takes to get good
- Play Less Tetris
- Moving fast and slow
- Storytelling, repeated stories
- Strategy and its relation to beliefs
- Prioritizing Opportunities
- Learning Cadence

Bets

One interesting thing about bets is that they come in all different sizes. You can have big bets, small bets, safe bets, or risky bets. There is also an element of time: bets can have different durations. You can have really short bets that are nonetheless very large, and long-evolving bets that take a very long time to mature. At any given time, a company will have a portfolio of bets developing in parallel that are all interlinked and related in some way.
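
To make the shape of such a portfolio concrete, here is a minimal toy sketch (the fields and example bets are invented for illustration, not taken from the talk). The point is simply that size, risk, and duration are independent dimensions, and bets can reference one another:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Bet:
    """Toy representation of a single bet in a portfolio."""
    name: str
    size: str               # e.g. "small" or "large"
    risk: str               # e.g. "safe" or "risky"
    horizon_months: int     # how long the bet takes to mature
    related_to: List[str] = field(default_factory=list)  # linked bets

# A company holds many bets of different shapes at once.
portfolio = [
    Bet("Two-week pricing experiment", size="large", risk="risky", horizon_months=1),
    Bet("Incremental onboarding polish", size="small", risk="safe", horizon_months=3),
    Bet("Enter an adjacent market", size="large", risk="risky", horizon_months=36,
        related_to=["Incremental onboarding polish"]),
]

# Size, risk, and duration vary independently: a short bet can still be large.
for bet in portfolio:
    print(f"{bet.name}: {bet.size}, {bet.risk}, ~{bet.horizon_months} months")
```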

You might ask, “Why is it important to think in terms of bets?” The simplistic answer is, “We’re making an investment, and we want a really good return on that investment.” That is true, but the usefulness of thinking in terms of bets is that the different types of bets, the shape of bets, and the relatedness of bets are very pertinent to product development. There are helpful mental models for what we’re doing.

Some bets take a long time to mature. A startup might be based on one, two, or three core fundamental ideas. Although you want to update your beliefs as you start to accumulate data, some things just take a while to mature. It takes time to understand what’s actually going on. That’s a really good example of why making those fundamental aspects of your company very clear to the people you’re working with matters: it can both inspire them to help you update your priors—the information you have relating to those bets—and help you communicate the real direction. A classic problem when someone joins a startup is that they come in, appraise what’s going on, and say, “Why do we believe that? It seems a little risky. Should we challenge that idea? Should we challenge that other idea?” Often, what they don’t realize is that people have gone through that particular thought process, have decided where to make operating assumptions, and have decided whether these are bets they want to play. They’ve decided what they know to be risky and what they know to be safe for this case.

When you can communicate those bets and beliefs to someone who is just joining the company, you spare them the confusion of having to reconstruct the thought process of the people who founded the company. The other important thing about thinking in terms of bets—as anyone who plays games of skill that involve betting knows—is that there are better ways to place your bets. There are often better ways to play those games. If we have an opportunity to create a game where we can bet more incrementally instead of placing one big-batch bet, of course we’re going to try to create an environment where that is possible.

Some things don’t work like that. Sometimes we don’t have that opportunity, but if we at least understand the game and the bets we’re making, it challenges us to think about the potential for changing how we play that particular game. Another key benefit is that it’s tempting to take a one-size-fits-all approach to all the work we’re doing. When we think about the various bets we make in our lives or in other areas, we realize that there’s an almost infinite number of variations of these styles of bets.

Yes, they can be sorted into a few categories, and there are some models that encourage you to think about certain categories of bets for your company. But it really makes you aware that you need an adaptive approach to working on these things, otherwise you’re going to try to treat everything in the exact same way. Underscoring this idea of the interrelatedness of your bets and their circumstances is that you can have core beliefs that filter all the way through to the work that’s happening right now. You can have bets spanning one to three decades, or a series of bets and beliefs that impact the work that you’re doing right now. That interrelatedness is extremely important to consider.

Opportunities vs. Interventions

One challenge with dividing things into problems versus solutions is something that anyone who has worked in product development understands: almost every problem is a nested solution to a higher-level problem. For example, how we’re going to generate revenue for the company is a problem. But it’s also a solution to some higher-level mission for the company: why you exist, the change you hope to engender, or how you want to create long-term growth and value. Similarly, a low-level issue could involve improving the performance of a particular page. That’s also a problem—a perfectly compelling and interesting problem—but it’s actually a solution for some higher-level goal that you’re pursuing.

The challenge is that organizations often struggle to define who owns problems and who owns solutions. The debate will never be resolved completely because at its heart, it’s a debate over how things are decomposed. It’s a debate over who decides how those things are all linked together. One test I use to tease out a problem’s full connection to what’s going on within the company is to imagine some really smart person in the back of the room continually asking, “Why?” or “What are we hoping to achieve with this?”

Now, opportunities and interventions are a little different, because the word intervention implies that we’re somehow doing something that may change behavior for our benefit or that may actually change behavior in a way that we didn’t expect—or which might not even be beneficial for the company.

And it implies a temporary nature to our efforts. It implies that we might decide not to continue intervening that way. It’s a useful counter to how people typically think about delivering features: they imagine a level of permanence to what they’re doing, that it’s going to stick around forever and that they need to perfect what they’re putting out there in the world. That kind of pressure causes a lot of problems.

Similarly, with any product, there is guaranteed to be a steady stream of problems to solve, but just because a problem exists (or just because customers are complaining about something), this doesn’t necessarily mean that there’s value in solving that problem.

You also expose this by framing things as opportunities. For example, we have an opportunity to make customers more successful at something, or we have an opportunity to unlock this part of the market. Yes, you could also frame that as a “problem.” However, this lens reveals that there’s not really one underlying binary problem that is either solved or unsolved.

Perhaps you have or haven’t successfully realized opportunities or explored a certain area. We learn more about the opportunity as we dig into it. You can incrementally capture value from a particular opportunity. Those are characteristics that are hard to talk about in the language of problems. And things will also pop up that don’t strike anyone as a genuine problem.

Let’s take a not-uncommon example: a status quo exists, and people are generally happy with how things are. You could look at this and think there’s not really much you can do there. Opportunities, by contrast, imply that there may be ways to shift the paradigm from how you’re currently working.

Prioritizing problems to solve feels a little different from prioritizing opportunities to capture additional value. The opportunity perspective tends to inspire people to think a bit more broadly and a bit more strategically about what they’re doing. Framing situations in terms of opportunities and interventions can go a really long way toward getting your team to be more impact-focused.

Danger of premature convergence

So what is premature convergence? Premature convergence is zeroing in on a solution (or decision) earlier rather than later—potentially too early. Now, this can obviously be a little tough to judge. There’s an ideal spot—probably a range—for most of your efforts. But the really important thing is that our intuition often drives us to converge on a solution earlier than would be optimal. This happens because we don’t like uncertainty. We want plans; we want to be able to say exactly what we’re going to do.

Uncertainty is often not highly regarded in a company environment. People are rewarded for having a very specific plan and being able to say specifically what they’re going to do. The other temptation is that people want to keep a team really busy, or they want to make sure that there’s something teed up ready for the team to jump into next. There’s pressure on them to formulate those plans and lock down what those plans are earlier rather than later.

All the reasons why you want to converge earlier are very clear, but all the costs of doing that are often not very visible. So unless you’ve actually experienced it and experienced the benefits of waiting to converge—or of allowing a period of messy reality—it’s unlikely that you’re going to really see the net benefit.

Often, when you converge later, you realize you’re solving the wrong problem. You have an opportunity to gather diverse perspectives; you get more people seeing and experiencing the problem for the first time. This is difficult when you just drop things on a team out of the blue. They don’t have that experience of grappling with the problem for the first time. They might not think about creative ways of solving that particular issue. And the people who have the idea are often subject to a lot of confirmation bias and sunk-cost bias—they’ve invested a lot of time in coming up with that particular solution or converging on that particular problem. Even though we know this, it comes up again and again, and it’s tough to even call it an antipattern because there are so many near-term reasons why we think it’s right. It’s almost an intuition trap.

So when is the right time to converge? It’s easy for me to say, but I think the answer is “a little later than is comfortable.” This idea is borne out as you look at a cross-section of teams. When you’re converging at the right time, there is a period of messiness, a period that includes a little bit of discomfort. This is how you know you’re on the right track. When you’ve converged a little too early, what you’ll typically observe is near-term speed and efficiency from jumping into the problem—a kind of near-term momentum. But often, just a short way into the effort, it becomes clear that something is off. Maybe you started off on the wrong track, or it’s very hard for people to communicate all the requisite context they have because they converged early. You’ll experience a big hiccup. That’s one way of recognizing that you’ve converged a little too early: look for that initial sense of certainty. It’s usually a pretty good sign of what’s happening.

One additional thing to be on the lookout for is that when things are moving slowly, there’s often a heavy, heavy urge to figure out all of this stuff upstream. This is particularly strong when people are twiddling their thumbs because there’s not a lot of flow in the system. People tend to converge more and more on plans. You should resist these impulses; the whole idea is to plan at the last responsible moment instead of planning instinctively because you’re so nervous that things aren’t happening right now.

Mission vs. Projects

There have been a couple really amazing talks recently about product thinking versus project thinking, and I think that those discussions are extremely valuable. It’s very important to understand the difference between what a project is and what a product is. But I don’t actually think the comparison is apples to apples. It’s a challenge in the sense that when you are iterating on or offering a product, there tend to be initiatives or missions baked into doing that. You could make a reasonable argument that a product is the byproduct of a series of projects that have been brought to completion. That is a reasonable argument to make, although there are some important differences between project thinking and product-oriented thinking.

I like to think about missions or initiatives, and how those differ from projects. They do differ significantly from projects in the sense that they can be open-ended. They don’t necessarily end with delivery. Which isn’t to say that all projects are like that, but that is a common framing of a project. The longer you spend on a mission, the greater the suggestion that that particular mission is valuable—that you’re having success improving a particular metric, improving outcomes for your users, or accomplishing any number of things.

And that’s a huge difference from how most people think. Most people think that the quicker you get things done, the better. In mission-oriented thinking, yes, you want to learn quickly, and you want to quickly figure out how to offer more and more value. But you’re not constrained by the factory metaphor of delivery—just dropping things off the end of the assembly line. Instead, you frame things as bets, as discussed in the Bets section.

Missions are also often nested. There are really small missions that might take a couple of days, and those that in some way feed into or are linked to larger missions that might take a couple of months, a couple of quarters or even a couple of years. The whole company is based on a series of missions, just like it’s based on a series of bets and beliefs.

The important framing is that a mission might also have a stopping function. The team might have an agreement about when they will decide that pursuing the mission any further might not be beneficial. That is very different from a predetermined definition of done—a delivery or state you achieve that clearly determines the endpoint. An example of a stopping function might be when the rate at which we’re able to improve what we’re working on drops below a certain threshold. At that point, we might decide to reconsider. We might want to consider stopping, pivoting, or embarking upon a different approach.
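
As a purely illustrative sketch of that last idea (the metric, numbers, and threshold here are hypothetical, not from the talk), a stopping function can be as simple as a rule over a metric’s history:

```python
def should_reconsider(metric_history, threshold=0.02, window=3):
    """Hypothetical stopping function: flag a mission for review when the
    period-over-period rate of improvement stays below `threshold` for
    `window` consecutive periods."""
    if len(metric_history) < window + 1:
        return False  # not enough data to judge yet
    recent = metric_history[-(window + 1):]
    rates = [(curr - prev) / prev for prev, curr in zip(recent, recent[1:])]
    return all(rate < threshold for rate in rates)

# Example: a weekly activation rate that is flattening out around 45%.
history = [0.30, 0.38, 0.43, 0.45, 0.453, 0.455, 0.456]
if should_reconsider(history):
    print("Improvement has flattened; consider stopping, pivoting, or re-scoping.")
```

The specific rule matters less than agreeing on one up front, so that “when do we stop?” is a team decision rather than an afterthought.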

An argument could be made that this is just semantics, that you can obviously just mold the project idea to encompass a definition of done for some kind of outcome. But I think that philosophically, what you’re talking about is shaping an approach that people enjoy. Humans do enjoy the idea of an initiative or mission, and products risk devolving into a feeling of endless iteration. Having a container for an initiative does make sense on a human level. Reframing the effort away from a predetermined endpoint and toward improving someone’s life, improving a particular metric, or entering a particular market can be extremely powerful.

Starting Together

Starting together is something that I’ve spoken and written about a great deal. The whole idea of starting together is that there is a tendency to send people upstream or to have smaller groups of people initiate work. And the fascinating thing is, if you ask a team what work they have in progress, they’ll often show you some set of work. But when you ask what the people are actually working on—and I mean everyone, what is everyone working on—you’ll hear about lead architects being in meetings for something that’s supposed to happen in eight months. There will be PMs and user-experience folks meeting about things that have potential. You often find that the amount of work that’s theoretically in progress is dwarfed by all of this planning and decomposing and pitching and discussing. The whole idea of starting together is trying to limit that planning inventory, trying to really kick off an effort with all of the people involved, and striving to minimize premature convergence.

This doesn’t preclude people building context around something, understanding the size of an opportunity, or setting context. What it does mean is that instead of a small number of people dropping work on the team, you try to get the whole team experiencing the problem for the first time. I like an analogy from movies: the group of friends opens the door of a haunted house together, and you see all of their eyes go really, really wide. That’s the sign that they’re experiencing the problem for the first time. You see a lot of movies where there’s a small group of people involved in the first 10 to 20 minutes. Gradually, they assemble the whole team after they’ve endured some trials and tribulations in the beginning. That’s the point where you get the team-preparation montage, the tension builds a little bit, and they get all of these unique ideas and ways to solve the problem because the team has been assembled. That’s the kind of starting together I’m talking about, because what you find in those movies is that a lot of that initial battling pales next to the power of the team once it has actually assembled.

The whole idea of starting together is to figure out how you can plan a sequence, how you can arrange and construct your teams in a way that they can truly start working on something and clear their calendars. This is not, “Oh, we do one meeting in the morning and then we continue to do business as usual.” It’s really clearing their calendars so that they can all jump into the problem together.

And what might this look like? There might be customer interviews or joint research activities. You might have various people presenting bits of data that might be known about the particular problem. You might do customer visits. You might try to start doing some rapid prototyping. There are many different paths a team might take, and it’s difficult. You don’t want to adopt a one-size-fits-all approach to starting together—that’s not really recommended for anything. But the whole idea is getting everyone in the room, clearing out the time, and doing some kind of kickoff that really aligns people on the opportunity. Then you experience that initial exploration together, as a team.

When you see teams struggling, especially in terms of getting things done, you can often trace it back to the kickoff. You can often trace it back to a lack of alignment. Alignment might not even be the right word, because poor alignment at the beginning is quite common; but if everyone is experiencing the problem at the same time, then when you actually do get to a deeper level of alignment, the result is all the more powerful. Generally, the idea is to create an environment in which the team can experience the problem for the first time together and connect with the customer.

Coherence and the messy middle

One thing that you find when you talk to teams is that it is often very difficult for the people working on the front lines to connect their low-level work all the way up to the larger bets of the company. You find that the near-term work in the one to three-week, or even the one to three-month range, is usually pretty known because that’s what’s dictating people’s lives.

The larger company bets—the one-to-three-year bets—are also known, but they’re almost by definition somewhat vague. They’re very directional. They don’t feel real in the moment. They’re hard for people to wrap their heads around. And then there’s a middle layer, which I like to call the “messy middle”: neither short-term work nor really long-term work. It’s the bets at that level that teams often have the hardest time wrapping their heads around. They have difficulty conceptualizing how their work fits into that messy-middle work, and how that work fits into larger long-term missions.

One of the reasons this happens is simple: we spend a lot of time talking about short-term work and about the large, long-term things. But once the bets have been made, the work in the one-to-three-quarter range is often not discussed all that much. It’s only mentioned if you’re being very intentional about it. The key to building this kind of coherence is to keep reiterating the thinking behind the bets in that middle range: what you know about them, your progress toward resolving those particular bets. Have your beliefs changed? How, and what are the teams doing? And even how those bets relate to the high-level goals of the company.

The important reason that you have to keep repeating those things is that when people talk about being more outcome-focused or impact-focused, almost by definition near-term work isn’t very outcome-focused. One exception is if you’re in an environment where you’re making these tiny little changes and a million people are viewing it that day, and you’re busy tweaking things day in and day out.

And if you’re pursuing a highly specific customer outcome that isn’t linked to these more powerful, fundamental outcomes, it can take a little longer to understand whether you’re moving the needle. For example, you could make something new possible for a customer. Yes, you’ve achieved an outcome, you’ve solved that particular problem, but it can often be unclear whether doing that thing had a more fundamental impact for customers. That fundamental impact is typically captured in these larger increments—a bit like pebbles, rocks, and larger rocks: boulders. These aren’t the mountains of the company, but they are the boulders of the company. If you’re not reflecting back on those things, it’s difficult. But back to this idea of coherence: one thing that’s misunderstood is the belief that you need certainty to have coherence. That’s not what you need.

Coherence is the ability to navigate and link together the work that you’re doing, in all of its messiness and all of its uncertainty, and to tell a coherent, persuasive, connected story about how it relates to the larger things that you’re doing as a company.

People often chase certainty, and the problem is that when you manufacture certainty, you actually limit coherence. The smart person in the room will look at that and say, “Well, that’s not very coherent. We don’t really know that yet,” or, “We’re still trying to discover that, and we still haven’t gotten that thing going.”

Manufacturing certainty is not coherence. Coherence is taking the reality that’s in the room and visualizing it or presenting it in a way that people can understand it and navigate it. Coherence also doesn’t mean that everyone needs to think the same way; that would be building a false consensus. If you have three different perspectives, the coherent way to describe them would be to describe those three different perspectives and how they interrelate.

Multiple operating models at once

One thing that becomes abundantly clear when you talk to teams is that there are often many different types of bets in progress at once. In fact, you want a balanced portfolio of bets, which means that you’re going to have teams tackling very different types of work simultaneously. An antipattern you observe is that some form of process is expected to apply to all the types of bets in your portfolio, and that somehow you’ll find some one-size-fits-all approach that will work for that. You see this a lot in program or portfolio management. In essence, portfolio management should include a portfolio of bets, but everything ends up passing through the same gates, and teams are all expected to do the same types of things.

One thing you observe with higher-performing teams is that they tend to have a relatively stable set of patterns for how they approach all of the things they do, while also being able to absorb the unique nature of the different types of work that they’re doing.

One great example would be companies that get a lot of small customer feature requests which, if they could just knock them out, would have a relatively low blast radius. They’re pretty easy. They make sense. How the team approaches working through those is very different from how it will approach more exploratory efforts. This requires an approach that will accommodate both. Now, what’s also really interesting is that when companies realize they need different operating models and the ability to run them in parallel, one temptation is to completely factor out those teams so they don’t interact at all. That does have some benefits, but it can also be demoralizing—for example, if you’re working on one effort and the company is constantly starting these very interesting innovation efforts without letting anyone else participate or giving people an option. Running concurrent operating models in parallel is actually very nuanced and difficult. It is not a silver bullet that is going to solve all your problems. It actually takes a lot of work to get right.

Another big area where this manifests is that a lot of teams approach things in an approximately agile way. There’s this idea that all things are somehow emergent: architectures are emergent and all these things are emergent.

But there are certain classes of problems that are riskier from an architectural standpoint and need to be thought through more carefully. You need the right people in the room, and they need time and space. That’s another example of how trying to treat all things as “Let’s just get started and see what happens” doesn’t necessarily apply in every case. The relevance of this for product is that often, product managers with different skills might be really adept at tackling different types of bets. Some people might be extremely good at things involving a large number of partners, for example, or understanding the business landscape or the partner landscape. Another person might be really amazing at extremely disciplined, tiny incremental improvements.

Layering on top of that, someone else might be great with new ideas—an enthusiastic idea person who likes to be involved really early in validation. This relates to how you staff, and it relates to what you’re doing. An interesting model I heard about recently: someone talked about areas where they were playing offense, areas where they were playing defense, and areas where they were diversifying. That specific model is something to keep in mind.

Data as a trust proxy

One thing you observe as teams grapple with using data, measurement, insights, or any number of things is that there’s often this idea that data will serve as some kind of trust proxy: that somehow a piece of data will settle every tie in opinion, that the highest-paid person will suddenly stop enforcing their opinion on other people, that suddenly you’ll have all this certainty about what you’re doing.

What this misses is that when you see teams making effective use of data in their environments, there’s often a lot more uncertainty. There are a lot more questions about why what you’re doing isn’t working. There is a wide spectrum of confidence. When you initially embark on some effort, you only have rough assumptions baked into the metrics you’re tracking or the analysis you’re doing. Over time, your confidence increases, but it is an ongoing effort. You have to renounce the idea that the data will pass/fail your teams or will once and for all tell you whether you’re moving in the right direction. Instead, it’s important to adopt a learning stance, one that acknowledges measurement as a catalyst for learning. You’re peeling away the layers; measurement doesn’t instantly give you some hard kernel of truth. It may, however, help you peel away one layer of the onion and better understand more of what’s going on.

You see this as well in situations where people imagine that there won’t be any more need to do qualitative research or to connect with users or customers. An erroneous belief can develop that everything will be like a science experiment. But what I’ve observed in talking to these really high-performing teams is that it really is an art. There’s a lot of luck involved in what we do, and a lot of fortuitous timing.

The idea that you’re going to structure these perfect experiments, collect this perfect data, shut everyone up, and make your bulletproof case to management must be tempered by the reality that it often doesn’t look like that. There’s a quote that goes, “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” What I find kind of funny about that is that there are so many ways to warp data, so many ways to twist it, that people often use it simply to back up their opinion. And there’s a lot baked into that statement in the sense that people’s priors—the stuff they’ve observed, their instincts, and other things—are pretty valid.

They are valid things to consider; in a sense, they’re even data. It might not be as crystal clear as other people’s version of data, but all that stuff is data. So to create an environment that is conducive to being more evidence-driven or data-informed, you need to create a safe environment for uncertainty, an environment where people are open to the idea that you’re iterating on how you use data. For example, initially you might only have some very rough KPIs. They’re a useful mechanism to accurately communicate your beliefs. Over time, you might really begin to build a deeper picture of causal relationships between things, to develop predictive models, to be able to prove and disprove things. But initially, you’re lucky if you can just use these things to represent your beliefs and move forward.

Chronic vs. Acute Issues

I’m always struck when people read a blog post from a popular company and then say things like, “It’s perfect there—look at how well that’s working,” and “Why can’t we work like that?” As someone who has now managed to talk to a lot of those companies, one thing I can say for sure is that it doesn’t get easier. Maybe you learn faster, or you go faster, or have more impact. But those teams face plenty of acute challenges of their own. Interestingly, I would say the difference is that there tend to be fewer chronic problems there. Such organizations find a way to be resilient, to prevent things from becoming chronic, and to solve the more acute issues.

You’ll talk to an executive who’ll say, “We’ve got all these problems, but we tend to just knock out the ones that people bring up as really limiting us.” And if you talk to the employees on those teams, they’ll say the same thing. They’ll say, “Nothing’s perfect here. We’ve got our fair share of problems. Two years ago, we encountered that, and we kind of worked it out. And a year and a half ago, we encountered this other thing. We worked it out.” These companies have a sense of self-repair, a sort of immune system. The company is strong—not that it doesn’t get sick and run a fever—but it’s eventually able to fight off these particular issues.

Now, compare that case to companies that seem to be struggling with a lot of chronic issues: companies that are carrying a lot of debt, or where it might take years for a toxic person to be let go, or that have been affected by a merger or an acquisition (perhaps they still haven’t resolved the issues of how the parent company should interact with the acquisition), or any range of things like that.

From the outside, there’s often an impression that everything is rosy, but that’s just not the case. What distinguishes them is this element of self-repair. What’s very important, too, as you talk to these companies: it would be easy to say that the silver bullet for a self-repairing company is transparency and other things (psychological safety would be a good example). It might be easy to point out commonalities between those companies. But when you dig deeper, you see that the way a company achieves that positive net effect, that ability to self-repair, can vary greatly depending on the company culture.

As a result, you might see one company that’s actually pretty hierarchical. They have many layers of management, and teams are rather isolated and don’t necessarily have insight into the inner workings of how the C-suite is resolving these chronic issues. It’s not really out in the open, but it happens: shit gets done, problems get resolved. Another company might be extremely flat and transparent, with high visibility between different groups and much more openness about its warts. But the net effect is the same: both successfully deal with those chronic issues. This goes beyond truisms like “toxic leaders are bad,” or similar platitudes that could apply in many different situations. You’ll find unique approaches that individual cultures have developed for helping information flow to the people who need it and for resolving chronic issues.

These approaches and who’s involved depend a great deal on the culture. In some cases, you can see that it’s derived from the management layer, in others it’s more a result of work by the front line. And in still other cases, it’s a strong CEO who believes incredibly deeply in a handful of things happening in their company. The mistake is to assume that all of these companies are healthy for the same reasons. Although you can find commonalities, digging a little deeper reveals a lot of variety.

The attraction of short-term thinking

It’s almost impossible to find companies that aren’t struggling with the tension between mid- and long-term outcomes (and upside and potential) and the pull of short-term demands. Any company that says they’ve solved this problem is probably lying; it’s always a balancing act. I think that has to do with the fact that in our personal lives, we have a hard time thinking about the mid to long term. We get distracted by shiny objects, we get pulled into the magic diet, we are constantly drawn into success theater in our own lives. Observing a cross-section of companies, you see the same dance being played out constantly.

There are a couple of solutions to this. One is to be very stubborn. Many people who’ve had multiple failures in their careers start to develop a level of stubbornness about what they’re doing. This often relates to not chasing short-term outcomes. It’s a stubbornness about how they want their company to be—leaps of faith that defy the short term—that they feel will build a more resilient company in the long run. This is very difficult to do if, for example, you’re a first-time startup founder without a track record and you have investors dogging you to create short-term growth.

As people become more experienced, they begin taking this more disciplined, almost stubborn approach. You’ll see this reflected in much of what has been written about Netflix. The founder, Reed Hastings, had had certain experiences in previous companies. This led to the writing of “Netflix Culture: Freedom & Responsibility,” which has become a very popular bit of writing. They went out of their way to make sure that the things that had happened in other companies would never happen again. That’s the level of stubbornness I mean.

The other opportunity is to compress the durations of activities. The standard problem with these short-term gains is that they can generate negative side-effects. If you try to build quickly, and in the process introduce a lot of technical debt into your product while chasing some short-term outcome, you will only feel the impact of those decisions after a certain amount of time has passed. This lag time impacts your ability to course correct.

One option is to become better at sensing the early indicators that you’re going off the rails, and then feed that back into the decision-making mechanism. And that certainly is what you see a lot of companies doing well. It’s not like they ignore the pull of short-termism and short-term successes, but they are very good at recognizing that something is wrong based on the early signs. Not only that, they’re very good at acting on those signals when they are detected. An example would be a team that raises a red flag when they begin to sense that they are accumulating a lot of technical debt. Or perhaps things are being impacted by dependencies that weren’t previously apparent. The organization would then be able to initiate efforts to address that problem. This goes back to the idea of chronic issues versus acute issues: the most successful companies simply allow fewer chronic issues to develop.

The time it takes to get good

It does take time to get good at this—or to get good at anything. But you observe again and again that companies trying some new process or technique for the first time often incorrectly expect to see immediate results and to be good at it instantly. Frequent examples of this involve research sprints and approaches to measurement or mapping.

I always like to mention that I worked in a company environment that was really pretty healthy. It was a great product, and there was a good attitude toward product. New junior people—designers, engineers, and product people—would join the company straight out of college. Full onboarding took time: as much as 12 to 18 months before they were really ready to be handed a more open-ended opportunity and able to extract value from it.

This allowed them to really come into their own as product developers. Not only that, it wasn’t just the 12 to 18 months, it was repetition. It wasn’t just one big effort. They went through the mission cycle. They went through the cycle of encountering a new problem and tackling that problem, over and over again. And importantly, they were allowed to fail. Early on, they weren’t all that great at it. But instead of someone coming in and simply saying, “Well, you guys are doing it all wrong, and this is exactly how to do it,” there was a level of safety at the company where teams that were wandering off in the wrong direction were allowed to wander off in the wrong direction. Now, consider the power of that for a second. If all you’ve done is had people stop you from making mistakes, you haven’t really learned the hard way.

But if you’ve gold-plated a product, or misread the problem, or gone off for weeks or months in the wrong direction—and really gotten to understand how gnarly that can get and what a quagmire it can become—you become that much stronger in the future, in terms of both your pattern matching and your resolve to make sure that type of thing doesn’t happen again.

The takeaway when talking with teams is that they often believe it’s just about mastering a particular framework: “Oh, we’re going to install this framework and it’s suddenly going to fix things.” And it really doesn’t. There’s so much nuance in product development, so many moving parts and intricate facets, and the work itself is often so varied that it’s very difficult to instantly learn something.

There are myriad little patterns that you have to get good at matching to be successful. The consequence is that it takes practice and repetition. And on repetition: it’s not just the agile, sprint-level sense of “we go sprint by sprint; we’re learning.” Often, it takes months or even quarters for some bet to fully come to fruition. Certainly, you are learning every week. You’re learning every day, every time you have an opportunity to reflect on what’s happening. But sometimes the biggest learnings take a year to materialize, before you really understand how the whole thing played out. It’s important to keep this in mind.

The final thing related to this need to practice is the safety that allows people to speak freely about their experiences. It’s easy to point to “fail fast,” but that doesn’t do justice to the safety required within an organization to let people fail and then talk openly about what they might have done better, which enables people to make progressively better decisions in that environment.

It’s hard, and it takes practice and a supportive environment, and even really experienced people need to practice when they shift domains or move to a new setting. It doesn’t just happen.

Play Less Tetris

You end up meeting a lot of product managers who consciously or unconsciously perceive part of their job as loading up teams, engaging in what I view as an elaborate form of Tetris. They’re looking at individuals and saying, “What are they doing right now? Maybe they could take these three other things, or maybe that other team could split things up five ways—20% each, or when that engineer’s done with that one thing, I’ve got this next thing that I’m going to load up on them.” Or, “What can we fit into this quarter while we fit these other things? But there’s this other thing, and maybe I can negotiate this and move this around.”

One of the underlying components of this is the idea that it’s part of their job to fill people up, to get more output. This is not limited to product managers. You see this with engineering managers a lot. You see this with individuals a lot; as individuals, we also try to play Tetris with our time. We try to play three-dimensional chess with our time—all the time. Consider when we’re blocked on one thing and then immediately try to fill up that time instead of unblocking ourselves. Or we don’t even allow ourselves time to contemplate, or to use slack time on our own, which is valuable.

The important side effect of playing Tetris is that it leaves less time for experimentation and less time for exploration, etc. When we believe it’s our job to keep people busy, we tend to pre-converge on things, and we tend to rush things so we can drop them on teams. We allow less time for teams to truly start together, and we engage in dependency wrangling, something like, “What else do we need from these other teams to make this possible?” Five other teams are asking that same question. Maybe you have teams juggling 15 things at once for 30 different efforts, and all of those things come back to bite you. And as it relates to measurement: if everything is predetermined in the game of Tetris—the puzzle pieces have all been placed—you won’t have the leeway to iterate on things or to explore options. You’re going to be locked into a particular plan because you will have over-constrained yourself so much with all these commitments and individual backlogs and other things that you’ve put together. This is a really important thing to keep in mind, and that I see over and over: you have to release yourself from the Tetris game.

The difficult part here is that engineering teams are often looking to product to play Tetris. “Oh my goodness, why don’t we have another thing to work on right now?” The minute you say to them, “Can Joe work on this or can’t he? Because if Joe can’t work on it, then I have another thing for Joe,” you’re also encouraging this on the part of the engineering team—encouraging a high degree of specialization, encouraging them not to treat things as a whole team. It’s very nuanced and very hard to break the habit of playing Tetris, because there is a lot of pressure from all parts of the organization to keep playing. There is an idea that an engineer whose fingers aren’t typing on a keyboard at any particular moment is somehow a massive waste, and that as long as there’s a free hand, there’s certainly some way to start some new thing. That’s Tetris in a nutshell.

It’s a bad habit to get into as an organization, and you really can’t embrace some of these more experimental, open-ended, and impact-driven approaches when you’re actually optimizing to keep people busy. You are what you’re optimizing for. And if you’re optimizing for keeping people busy, you won’t be optimizing for outcomes.

Moving fast and slow

A lot of organizations place a heavy emphasis on moving quickly and on output. You even see some pretty popular companies brag about how many features they managed to complete in a certain period of time. Maybe that’s good for those companies.

But there is a contrary movement. Designers and architects are often associated with questioning this pressure to move quickly, because it involves cutting so many corners. “We’re putting crap out there—why aren’t we taking our time? Do we actually need to deliver this quickly?”

The arguments are polar opposites. Teams that appear to be doing well are actually able to incorporate these impulses in a way that results in a bias for frequent integration. By that, I mean a bias for learning, for integrating assumptions and then testing them, and for making sure that they don’t go too far off course chasing some silver bullet. In fact, there is a bias for shipping or testing—for action. At the same time, there is longer-term thinking involved, a habit of leaving room to iterate on something and to explore. These teams allot time to “bake in” the product and investigate whether it’s working or not. This is a more deliberate approach.

A classic example is a team that’s simply shipping quickly. The flip side is a team that is shipping pretty quickly but is also learning as fast as it is shipping—as one coworker put it, they’ve harmonized learning and shipping. This is often surprising to designers, who are used to situations where, unless they really dig into the problem and think through the solution, they run the risk of everyone just cutting corners and dumping an inferior product into the world. The idea of leaving room to truly iterate is foreign to a lot of designers.

If you put someone in the situation of always fearing that someone’s going to yell “ship it,” of course they’ll gold-plate what they’re doing. They will also be pretty resentful of all these iterative practices, because the iteration is not being done in service of a better outcome. Iteration instead becomes shorthand for “releasing crap quickly.”

One idea to keep in mind is “working fast and working slow”: remembering that there are benefits to rapid learning, to integrating and testing where you’re at, to putting the pieces together quickly. I have a friend who is a VP of engineering. When his engineers are having problems moving at all, he adapts their work schedule to one-day sprints. That might seem terribly inefficient, and to an outsider, it is. But it’s sometimes better to have that bias for action than not—to be granted the power and the ability to do that. Some recent studies indicate that top-performing teams are deploying often and integrating often. It’s a skill and a power, but if you’re not using that power for good—if you’re not truly closing the loop on the learning—you run a terrible risk of ending up with a lot of crap in your product. Really think about balancing the bias for action with the bias for learning.

Storytelling, repeated stories

One thing that becomes clear when you talk to teams that you know are doing well (i.e. increasing their market share and making humans love their product and all those types of things): they tell very coherent stories about how they work, and the way they describe how they work is very disciplined. Here’s what I mean. When you ask someone, “How’s it going? What are some recent product decisions? What is your product strategy?” a less experienced person will say, “We’re working on this right now.” But there’s not a lot of context around that.

When you talk to someone who is really on top of their game, they will paint the whole picture and how what they’re working on fits into the broader story. They’ll say something like, “In our company, there are three major forces that we think will shape the market for the next six years. This is one big unknown, and here’s another. Our unique edge on this is that, and the way this translates to our six-month areas of focus involves three primary puzzles that we’re grappling with,” etc. That will continue until you are given a coherent, holistic picture of the situation.

This is important, as it is when they talk about their recent product efforts, because there’s a depth to their explanations. They discuss their assumptions. They talk about what they thought happened, what they learned, and what surprised them. They talk specifics: details and data, both qualitative and quantitative. And they’re telling good, meaningful stories, not vague stories about what happened in the last six months. This matters because this storytelling, this repeated reflection on what’s going on, this repeatable consistency in how they describe their work, is a real hallmark of great teams. It’s something that you can practice as a team as well. Companies frequently have tech talks where they mention things that have shipped recently; this kind of in-depth reflection happens a lot less frequently.

The team is getting up in front of the company and sharing these stories internally, talking about their missions in an honest, transparent and in-depth way, about their trials and tribulations, about what they learned and what they put out there—both the positive and the not-so-positive things. A lot of sharing takes place. This goes beyond simple dashboards, and it goes beyond someone just pointing out a big win at a quarterly meeting.

It’s the depth of their dialogue about the game they’re playing, the bets they’re making, and what they have learned—that is the muscle you need to build, and it doesn’t come naturally. It’s also something that a lot of teams don’t leave the time and space to reflect on. Talking to product managers, they often say, “I’ve been meaning to write this blog post for a long time about something we did. People internally are asking me about it.” To them, this is an extracurricular activity, and most of their time is spent on the stuff that’s happening right now or the stuff they’re trying to pitch. The trick is to carve out space and time for storytelling, for sharing experiences, and for building up that muscle of real coherence about how your work fits into the bigger picture, including what you have learned. That’s a real superpower.

Strategy and its relation to beliefs

Especially in teams that have really begun to embrace the idea of working very iteratively—working in sprints and similar things—strategy can often carry a negative connotation. It’s big, and there are a lot of assumptions. It’s something that executives put in PowerPoint decks. It’s very tired, and it’s a lot of talk. But it boils down to things we’ve already discussed: there are a lot of product teams that almost have their hands tied, which doesn’t make sense. Another way to put this is that their future is largely dictated by a number of decisions about which markets to enter, which personas to target, where they think the market is going, what competitors are doing, and what’s going on in general. Purists will say that if the product is good, it makes everything easier—that if we just focus on solving a human need, we’ll be good. I actually truly believe that; it’s a great principle to follow.

You see a great number of really promising products with a strong design culture. They’re doing a great job of connecting with that human need, but they aren’t really paying attention to the sea changes and the shifts and the broader gameplay in their particular market. What I like to ask teams is, “Which wave are you riding?” You see certain problems that over the years, multiple waves of people have been trying to solve. Some companies get in early and then perhaps become too big to innovate. Some people get in later and have the benefit of access to new technologies, but they can’t really compete. Some people enter later thinking that the problem is really X, and then the whole industry is turned upside down by some form of disruption.

And the reason this comes up in a lot of these talks is that someone will be obsessed with measurement in the here and now. They’ll be talking about some workflow, or about how to measure this initiative to know it’s working. When you dig deep into their product strategy, you realize that there are these fundamental bets. There are fundamental questions or moves that they’re making (or in some cases not making) that have a much greater influence on the approach to measurement that needs to take place. A lot of the work they’re doing at the moment consists of tiny little step changes, of improvements further up the funnel or in other parts of their business.

There are glaring questions about how the business will operate. A classic example of this is when a company is struggling to find product market fit. It’s easy to get distracted at that point and forget that you’re really in the stage of trying to find product market fit. There are also a variety of sub-stages of finding product market fit, which can be distracting.

This isn’t a suggestion that you just need a bunch of PowerPoint decks with lofty projections about the whole space. But it is important to lay out and map the core beliefs of where you think things in your industry or space will move. You should identify where you are relative to legacy solutions, relative to your potential disruptors, and relative to your current competitors—the whole stack of what you do. Frequently, a company thinks it’s a banking company, but under closer scrutiny discovers that it’s actually a data company.

At that point it’s “Oops, now we’re competing against all the other data companies.” Think about strategy and how that relates to your beliefs.

Prioritizing Opportunities

Prioritization is a huge topic, and there are many approaches to thinking about it. One pattern I see repeatedly, which is troubling and which almost serves as a blinder for teams, is the common way of thinking about prioritization: value and effort. “Is this high value? Is this high effort?” is a very simplistic way to think about it. It’s simplistic because often the most valuable things you’re working on are huge multiples more valuable than the low-value things you’re working on; in terms of effort, there can be a wide range as well. It doesn’t take into account things that are more friendly to experimentation, where you can iterate your way through. And there are dimensions of confidence and risk. This relates to much of what we covered when talking about the intricacies of bets and how they work.
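
To illustrate (with invented items and numbers, not anything from the talk), here is how an ordinal value/effort score can hide a 50x difference in underlying value:

```python
# Hypothetical backlog items. "score" is the classic ordinal 1-3 value
# rating; "impact" is a rough estimate of actual annual impact in dollars.
items = [
    {"name": "Polish settings page", "score": 2, "effort": 1, "impact": 40_000},
    {"name": "Unlock persona-X workflow", "score": 3, "effort": 3, "impact": 2_000_000},
]

for item in items:
    ratio = item["score"] / item["effort"]
    print(f"{item['name']}: value/effort = {ratio:.1f}, est. impact = ${item['impact']:,}")

# The ordinal ratio favors the polish item (2.0 vs. 1.0), even though the
# workflow opportunity is estimated at 50x the impact. Ordinal scales
# compress exactly the multiplicative differences that dominate returns.
```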

But if you step way back, I can paint you a picture. As a company, we may believe there is a big opportunity if we can make a certain actor or persona a lot more efficient in the way they work. We think there’s a huge opportunity there if we can accomplish that. And there is a whole plethora of ways that we could try to make that happen.

Some of those are big lifts, and some are small lifts. There’s a whole variety of ways that you can move the needle. But if we consider our product strategy, that opportunity is by far the biggest thing we could work on. Now, this ties together with other things we’ve talked about: premature convergence and starting together. The tendency at that point is to say, “What are we going to do to exploit that opportunity?” Someone will say, “This is low-hanging fruit. There are other things that we could take care of. Here’s a level of effort.” The prioritization goes like that. Things are then sequenced based on that opportunity size, adjusted by effort. The danger at that point is that we often forget how important that opportunity is. Instead of saying to a team, “There’s this huge opportunity. We trust you. There might be some small stuff or some big stuff, but whatever you do, if you can just keep it pretty snappy and experimental in the beginning, it’s going to be good for us. We just need to explore and extract that opportunity,” some team has already begun discussing who might tackle this thing and who will tackle that other thing. Or they’ve committed to a particular item. The important point here is that when you’re prioritizing, product should really consider prioritizing by the size of the opportunity and try to resist taking on too many opportunities at once.

The temptation is to split things apart and prioritize interventions prematurely, before a team has really had a chance to tackle that analysis. From an engineering standpoint, if you’re a shared team that might do infrastructure work for many other teams, and someone asks you to decide ahead of time whether you are going to solve a problem (to play Tetris with your own backlog in order to do all of these things), it can be really misleading. It can be really suboptimal in terms of what you get done. When you prioritize by opportunity size first, you resist making all of these assumptions about how the work will happen, how it might be split between teams, how shared teams will be involved, and what you’re going to do. You keep it crystal clear from a product angle that you believe that this opportunity is the largest thing. For me, if engineering determines that we want to get five teams working on it because it’s that big of an opportunity (“We’re going to get it done in a third of the time,” or “We have this novel way to try to make this possible”), then that’s amazing. You don’t want to impede teams from being creative about how they’re going to attack opportunities by already breaking them down and prioritizing solutions.

Top

Learning Cadence

We have touched on this in some of the other sections, but I wanted to zero in on the idea of learning cadence and to differentiate that from a shipping cadence or delivery cadence. The best way to tackle this is to imagine that you have asked a team to talk about a handful of things each week that it learned about customers or users. What would they talk about? What would be the volume of that learning? What would be the depth of that learning, and how might it shape what the team is doing right now? A question that I ask teams a lot is “What have the big Aha! moments been that really created a pivot for you, that really forced you to rethink how you’re approaching what you’re doing?”

The variety of responses to that question is amazing. Sometimes it’s, “About six months ago we changed our strategy a little bit based on some learnings from maybe eight months ago, but now we’ve been pretty much in execution mode, and we’re just rolling through and moving on that.” And then you get other teams that say, “Wow, I can’t even begin to count the number of things that we’ve learned in the last six months. We’ve learned that users did this. We made this mistake. We learned this from someone else, and we’ve been getting feedback from the market that this other thing is happening, while also tweaking something else.”

They review the last six months, and it’s just learning after learning about what they were doing. This helps me think about what the learning cadence or velocity is for a particular team. I say “cadence” because there’s a kind of cadence with which we reflect on things, and it’s very layered and nested. You might be learning little things every single week, but in terms of the larger chunks, maybe one to three of those bubble up every six weeks or every quarter. This learning velocity is a really powerful way to understand what’s happening and how your team is working now. Often, when a team or a small group of designers or strategists does a fair amount of upfront research, they’re doing a lot of learning.

There’s rapid learning every day. With every customer conversation, you’re learning something, and then you see this shift: you’ve got that learning, and now you’re trying to exploit it in a different context. I don’t think there’s anything inherently wrong with that, and it certainly makes some sense. But you have to contrast that with teams that do a bit of that deeper upfront learning and then keep revisiting those assumptions. They’re revisiting to see if they are on the right track. And probably most importantly, they’re acting on that learning. I think that’s one of the difficult things. You see teams that have shipped something and started amassing a list of requests and bits of feedback, and then, honestly, they’ve been redirected to something else. So yes, they’re learning, and yes, that learning is entering their system, but they’re not turning around and acting on it immediately, converting that knowledge into a change of direction. The important thing is not just learning; it’s responding to what you are learning. There is no right or wrong. Sometimes there is a lot of learning all at once, which then drifts into a little more exploiting than learning, with the pendulum swinging back and forth.

This traces back to my conversations with these teams; it’s very evident in how they talk. They’ll just say, “In the last month, we really picked up on this, and we learned about this, and we learned that we were wrong about that.” The difference between the people who can and can’t really answer that question is quite evident, especially when they talk about acting on what they are learning.

Top

Learned helplessness

Sometimes, when I’m talking to a product manager, they’ll say something like, “I wish my team would be more interested in research and exploring the problem. I’m not sure why they aren’t.”

This is a really interesting question because people certainly have a range of interests. I have engineer friends who say, “It’s not my job to figure out what we need to build. My job is to build.” I respect where they’re coming from. I have other friends who are engineers who say, “I’ve kind of given up on what our PM does. I can’t make heads or tails of it. So I’m just checking out. I just want to focus on the technology. I just want to focus on this. And honestly, I don’t have a lot of confidence in what they’re doing, but it’s just better this way. I don’t get involved in what they’re doing.”

That’s a bit different, right? That suggests that they’re interested, but they’ve probably gotten burned a couple times. You find these attitudes in environments with incredibly low psychological safety, where engineers and designers, etc. are interested, they want to discover the problem, and they want to have more impact. But they just don’t. Either they aren’t enabled or empowered to go upstream and get involved, or perhaps they got involved in the past and got swatted down for doing that. There’s just this harsh wall that exists.

The challenge, I think, is that I’ve observed teams that just become really evolved after practicing a lot. In those teams, designers and engineers can pretty much do almost all of the activities generally associated with a product manager, which frees up that product manager to be more strategic and think about other things. It’s wonderful to see teams where that has happened.

But if you as a product manager get into the habit of just making these kinds of overly prescriptive statements, the team will adapt and optimize around you doing that. They’re going to optimize around you putting the solution on their plate. As a result, even when they want to get involved, they haven’t really practiced enough, and it’s all a very new experience for them. I do think that we all fall victim to a certain level of learned helplessness in product development. This happens when we optimize around some less-than-great pattern over a long period of time. It becomes ingrained in the culture, and it becomes incredibly difficult to break out of. I think the mistake is to assume, as in the example I started with, that when the product manager asks why their team isn’t interested, it actually is because they’re not interested.

Instead, maybe they’re nervous because they haven’t really done that before. This causes resistance. Or they’ve been burned a couple times, or no one has really explained why it would work better if they got involved, or any number of things. When you actually do get these groups together, the marketing folks and the CEO and the developers, you find they’ve simply not spent any time communicating together. And there’s what Amy Edmondson refers to as a kind of professional culture clash.

There’s a little bit of learned helplessness with that, too; it happens. In short, you have to be very careful about what patterns you allow to slip into a company, because it becomes very, very hard to unwind them later. Even if people want something different, it can be incredibly difficult.

Top

Data snacking vs. Integrated approach

In these conversations, you often notice a difference between what I would term “data snacking” and a more integrated approach. In the first, people cherry-pick data to support a particular effort that they’re engaged in, or perhaps to answer one particular question. Not that there’s anything wrong with answering questions, but here the whole idea is that insights serve the purpose of occasionally agreeing with and supporting what you’re doing at the moment.

By contrast, what you notice with teams that are making better use of data (and measurement and insight), is that data is integrated into many different facets of product development. This isn’t to say that these teams are completely data-driven, rather that qualitative and quantitative data is woven into the fabric of all of their various efforts as a team.

For example, in kickoffs, you’ll see context being presented as data. You’ll see data about the problem, and as the team presents its strategy, you might see the strategy represented as a model of particular metrics or beliefs. This can be supported by qualitative data. As the team reflects back on what it’s doing, or did last quarter, the metrics that they’re using provide context.

The important thing is the presence of a consistent language around the bets that they’re making and the inputs and outputs, something that transcends one particular effort or feature. And that’s powerful because if you think about things like annual strategy reviews or quarterly reviews (or kickoffs or retrospectives), it’s really important to have a common thread between them. That is a hallmark of teams that appear to have a healthy perspective on using data.

For example, a lot of people incorporate OKRs into their goal-setting framework. What can be interesting is that teams will have their own Objectives and Key Results that might be completely decoupled from the business model or from how the C-suite is modeling its particular objectives. They’re very localized to particular teams, and they don’t really tie in to the larger picture of what’s happening.
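
As a sketch of what “tying in” can look like, here is a minimal Python example. The objectives, metric names, and numbers are all invented; the only point is that a team key result can name the node of the company model it is supposed to move, which makes decoupling visible.

    # A toy company model: an objective broken into named drivers.
    company_model = {
        "objective": "grow durable revenue",
        "drivers": {
            "new_arr": {"owner": "sales"},
            "net_retention": {"owner": "product", "leading_indicator": "weekly_active_teams"},
        },
    }

    # A team OKR that explicitly names the driver it feeds.
    team_okr = {
        "objective": "make weekly collaboration sticky",
        "key_result": "weekly_active_teams from 1,200 to 1,500",
        "feeds": "net_retention",
    }

    # A key result that feeds nothing in the model is, by definition, decoupled.
    assert team_okr["feeds"] in company_model["drivers"]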

Another example might be something like when a team is embarking on a particular mission and they have some sense of the behavioral change they’d like to create. They have a sense of how they think they’ll benefit the business or the users or customers. They have some sense of the baseline behavior. And great teams will use that as a common framework throughout the mission to reflect on whether they’re moving in the right direction. It’s not just for show, it’s not just the PowerPoint presentation to pitch other people on the particular effort. They’re actually closing the loop on their assumptions and closing the loop on what they’re doing.

I think that another way to think about this ties into using data for learning, and not just as a stage gate or phase gate or getting the thumbs up for your effort, or as a pass/fail for a particular team. When you take the approach of using it for learning, you want to make a special effort to build up a larger framework beyond just answering one particular question or getting the magic insight. That’s a common thing you see: people expect some magic insight that will crack open the whole year of work. And in teams that integrate this, it’s a much more rigorous, cyclical species of reflection using all forms of data that gets them to where they want to go.

Top

Functional feature factories

I have spoken to many companies, especially in business-to-business software, that I could most aptly describe as high-functioning feature factories. I would divide companies into three categories. (This is a massive oversimplification, but you’ll get the idea.)

Imagine you have a company that can’t get anything done—or if it gets anything done, it’s completely unusable. They’re always chasing silver bullets. Really, nothing is clicking.

Then you have what I call “functional feature factories.” They release reasonably usable features. Customers are grateful (“That was something we asked for”), and the features are not obvious duds. They keep chipping away at what they’re doing. The main thing that defines these companies is not that they’re doing terribly, but rather the lack of serious focus and of step changes in their product, things that really help their customers do their job a lot better. The price points aren’t all that high, so people will churn if the product isn’t really providing that extra-special value.

The third type of company is one that really nails high decision quality and high decision velocity by limiting the complexity they’re adding to their product in relation to the outcomes that they’re creating for customers.

In the middle category, the impact of their work is a 4 to 7 on a scale of 1 to 10; they’re chugging along. Sometimes they land a dud that’s a 1 or a 2, but mostly they stay in that middle range. They don’t really have those 10s. They don’t knock it out of the park repeatedly or in a disciplined way.

The third category also has some duds—in fact, they often start out with duds—but they’re really pivoting and learning and leveraging that learning to practice a disciplined, repeated, systematic approach to introducing step changes in their product. They can’t predict exactly which effort will deliver that result, but they manage to accomplish it.

Why is this important? I think that for most companies that have survived long enough to still be in the game, it’s not likely they’ve been doing anything terrible. But for a lot of these B2B companies, they eventually begin to struggle with the oppressive complexity in their product. They’ve just tried to play too many games, and it’s hard to really manage the result. They’ve made too many promises to customers and added too much complexity to their offering. This makes it very difficult for them to expand on their product and do really special things. Again, it’s not for lack of reasonable usability. It’s not even for lack of being able to move very quickly.

The important point here is that there’s obviously this broad spectrum of the types of decisions you’re making. For example, for some of the highest-performing companies that we know of, at least considered from the angle of their product team, maybe 40% of their initiative-level or mission-level decisions turn out to be great. Maybe only a very small percentage of those turn out to be knockout wins. It really puts into perspective how much complexity we risk adding to our products without creating a requisite amount of value for our customers (which we can hopefully monetize for our company as well). It’s an important thing to keep in mind.

Top

Left to right factory lines

Although product development is often referred to as “knowledge work,” there’s a strong temptation to view it as a kind of factory line. In fact, a lot of the tools that we use reinforce this idea.

Imagine a popular tool like Jira: you view a work item on a board and then move it to “done.” Now, the problem with this is that very few of these tools accurately communicate the relationships between work at different levels, its interrelatedness, and its true nature as multiple loops. We’re generating feedback from these individual items as we move them through the system for the purpose of getting work done. It obviously helps to have some kind of visualization of your work in progress—and it’s a good exercise to limit work in progress. But again, these tools are not adept at really communicating the essentially iterative nature of the work being done.

These views are also not great at communicating the idea that we might have three overall big bets in our company for the next couple of years and that all this work lines up beneath them. The visualizations are largely good for creating a delivery focus. In our context, we need to think a lot more about how to augment these production-line views with things that help us wrap our heads around how we’re doing overall.

For example, say we’ve got a particular mission, and we’re attempting to improve a metric or some other aspect. In addition to the delivery-based or ticket-based view of the work, it’s really important for us to understand our various releases as they contribute to improving that particular metric. This includes all of the various feedback loops that we’ve generated and how that work is linked to the other things that we’re doing.
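
One lightweight way to picture that augmentation is sketched in Python below. The release tags, the bet name, and the setup-completion numbers are all made up; the shape is what matters: each release is annotated with the bet it serves and the metric movement around it, rather than living only as a column of closed tickets.

    # Releases annotated with the bet they serve and the metric they were meant to move.
    releases = [
        {"tag": "v41", "bet": "faster onboarding", "before": 0.22, "after": 0.24},
        {"tag": "v42", "bet": "faster onboarding", "before": 0.24, "after": 0.23},
    ]

    for r in releases:
        delta = r["after"] - r["before"]
        print(f"{r['tag']} ({r['bet']}): {delta:+.2f} on setup completion")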

What you often see teams do is augment the predominant delivery views of work. These views are what many people spend much of their day looking at, and even roadmaps and swimlane diagrams continue to reinforce this delivery mindset. Teams augment those views with additional views of how the work and their beliefs are related, and their progress with these tasks. You could ask what harm this kind of factory line mindset could cause to the outcomes that we’re trying to achieve with the least amount of complexity. There is a risk with any tool that accidentally encourages or incentivizes “more-is-better” without offsetting that against the idea of rapid learning, removing complexity, or the overall health of the system. Anything that does that in isolation is dangerous.

A good example of that is quality: when quality drops, teams easily slip into a reactive failure mode. Often, that work becomes considerable: dealing with issues creeping in from the side starts to eat up a lot of time. And unless you think about that and consciously consider how to bubble up that insight, you’re liable to continue plodding away on all the work that you think needs to be done without thinking about the improvements you need to make to limit that failure work in the first place. We’re working in a complex system, not just left to right. It’s much more like a value creation network with lots of interrelated feedback loops.

Top

Modeling the business

When I talk to teams that have had some exposure to the universe of startup advice about growth models or software-as-a-service metrics, I find that they have internalized some models relating to growth. But this is at a very high level, and they haven’t yet incorporated their beliefs about how their product is meant to fit into those particular models.

As a result, they’ll look at certain ratios, for example, the cost of acquisition compared to the lifetime value of the customer, and they’ll think that since everyone else is tracking this particular number, they’re going to track it too. They’re going to try to reach some magic ratio.

What they’re not doing is digging into their product and thinking about the value exchange between the user or customer and their product, and trying to understand how those key value moments link back. Obviously, you want to increase the lifetime value of your customer, but what’s your hypothesis (or, to put it more simply, your bet)? What’s your bet for how your particular strategy is going to contribute to that, to extending that lifetime value for customers? What are you really banking on?
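
To ground this, here is a minimal Python sketch of the kind of simplified LTV:CAC arithmetic teams chase, and of where the product bet actually enters it. The numbers are invented, and the churn-based LTV formula is one common simplification, not the only way to model it.

    # Invented inputs for a simplified subscription model.
    arpu_per_month = 50.0   # average revenue per account per month
    gross_margin = 0.80
    monthly_churn = 0.04    # 4% of customers leave each month
    cac = 600.0             # cost to acquire one customer

    # A common simplification: LTV = margin-adjusted monthly revenue / churn rate.
    ltv = arpu_per_month * gross_margin / monthly_churn
    print(f"LTV = {ltv:.0f}, LTV:CAC = {ltv / cac:.1f}")        # LTV = 1000, ratio 1.7

    # The product bet lives inside the inputs. If our retention work really
    # cuts monthly churn from 4% to 3%, the ratio moves on its own:
    ltv_after = arpu_per_month * gross_margin / 0.03
    print(f"LTV:CAC after the bet = {ltv_after / cac:.1f}")     # roughly 2.2

The magic ratio is downstream of beliefs like “our strategy reduces churn”; tracking the ratio without naming the belief is exactly the gap described above.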

What’s fascinating when you pose these questions is that what’s missing is the bridge between their business and these high-level frameworks that make sense on the surface. This is the “it depends” part at the end of every blog post about a framework: it all depends on what you’re doing with your product and on how your product fits into that framework.

Let’s take a business-to-business product that’s trying to disrupt a legacy on-premises solution in a really antiquated domain. Now, it’s kind of a safe bet that things are going to be more usable. There are a lot of safe bets, but in many cases, it really boils down to the problem of: are they going to be able to accommodate many different customer profiles and types with a singular solution? You get this balance of wanting to design the product to strongly encourage the right way to do things versus simply supporting the antiquated ways that all the customers have ingrained into their business processes. Now that’s one of the critical bets for that particular space. Teams are often weaker at presenting their unique mental model for how value is going to be created. What is their model for how that’s going to work?

During onboarding you see a lot of teams make a very precise bet: that there’s some magical moment in the product which, if users do it enough times (even if it’s difficult), will somehow unlock the rest of the product’s value, and then users will keep doing it. Now, teams get caught up when they say, “We’re not sure if that’s the case,” while acting like it is the case. Their product strategy is aligned around it being the case. So here, it’s important to transparently present that mental model, that business model, in some way. If you can do it quantitatively, great. But if parts of it are more qualitative, you can fit those into the model as well. The challenge here is: can you present a model for how you believe this value creation system is going to work while representing your current beliefs?
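
As one hedged illustration of “presenting the model while representing your current beliefs,” the Python sketch below writes the value-creation story down as explicit beliefs, each tagged with the kind of support behind it and a confidence level. Every belief, number, and label here is invented.

    # A value-creation model as a list of beliefs, not a single magic number.
    value_model = [
        {"belief": "importing a first dataset is the magic moment",
         "support": "qualitative: 12 customer interviews", "confidence": 0.6},
        {"belief": "users who import twice in week one retain at twice the rate",
         "support": "quantitative: cohort analysis", "confidence": 0.8},
        {"belief": "retention drives expansion revenue",
         "support": "operating assumption", "confidence": 0.4},
    ]

    for b in value_model:
        print(f"{b['confidence']:.0%} | {b['belief']} ({b['support']})")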

Top

Teams closer to customers

One question I ask teams is, “How long does it take to get a customer or user on the phone?” The amazing thing about that question is the incredibly wide spectrum of answers I get. At one end it’s “We can just pick up the phone at any time.” With some products, if you can introduce in-app prompts or the like, it really is only a couple of minutes before you can get a real user who matches particular criteria connected with the team. It ranges from that fast to months (and many hoops to jump through). One of the most promising uses of data is teams using it to target individuals who match the characteristics of the people they should be connecting with for their research. For those teams, it’s not a question of whether they can connect with a user; they can connect with so many users that they simply don’t have enough time to have all those conversations. They need to connect with the right users.
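
Here is a small Python sketch of that data-assisted targeting, using invented field names and records; the idea is just to narrow a user population to research candidates who match the question at hand (and who have agreed to be contacted).

    # Hypothetical user records pulled from product analytics.
    users = [
        {"id": 1, "role": "analyst", "weekly_sessions": 9,  "used_export": True,  "research_opt_in": True},
        {"id": 2, "role": "admin",   "weekly_sessions": 1,  "used_export": False, "research_opt_in": True},
        {"id": 3, "role": "analyst", "weekly_sessions": 12, "used_export": True,  "research_opt_in": False},
    ]

    # Research question: why do heavy analysts export data instead of using dashboards?
    candidates = [
        u for u in users
        if u["role"] == "analyst"
        and u["weekly_sessions"] >= 5
        and u["used_export"]
        and u["research_opt_in"]   # only reach out to people who agreed to it
    ]
    print([u["id"] for u in candidates])  # -> [1]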

On the other end of the spectrum, it may take a month of getting permission from various layers, or maybe marketing or sales acts as a kind of proxy, or there are many barriers between the team and the groups who have access to the customer. One important consideration here is that people often talk about bringing problems to the team. My particular perspective is that problems have all kinds of interpretations, and even the best analysts, the people best at understanding problems, can bias particular problems as they drop them on the team.

So the real question is: can you connect your teams with the actual human beings who are expected to get value out of what you are doing? And at a minimum, do teams have really rich behavioral data about how people are using the product to augment the human conversations that they’re having? Now, obviously, in a lot of business-to-business products, this is easier. Again, it depends on the company. For business-to-consumer products, it’s a little harder to start tapping people on the shoulder and saying, “Can we talk to you?” Although it is possible to do that, the fascinating thing here, even in the Bay Area or Silicon Valley, is that there’s a strong sense that data is everything. The cultural pendulum has swung to the point where teams are actually struggling to connect with customers. They have data they understand, but they can’t really dig into the why. That’s an important consideration for teams. Are you creating an environment where you can connect with these human beings?

This relates to product data and product intelligence. Someone will say, “We can connect directly with our customers, but we’re not really sure we need that.” What you find is that these things work wonderfully in tandem: you use the behavioral data at scale to narrow down to the specific moments, the specific people, and the specific things that are interesting to you, and then you reach out to those people to understand the why and the backstory behind those things. There’s a lot of potential upside to using these resources in concert.

I think the real question is: are you bringing the customer and the user close to the team? Not just the problem, but are you bringing the person with the problem close to the team so that they can interact and connect? And how many proxies are you introducing between the team of problem solvers and this human being with a problem?

One thing that you find here is that a lot of people have stereotypes about engineers and their willingness to interact with the people who use their product. But what I find on most occasions is that the most impactful thing a team of designers and engineers can experience is really hearing these stories directly from the source.

Top

“Weird” Practices

People who contact me are generally aware of the game. They’ve read about this stuff, and they have a sense of some interesting things that they would like to try. A common question is something like, “How do we sell our leaders on working in this different way?” The first thing that I try to remind those people of is that when you’re a student of the game, when you’re deep into it, you don’t really realize how deep in you are, and you don’t realize all the background information that you’re bringing into the discussion.

This makes it easy to go to a leader or executive and say, “Why don’t we just do this particular thing? It’s good because X, Y, and Z.” It’s difficult at that moment to understand how the suggestion might clash with their existing beliefs. It’s very difficult to understand all the other bits of information you need to explain in order to provide context for your justifications. This need for explanation isn’t immediately apparent.

What I try to remind people of is that the burden really is on you: you’re the heretic. It’s often not that people lack a growth mindset. It’s really just that you’ve gone a lot deeper into this, studied it more, and you’re simply more interested in all of the nuances. They might have a couple of decades of experience of things working—often working reasonably well. You have to keep in mind how things have worked in their particular environments. The important takeaway is that in a lot of these cases, for a lot of the people who have not experienced those things firsthand, it really does seem extremely counterintuitive. To be fair, too, there are always contexts where it might not actually be the right approach, and they’ll always get you on that point. They’ll always say, “Well, it probably depends.” Naturally, you can’t say that it’s going to work in every situation, and you enter this long loop again of trying to explain yourself.

So the important part here is that you have to show, not tell. People will not instantly—or in one month or in three months or in six months—alter all of their prior beliefs that are based on prior evidence. They have to be open to working in a new way. It’s worth pointing out that if you’re trying to work in a different way, you propose an experiment that is safe to fail. You explain what you’re going to try to do. If the environment that you’re in is such that even the type of thing you’re proposing is completely discouraged, then no. Just no: you can’t do that. You can’t even do something for two weeks. That’s a little bit different. In that case, I think you need to seriously reconsider what you’re doing at that particular job.

Overall, with these small, safe-to-fail experiments, sometimes you just do it and you don’t ask for permission. You just do it, and your team does it. If you can show outcomes, if you can show improvements, those are the things that really change people’s perspectives.

A good example of that is really setting up a good kickoff and even just a couple days of discovery for the team, especially if you can bring the skeptics along for the ride. Get them involved, and involve them in those sense-making activities, and really make the activities count. Do a tremendous job of facilitating and setting those things up. Don’t treat it as an afterthought. That’s what starts to open people’s minds to new and different ways of working.

Top

One Pagers

A lot of companies now do some kind of one pager. The idea of a one pager is that you communicate the bet, the thing that you have in mind, in one page. The challenge is that imposing one pagers on the team in a certain format can only reflect a specific type of bet. Requiring certain pieces of information in a certain way all the time (solution estimates, for example) can close you off to the variety of bets that people actually want to try to execute on. I do activities with teams where we try to tease things out with what I call the “tech sauna”: the kinds of information that people would need to learn about an effort in order to make appropriate decisions.

What’s fascinating is that they don’t fit neatly into the documents: whether a product requirements document, proposal, PowerPoint, kickoff document or whatever. They’re always very nuanced depending on the type of bet.

For example, a designer will say, “Actually, I need to understand this, this, this, and this, and that will help me make better decisions and understand the value.” Or someone will say, “Of course we need to know the value.” And another person will point out that there are many different kinds of value: this type of effort delivers one kind, that type of effort delivers another, and they mean different things depending on what you’re working on. The net effect is that you know what you want: you don’t want to oversimplify, and you don’t want to force consistency to the extent that it drains all the life out of the particular efforts that you’re working on.

What you want is something that’s flexible. I find myself recommending a checklist for teams, used as a reminder (but not a requirement) that prompts them to include those things. A great example is when you have a more open-ended, opportunity-based bet. If you have a bet that’s more opportunity-focused, you see the opportunity, and you trust that the team will be able to get leverage against that opportunity. You don’t want to try to pre-converge and figure out that particular solution. You’re proposing that the opportunity is there, and you might even propose a series of stopping functions, a series of pivot-or-proceed points, as you try to exploit that opportunity.
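
For illustration only, here is what a reminder-style checklist might look like, sketched in Python; the prompts are invented, and the point is the shape: prompts, not required fields.

    # A reminder, not a template: prompts a team might keep next to its one pagers.
    one_pager_prompts = [
        "What type of bet is this (open-ended opportunity, small fix, enabling work)?",
        "What do we believe, and how confident are we right now?",
        "What would make us pivot or proceed, and when will we check?",
        "Who is affected, and how will we know it worked?",
        "What are we deliberately leaving undecided for the team?",
    ]

    for prompt in one_pager_prompts:
        print("[ ]", prompt)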

If you have a one pager format that requires someone to provide an estimate or a level-of-effort measure or any number of things like that, you’re going to discourage that person from proposing that particular bet. You’re going to force them to come up with stuff just so that it can fit the format, which is not helpful. This is extremely challenging and can seriously impact how you’re doing product. I think that the lesson here, and the lesson that I have tried to communicate to teams, is that they need to deal with the reality in the room, and they need to think about first principles. If someone said to you, “All of your efforts are the same, and they all need to go through the same process,” you would say that’s ridiculous.

Nevertheless, we fall into that trap all the time because it’s a little simpler and it’s easier to communicate. That’s how the game works. The next step for people thinking about this is to examine whether the artifacts that you’re using to communicate your bets are flexible enough to accommodate all the particular types of bets you can make. And at the same time, do they provide people with the information they might need, or do they intentionally omit it?

Down the road, if you take that view, and if you think of these as living documents, set up checklists and reminders—not requirements. You will be a lot better off in terms of dealing with the reality in the room.

Top

Not dividing out “customer-facing” and “non-customer facing”

It really pains me when I talk to teams and they refer to some bit of work as being “feature work” or “product work.” There’s this idea of non-customer-facing work—and I know what they’re getting at. Some things may be more immediately recognizable to customers than others, but it all impacts customers. If you have a company that, for example, has tripled in size but is not going any faster and has quality issues all over the place, that is almost certainly impacting customers. Customers don’t expect you to actually get things done anymore. They’ve sort of given up on those particular things.

It would be a big mistake to tell a team that says it has an idea about how to solve some of those problems that its work is non-customer-facing, because it absolutely is customer-facing. If you have a company that has accumulated that much debt and has slowed down that much, I doubt that any particular feature any team has in progress comes anywhere close to what resolving that issue would mean for customers. Naturally, there’s success theater, the optics of releasing new things while everyone knows what’s what. But once customers are there feeling that impact…you’re going to hear about it.

This actually relates to measurement and to what teams are doing. If the feedback loops and dependencies that exist make it difficult to deploy anything, or if quality issues keep you up at night, if you’re worried about something else, the first thing that’s going to fall by the wayside is considering the outcomes that your work is producing. It seems unrelated. It seems like that’s just the backend team, or that’s just the operational team; we don’t know how that really relates to what we’re doing. But it very much does relate to what you’re doing, because a lot of that tooling and infrastructure quality is what allows small batches, frequent integration, and getting clean data back from your product.

That’s what allows a level of calm in the organization, the ability to think critically about what you’re doing. When none of that exists, it just becomes a nightmare for everyone involved. When you have big blockers in the organization, everyone is firing on all cylinders trying to work around the problem, inventing and rolling their own solutions to every problem because some other team has a particular problem. You see a progressive deceleration in the organization’s ability to generate outcomes. This triggers the organization to come up with more success theater. This results in a lot of blame. It becomes a situation that you do not want to be a part of. I mention this because when we think about healthy teams and teams that are learning rapidly, we often don’t think about this aspect, although it is extremely important.

From a product manager’s standpoint, this often means helping the individuals involved, who often don’t have product managers of their own because this work lives in a kind of shadow system in the organization, to mold business cases around what they’re doing. If something is sapping 50% of all capacity for value creation at that moment, then addressing it is the single most valuable thing, and helping those people form business cases around it can be extremely high-leverage. It takes a little while to work through the system, but once those issues are resolved, things go a lot faster.

Top

Leaving time to iterate

One pain point I hear extremely often is that people never have time to iterate on the work that they’ve done. This is repeated by product managers, engineers, designers—everyone. So the question boils down to, “How do we leave an opportunity to iterate?”

If you don’t leave that opportunity and teams don’t expect to get that opportunity, what you find is that teams are extremely hesitant to think about their work in a more incremental, iterative way. They’re scared because they’re worried that once they say it’s done, they won’t really get an opportunity to go back and work on it. One of the common responses from product managers and business people is, “We shouldn’t iterate on everything, and if we keep iterating on things and adding items, we’ll do a lot more work than is necessary. It’s important to get it out there and then let it bake a little bit and see how it’s doing before we start working on it again.” From a first-principles angle, I don’t think that’s wholly unreasonable. But what typically happens is that people don’t quite recognize this relationship between not iterating and how people treat the work.

They’re less likely to take risks. They’re less likely to try to get something out there more quickly. And often, there’s low-hanging fruit: you’re getting feedback about that particular thing, everyone’s in context, you’ve got the data, and you really understand the problem. You’re deep into it, and that’s a wonderful time to reflect and act and achieve something really special given that follow-up data. One approach here really boils down to how you frame these particular missions that you’re working on. If you leave everything open-ended, “We’ll know when we get there” is not really all that helpful. But an incredibly prescriptive stopping function isn’t helpful either.

One of the tricks is finding the right balance when you’re framing a mission: leaving enough flexibility to iterate, but also iterating with the confidence of the organization, the confidence of the people around you that you’re not just going to iterate frivolously for a year or until you’re happy with it. There must be confidence that you’re going to have a bias for action and that you’re not going to gold-plate it.

Specifically, that might look like, “We’re going to iterate on this until we can get that metric, which we believe to be a leading indicator of success, to this point,” or, “We’re going to continue iterating until we have much higher confidence that this is true,” for something that you’re hoping to learn about customers. We’re going to reflect on this every week. We’re going to reflect on our current confidence about that number, and we’re going to make a decision. What you notice is that just by adding those extra layers of rigor, where you respect what the business needs, you respect the intuition traps that exist, and you hold yourself to some level of rigor in terms of iteration, you can really open things up. The other thing for teams to keep in mind is that the longer you wait to get something out there, the more impatient people are going to be, and the less likely they are to encourage you to keep moving on it. This is where it comes down to the team to think about how they can get things out earlier and take that risk to see what’s working and what isn’t.
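
Here is a minimal Python sketch of such a stopping function, with invented metric names, thresholds, and timeboxes; the point is only that “iterate with confidence” can be written down as an explicit, weekly pivot-or-proceed check.

    # A mission framed with an explicit stopping function instead of "we'll know it when we see it."
    baseline = 0.22    # share of new users completing setup today
    target = 0.35      # the leading indicator we believe signals success
    max_weeks = 8      # agreed up front: we won't iterate past this without a new decision

    def weekly_review(week: int, current: float) -> str:
        """The decision rule the team revisits every week."""
        if current >= target:
            return "proceed: ship broadly and move on"
        if week >= max_weeks:
            return "pivot: revisit the bet with what we've learned"
        return "iterate: keep going, confidence intact"

    print(weekly_review(3, 0.27))  # iterate
    print(weekly_review(8, 0.31))  # pivot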

Top

PMs creating an environment … good decisions / quickly

One model that I like to use, which is not original to me, is to ask the product manager, “Are you contributing to an environment where the best decisions can happen reasonably quickly?”

This challenges some product managers because they perceive it to be their particular role to make decisions. They are the decider or they are the idea person, or they’re the person expected to have all the answers.

What you find when you’re the sole decider or the sole idea person is that your ideas often aren’t the best ideas, and you become a single point of failure. If you’re not there, or if you’re distracted or doing something else, the team depends on you to make these decisions because you haven’t created adequate context for everyone to make them independently. There are some environments where really good decisions do get made; they just don’t happen anywhere near as quickly as they need to happen. You can spend a long time making a perfect decision, but by that point, often the ship has sailed, and the extra time you spent trying to get a perfect decision was not really worth it. You might as well have gotten something out there.

You also see organizations that make decisions incredibly quickly, but they don’t make particularly good decisions. If you make a lot of decisions that are a five, six or seven out-of-ten, sometimes you get pretty lucky and sometimes things work out. You can make a lot of decisions that are quality one, two or three, and that doesn’t really help. You’re trying to find that balance in what you’re doing. There are a lot of things that contribute to making great decisions. You need information. You need to be able to interpret that information. You need to make models and frameworks that help de-bias what you’re doing and bring in more perspectives. You need to create flow on the team. You need to work on feedback loops. You can’t make great decisions without these feedback loops.

You need to encourage safety and trust on the team, and all these things contribute to an environment where good decisions can happen quickly in a decentralized way. We also need alignment. If people don’t have the same information or aren’t rowing in the same direction, they may be able to make great decisions, but those decisions aren’t really lined up with each other. With that in mind, when you use that particular frame as a product manager, it can help alter some of the things that you’re doing. For example, if you find that you’re hoarding information about the customer, I might encourage you to not hoard that information and to try to connect the team directly with the customers or users that have that information.

If you’re making decisions in a vacuum with other people in the organization and just dropping them on the team, you’re not inviting the team’s perspective. I’m not ruling out the idea that you might be incredibly good at making decisions. If you’re the best person to make decisions and you make them consistently well, you’ll be able to demonstrate that to the team, and they’ll trust you to make those decisions. But the onus is on you to demonstrate that you actually have that particular skill. I think it’s important not to rule out that being the case.

In my experience, a lot of decisions benefit from a diversity of perspectives and the input of other people. Even if you’re part of a culture where one person always makes the final decision, that person is going to benefit from having diverse perspectives and a lot of interesting information. As a product manager, ask yourself, “Am I contributing to an environment where the best decisions can happen reasonably quickly?” That applies especially to data and insights. That’s table stakes, right? That’s your bread and butter: the information you need to balance other bits of data and input, qualitative and quantitative, to make the decisions.

Top

Two priority levels …

I was joking recently that you could go reasonably far with just two priority levels of work. The first is work where you believe the opportunity is extremely valuable and the effort is experimentation-friendly: something you can chip away at and experiment with. The second is work that is reasonably valuable and guaranteed to be really, really small.

I said this as a joke, but I started to think about it more and more, and it teased out an important point for me. If you look at the work the broader team has in progress—the 60 teams or 30 teams or 15 teams—and you ask those people what the potential value of the things they’re working on is, you often see a massive spectrum. Team one’s highest-priority item is 10 times the potential value of team two’s highest-priority item. Obviously, that involves judgment, and you need to do an apples-to-apples comparison.

But what you realize is that often we talk ourselves into doing things that are of medium value, things that seem pretty good. We’ve got some data points. Oh, and it’s reasonably small. You can see the thought process behind it. Often, someone wants it, or we think the issue causes customers a little bit of pain, and it’s easy to talk ourselves into those things because they’re not ridiculously low-value, but still…only medium value. The danger when that happens is that we miss out on the things that would be a step change, the things that would be more valuable to explore.

They require experimentation. They require learning. There’s no easy answer. If there was an easy answer, and it was incredibly valuable, we’d be doing those things all the time. But they involve chipping away at the problem. And what this means is that when people ask, “How are you going to solve that problem?” they often don’t have an answer. I remember talking to a CEO once and I said, “What would you do if you could really, really do it?”

And he said, “Whoa, of course we’d be working on that.”

“Well, why aren’t you working on that?”

“Because it’s really hard. We’ve tried to do it a couple of times. We did things, and things just got messed up. We didn’t really get the outcomes we were looking for.”

So now they’ve written off being able to work on that, and you can see what happens when this dynamic builds up: you ignore the really high-priority thing that’s there, you persuade yourself it’s not possible, and then you just end up in the middle.

I also talked about stuff that’s guaranteed to be really small, or a sequence of such things. You often see this with UX tweaks or individual aspects of the product. None of these things are big at all. You don’t even need to estimate them. They all fit in the category of “Oh yeah, that’s pretty easy.” And they’re reasonably valuable, especially relative to their size, because they’re really small and of medium-to-high value. That’s a really good category of work, and it doesn’t suffer from the “stuff in the middle” problem I was just discussing.
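
In the spirit of the half-joke, here is the two-level filter written out as Python; the labels and thresholds are invented, and a real team would obviously apply judgment rather than strings.

    def two_level_priority(value: str, experiment_friendly: bool, guaranteed_small: bool) -> str:
        """A deliberately crude filter: only two kinds of work get in."""
        if value == "huge" and experiment_friendly:
            return "take it: a big opportunity we can chip away at"
        if guaranteed_small and value in ("medium", "high"):
            return "take it: tiny, safe, and worthwhile"
        return "decline: the seductive middle"

    print(two_level_priority("huge", True, False))     # take it
    print(two_level_priority("medium", False, False))  # decline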

There are so many ways to prioritize, but when you take this perspective, I like to challenge teams with the question, “Are you shying away, or are you persuading yourself that you can’t chip away at the single biggest opportunity that’s out there? Are you hesitant because you’re not really sure of the solution to that problem—is that why you’re not touching it? Or is there a stream of really small, guaranteed, very small things which, when added together, could have really high leverage with your product?”

Top

Bring me solutions (not problems) and My Idea-ism

In a lot of environments, there’s a saying that goes, “Bring me solutions, not problems.” And there’s a strong culture, and I guess pressure, not to come to the table without a potential solution—the “and then we will do this to solve the problem.” This makes sense on some levels. We don’t really like to think about an opportunity or problem without a way to tackle it, without a potential solution.

In product development—and this is related to premature convergence—in some teams this culture of having to bring your pitch or idea really contributes to not leaving sufficient space to reflect on whether things are working, or the kind of openness required for teams to come up with creative solutions. I think that’s a symptom of a culture that makes it hard to go to a meeting and just say, “I think this is a big opportunity, and I’m not sure how we’re going to tackle it. We have some early data that shows this will be worth it, but I think we should jump in.”

The question is then: what do you do about this? One thing you can do is really focus on the opportunity. That’s different from framing these things as problems. When you present something as a problem, all people see is that you’re not providing a solution. But when you frame it as an opportunity, it’s easier to get people excited about the prospect of intervening and exploiting that particular opportunity. The other easy win is to collect a handful of potential solutions. By doing this, you’re not locking the team into solving the problem in a particular way. Coming to the table with a couple of potential solutions is a way to ease into this without having to be a complete blank slate.

Another interesting thing that I’ve noticed with this problem is that it really does relate to the track record of the team and of the organization. If you have a habit of knocking things out of the park and just figuring things out based on a broader opportunity, people are often a lot more willing to extend that leash, to grant you the flexibility to try to figure it out. The contrary case can be a vicious cycle: if you don’t have a lot of wins, people get more and more nervous, which prevents you from having that level of flexibility.

But even the idea of bringing multiple solutions to the table can be very, very difficult. I’ve noticed, even in Silicon Valley as an example, that there is a strong emphasis on individualism and on pushing through your particular solution, shoving through your particular idea. People get it: the reality is, if you come up with a solution and it’s successful, you get the credit. This is where you need to step back and recognize that for your organization in the long run, the goal is to get the best solution, not to get your particular solution implemented. That can take a long time to wrap your head around. And as it turns out, for your career, people who are able to create a lot of leverage and who inspire a lot of great solutions across the board also tend to do better in the long run.

Top

Safe places to “workshop” ideas/bets

You know that crafting your one pager, your bet, your pitch, or whatever it is, is hard. This is especially true on highly competitive teams where people are competing for access to resources (though I prefer to call them teams, or talent). There’s often even a hesitancy to really share your idea; you see people go into a back room and try to craft their particular thing in isolation. Or maybe they just look to their one mentor and hope that that mentor will help them craft the thing. But especially when you’re trying to encourage better decision quality and better decision velocity, it can be incredibly valuable to workshop your bets with teammates, other product managers, and anyone who’s involved in crafting these bets.

When someone is skeptical in that environment, it’s so easy to construe their skepticism as trying to shoot down your idea, trying to elevate their own effort, or simply trying to make life difficult for you. The key is to create an environment where the product team understands that it takes crafting and tweaking and a level of healthy skepticism to really form the bets that will make a big difference in how the team works. Again, it is difficult to create these environments, but you can, especially with a cross-functional group. Often people will only present a backlog to their small, individual team, to people really close to the work.

And if the team has a lot of psychological safety, there’s a fair amount of back and forth, skepticism, and trying to shoot holes in things—just to improve the idea. It can also be helpful to get an outsider’s perspective, for example, from another product manager, another designer, or someone else from the organization to help you workshop these things.

Another helpful activity among product managers is to work on these pitches and to repeatedly practice presenting the ideas, inviting people in to figure out what’s resonating and what’s falling flat. It’s often the case that when a product manager pitches something, it’s the first time anyone has heard of it, and there are immediately obvious things that they could improve. You want to set up a healthy, very low-pressure environment where people can workshop their particular bets, distribute the one pager beforehand, and solicit feedback from people in person. It’s so easy to just do this in Google Docs and hope people are giving your pitch the attention it deserves. But it’s best to do this in person: being vulnerable, encouraging feedback, and casting a wider net of people to improve upon it. You’ll make these bets a lot better over time. You’ll make it clear that you don’t necessarily have all the answers and that you’re interested in people taking a look.

In general, try to create safe spaces where people can workshop their one pagers, their bets, their ideas, and their pitches in a way that encourages positive feedback, and even constructive criticism and constructive skepticism, and try to build that into the culture without it being competitive.

Top

The critical moment of realizing intuition needs testing

There’s that one moment that you’ve either already experienced in your product development career or you haven’t…when you realize how wrong you’ve been about something: you expected users would do something, and they didn’t do it. Or maybe you thought you had a perfect design, and there was a glaring usability issue. Now, for some people, the question is, “How do you create these moments?” We obviously don’t want to be failing all the time, but what impact does that have on how we perceive product development? I’ll always remember the first time I began attending usability tests. It’s a painfully clear thing.

You think that you’ve got this super intuitive interface, and it turns out you really don’t. People are confused, or they’re behaving in ways that you just had no way of anticipating. You’re kicking yourself because how could someone not understand that you have to do these three things to move to the next step?

For a lot of people, that Aha! moment, realizing that things don’t work quite as anticipated, came when attending an in-person or remote usability test. It can also come from looking at quantitative or behavioral data for an interface. When you start to notice that 80% of people don’t get more than a couple of steps through an onboarding, it really makes you question what you’re doing. That’s a classic, right? People want to onboard, and we’re going to show them how. We’re going to point the way through the product steps. Of course they’re going to go through it. And then you discover all the little things about how they actually move through that workflow: they’re frustrated after a couple of tries, and they leave.

The whole point here is that you need to create environments, with usability testing and with measurement, that offset our overconfidence about exactly what’s going to happen when the user first encounters something. This can be the greatest hurdle with experts or subject matter experts, for example in medicine or insurance or banking. You’ll have some expert on a health-care-related subject, and they believe they’re able to place themselves precisely in the shoes of the particular user out in the world. Certainly they can to an extent: in some ways, they can relate to a lot of things that are on that particular user’s mind about the job that they’re trying to get done. But often there are specifics to that person, to their context and environment, and to the goals that they need to achieve.

And just because someone’s an expert in healthcare, for example, doesn’t mean they’re an expert in usability or UX or design. It’s easy to assume that because you’re an expert in one thing, you’re an expert in everything. The point is that sometimes the people who need to see this the most are going to resist it. You need to do usability tests. You need to watch people. That’s the most humbling thing: watching people use your product and talk out loud. It provides you with a lot of rich data to reality-check your perceptions and what you think is happening.

Top

Credibility by not manufacturing certainty with your team

A lot of product managers will come to me with the problem that they’ve lost credibility with their team. They might not be able to identify it exactly as such. It may manifest as the team not trusting them; they’ll sense a lot of resistance from the team, or a kind of fatigue, or the team will work around them in many cases.

I think that one of the prime causes of losing credibility, especially with passionate designers, engineers, and problem solvers in general, is an effort to manufacture certainty about what you’re doing in order to talk the team into doing what you need them to do. This kind of manufactured certainty may be fine for other people in the business; they might not really want to understand the gory details of what you’re doing. But when it comes to the problem solvers who are going to be investing a ton of time trying to implement this, test it, and see if it’s working, it goes a long way to readily admit what you don’t know. Some things actually are “unknown unknowns.” That’s important as well.

I’ve witnessed that you build a lot of credibility by speaking with certainty about what you can speak with certainty about and then being very open with the team about what you don’t know, and trying to frame the bet accordingly instead of trying to gold plate the idea or to make it seem bigger or better than it is at the moment. An issue with this is degrees of uncertainty, and that can be pretty hard to communicate. It can be hard sometimes for people to think probabilistically. It can be hard for them to adequately consider the uncertainty.

There are various methods to address this. You can tally up the odds. You can try, for example, to imagine you’re betting your own money on this: would you bet $10, or even $10,000, of your own money that this is going to produce this particular outcome, or that this particular framing of the problem is helpful? You can think about spinning a dial a number of times. There’s a book called How to Measure Anything by Douglas Hubbard with a lot of really interesting methods to calibrate people’s sense of how uncertain those things actually are. Regardless of the method, when you’re doing some kind of kickoff or presentation, you must be very clear about what you don’t know right now and what you’re comfortable not knowing.
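One concrete calibration exercise in the spirit of Hubbard’s book (my own simplified framing, not a quote from it) is to check how often your stated 90% confidence intervals actually contain the true value:

```python
# Sketch of a calibration check (illustrative numbers): if your 90%
# confidence intervals are well calibrated, roughly 90% of the true
# values should land inside them.
estimates = [
    # ((low, high) 90% interval, actual value observed later)
    ((100, 500), 320),
    ((10, 40), 55),        # a miss: reality fell outside the range
    ((1000, 3000), 1500),
    ((5, 15), 9),
    ((200, 800), 650),
]

hits = sum(low <= actual <= high for (low, high), actual in estimates)
print(f"{hits}/{len(estimates)} intervals contained the truth "
      f"({hits / len(estimates):.0%}; a calibrated estimator targets ~90%)")
```

Most uncalibrated people land well below 90%, which is exactly the point of the exercise.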

Also consider what’s important: what is important for the team to learn, and what is important for you to learn about that particular effort? That first part really matters because sometimes you’re just not going to know that much about the thing, and that’s okay for the planned effort. That’s okay for placing that particular bet. You’re okay with not knowing for sure; you’re going to operate on the assumption that something is the case. Even just that framing can be very powerful because you’re not manufacturing certainty for the team. You’re admitting that you’re not 100% certain, but you’re okay with this uncertainty because maybe it’s something where, if you’re wrong, it’s not really that big of a deal.

Other things you genuinely do have to learn about, and that’s when you can engage the team and prioritize a learning backlog. What things are really important for us to figure out before moving forward? This comes up a lot with designers and other folks who really want to dig deeply into the problem; they have their own range of questions that they need to ask. Being explicit about the learning efforts that you’re undertaking can be extremely powerful. Otherwise you get into a situation where someone asks, “Well, we’re ready to go, right? We’ve all figured it out,” and people will say no, but they’ve never actually described what they needed to learn about the thing before they would feel more comfortable.

Top

Artificial deadlines

I get a lot of questions about deadlines: should we set deadlines, and are deadlines important? The first question I ask is, “Is it a real deadline, or is it an artificial deadline?” That can confuse people. What am I getting at? A real deadline is something where the value of the thing dramatically falls if the deadline is missed. For example, say you’re trying to get ready for a big event like a wedding. If the food doesn’t arrive on time, if it doesn’t hit the deadline, the value of that food drops precipitously the second it doesn’t arrive.

Now, you could start having a sequence of things. You could say that if this isn’t done by that time, then this doesn’t give us enough time to get that done. But at the end of the day, there is something that needs to happen by a particular time, and that’s what I would call a real deadline. This work is connected with this real deadline.

Another complexity involves consulting or contracting work. The team has committed to a particular deadline instead of an arrangement where there’s more flexibility. Is that real or not? If the customer’s not going to pay them if it’s late, or if the customer has an immediate date-driven need for that thing, maybe that’s a real deadline?

Nevertheless, a lot of deadlines are artificial. And this is why I recommend that product managers get out of the business of artificial deadlines. Unless they admit that they are artificial, they’re really putting their credibility with the team on the line.

An artificial deadline might arise in a situation where the value might drop if you quote/unquote “miss” the artificial deadline, but it’s not going to drop precipitously right after that particular point. A better alternative is to be very clear about your hypothesis for how the value will decay, what some people call the cost of delay for an item. What would it cost if we were a week late? What would it cost us if we were a month late? This is interesting in general because you often see teams really load up on work in progress. They’re intent on keeping everyone busy, and by doing that, they’re delaying work, potentially even delaying it a lot.

You could set artificially long deadlines and hit all of them, but by doing so you could be delaying work that would have had a ton of impact, simply to maintain the optics of hitting the deadline for the things you’re doing. If you track the cost of delay of each of those items, the team would be able to make much better decisions about whether it’s really wise to be trying to parallelize all those efforts.
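Here’s a toy illustration of that tradeoff (all figures invented): three items worked in parallel versus finished one at a time, with the delay cost tallied for each approach.

```python
# Toy cost-of-delay comparison (hypothetical figures, dollars per week).
items = [
    {"name": "A", "weeks": 2, "cod_per_week": 10_000},
    {"name": "B", "weeks": 2, "cod_per_week": 5_000},
    {"name": "C", "weeks": 2, "cod_per_week": 2_000},
]

def sequential_cost(items):
    """Finish one item at a time, highest cost of delay first."""
    elapsed = total = 0
    for it in sorted(items, key=lambda i: -i["cod_per_week"]):
        elapsed += it["weeks"]
        total += elapsed * it["cod_per_week"]
    return total

def parallel_cost(items):
    """Split attention evenly, so everything lands at the same late date."""
    finish = sum(it["weeks"] for it in items)
    return sum(finish * it["cod_per_week"] for it in items)

print(f"Sequential: ${sequential_cost(items):,}")  # $52,000
print(f"Parallel:   ${parallel_cost(items):,}")    # $102,000
```

Under these assumptions, keeping everyone “busy” on all three items nearly doubles the total cost of delay.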

The general advice that I try to give is: stay away from artificial deadlines. When there are real deadlines, be incredibly forthright about them, and be open to the idea that scope will be variable at that point, because if you’re going to hit that date, something will have to give.

We know that adding a bunch of people to a project with only a small amount of time left doesn’t really solve the problem either. Artificial dates can also be things like the end of the quarter. There’s quarterly planning, so people are anticipating that things should wrap up. The antipattern is that it doesn’t encourage really smart, economically based decisions; you’re creating an additional game that doesn’t really reflect the game that you’re actually playing. Now, sometimes a team wants to set an ambitious deadline for itself for whatever reason, maybe as a potential forcing function, which we’ll discuss a little bit later, and everyone’s bought into that particular idea. But in general, try to stay away from artificial deadlines.

Top

Forcing functions are valuable

I like to make sure that teams understand the concept and value of healthy forcing functions. A really good example: sprints are often misunderstood, and teams begin to resent them. Why are we working like that? It just seems like a tool used by management to get more out of us, and it’s not really doing what we think it should do.

A time box is an example of a healthy forcing function, as is the idea that you’re going to circle back, integrate what you’re doing, and reflect on it as a team. Exposing your work to customers is another interesting forcing function, or a variant of the same one. What you often find is that healthy forcing functions get co-opted by other parts of your company and are misunderstood. But that doesn’t detract from the idea that forcing functions are really effective. A team might commit to sharing what it’s learned in a broader setting every two weeks. That’s a bit of a forcing function. It forces them to reflect on what they’ve learned. It forces them to de-bias what they’re working on and figure out how to present it. That’s an example of how a healthy forcing function might be helpful.

Some teams do things that other people might regard as very aggressive; for example, they may agree that they don’t want to have any open user stories over the weekend. They want to force things to be wrapped up. Again, that can be seen as incredibly passive-aggressive, or as incredibly dictatorial on the part of some manager. But a team that genuinely does its best to make that happen can benefit from that level of reflection and integration. Another example is agreeing as a team that within the next 15 days, we’re going to get something in front of customers, or we’re going to do a certain number of customer interviews, or some other healthy forcing function.

There’s a glossary of human-computer interaction that defines a forcing function as something that’s supposed to snap you out of automatic thought and force you to consider what’s happening.

I love that definition because the word “force” implies imposition and discomfort. But when used appropriately, what we’re doing with forcing functions (of all these different types) is snapping ourselves out of automatic thought and considering what’s actually happening. It’s also an interesting lens for judging how effective your current forcing functions are. For example, some people have quarterly activities, quarterly events, or quarterly reviews. You might find that a quarter is way too long to be an effective forcing function. It’s not near-term enough: the problems were already happening, and you’ve waited until the end of the quarter for them to materialize on your radar. That’s extremely difficult. Using this concept of the healthy forcing function and breaking out of automatic thought is a good frame both for considering potential forcing functions that you can add as a team and for analyzing the ones that you already have in place.

The question then becomes, what do you do if the forcing function isn’t that effective? One reaction could be to make it more frequent. The other might be something like someone saying “We do a status check. That’s a forcing function.” Well, is it really? Is it snapping you out of automatic thought and prompting you to take action on the thing? No. But doing something like limiting maximum story sizes to a couple of days would be a forcing function.

Top

Not hiding dependencies (and dependency wrangling)

As teams scale, there’s often this strong push to split up into two teams, to create smaller teams. You’ve got this team of eight people and it’s starting to get big, and you think it should be two teams of four people.

At a certain size, that makes sense. But often what you get with growth is the emergence of lingering dependencies. Although one team might be aligned on one part of the product and the other team on another part, they still share something. They may share how they deploy, knowledge about the customer, the approval of the CEO, or any number of things. What’s interesting is that organizations instinctively understand that these independent teams are going to somehow be faster or better able to make decisions, but they gloss over the number of dependencies that exist between those teams. This gradually slides into people jumping between these teams or being shared between teams, trying to support both at once.

It would seem that when it becomes a problem, we’ll just call it out. But there’s a whole series of nefarious, tricky illusions that crop up. We bring in a shared team and start to play Tetris with it: everyone competes over that team, it helps where it can, and it does its best. They might even make do…until little cracks start to form. What corners are they cutting to make this possible? The teams start to work around these dependencies; they do whatever it takes. “Well, if that team’s blocked right now, we’ll work on this other thing. We were really blocked on our most important thing, but we’ll find something else to do.”

It can be incredibly hard to pin down what’s happening, and it can be incredibly difficult to understand that it’s those dependencies that are causing it. As a result, I recommend that teams very clearly visualize the dependencies; don’t hide those dependencies under a kind of opaque planning process. That could be managers in a back room trying to wrangle the different dependencies, or some quarterly dependency wrangling done by directors. It’s crucial to really expose those things because when you expose them, the cost of those dependencies becomes really clear, and you’re less likely to play Tetris with those particular dependencies as you work things out. Sometimes a centralized service is the right thing to do, and sometimes it isn’t.

In some cases, hybrid models are the correct approach. I believe that teams can pick the right approach for their situation. But you want to be able to see the situation for what it is, and when you rush to create the illusion of a bunch of independent teams, these kinds of dependencies slip into the background to be navigated and wrangled by people—you actually start to hire people whose job it is to wrangle these dependencies. As a result, they’re not going to try to eliminate them anytime soon. Once you institutionalize these dependencies, it becomes very hard to pin down what you’re dealing with.

The reason this relates to measurement and being impact-driven is that if you’re primarily playing Tetris with these dependent teams, you’re often going to have your hand forced. You’re not going to have the flexibility to pursue outcomes. You’re going to be primarily limited by those dependencies and what you do and how you work.
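If it helps to make “expose the dependencies” tangible: even something as simple as the following sketch (team names are hypothetical) beats a planning process that hides them.

```python
# Trivial sketch: list cross-team dependencies explicitly and count who
# everyone is waiting on. Team names are hypothetical.
from collections import Counter

dependencies = [
    ("checkout", "payments-platform"),
    ("checkout", "design-system"),
    ("growth", "payments-platform"),
    ("growth", "data-platform"),
    ("mobile", "design-system"),
]

waited_on = Counter(provider for _, provider in dependencies)
for team, count in waited_on.most_common():
    print(f"{team}: blocking dependency for {count} team(s)")
```

The value is not in the code but in writing the list down where everyone can see it.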

Top

Products and features are temporary

One thing you realize with a lot of software as a service products is that features, and maybe to a lesser extent products, are temporary. They are temporary delivery mechanisms to meet human needs and deliver some value or capability.

It’s really important to keep this in mind: companies are constantly disrupting how they deliver value. Something that’s currently done by a human being and takes a long time might be taken over or augmented by machine learning. Something that was really rough and involved a lot of steps is going to be disrupted. They’re going to find a faster way to do it, or use a voice interface, or any number of things.

The reason this is really important is that with software products, we tend to get this idea that we’re assembling the Mona Lisa, just in small parts: that there’s this perfect product and we’re just evolving toward it. We’re building it incrementally, and it exists out there in the future; it’s just that we don’t have anywhere near enough time, so we’re going to build it in small parts.

What that doesn’t take into account is that of the products many of us love today, some have been around for a long time and are updated every year. Others have been augmented, maybe also updated every year, but now there’s a developer ecosystem that we never anticipated. Maybe there’s a template store that was never available before, or maybe we start paying for it as a subscription, whether we like it or not, when we used to just buy the update. What matters for product development is our knowledge of the value exchange, our knowledge of what the customer values and how we can help them. That really persists over time, and it will persist even when the current delivery mechanism gets turned into something else.

Consider, for example, all of the companies that had to make the shift to mobile and the new interfaces that were required, having to reimagine what they were doing for mobile. It was really their deep understanding of the customer that empowered them. They had to learn this new form factor, how it worked, its limitations. But what they were able to carry over was this deep sense of what the customer was going to accomplish.

The other important thing here is the idea of “one team designs it, another builds it, another delivers it, and yet another runs it.” That separates people from a sense of the landscape and the problem, and then some of those people are expected to run the thing for a really long time. Some products do end up running for a really long time. But when we take the view that we should constantly be reimagining how we deliver, what we also tend to do is structure our teams around value streams instead of around particular pieces of technology or particular touchpoints. That can prove very, very effective. It is messier; it’s often easier to have backend, frontend, mobile, or other teams. But what this does is connect people with that persistent human need.

Top

Shifts that necessitate all of this

A good question is, what is changing that might necessitate changes to product management, how we build products, or product development in general? There are a couple of relevant factors.

First, the division between “the business” and the teams is dissolving: increasingly, the teams themselves are recognized as the business, as opposed to there being a dichotomy. Teams increasingly have direct access to users and stakeholders, which means there’s less need for a customer or user proxy, something you see in a lot of cases where someone talks to someone else, who talks to someone else, and someone else again knows what the customer wants. That’s extremely disconnected from the team.

It also means that teams increasingly have direct access to insights and data. As a result, they’re expected to make faster decisions. There are faster feedback mechanisms and, with the rise of devops, less of a “ship and forget” mentality. There is a growing focus on teams being independent and supported. You see teams being able to independently ship things and deploy into production, which is huge. Cloud infrastructure and services have grown dramatically, which has lowered the barriers to shipping. What teams can now do in the first six months of their existence, compared to the late 1990s or early 2000s, is just incredible. User experience is now seen as a key differentiator. It used to be that as a product manager, you could get by with just a little UX, but that’s extremely hard these days, and the bar for user experience continues to rise.

That’s being driven by business-to-consumer mobile, among other things. Subscription models are greatly changing how we view products because customers expect enhancements and improvements over time. They’re buying a subscription to a stream of innovation versus buying a product that they hold in their hands. It also changes the dynamics around new product introductions: you’re much more likely to see a minimal product that’s then evolved. And the workplace is changing as well, which changes the expectations of team members. Expectations around autonomy and flexibility are shifting: people expect to have more autonomy, and their expectations around organizational transparency are shifting too, including whether the work they’re doing is making a difference and what the bigger bets of the company are.

Finally, what you see is also that the touchpoints people have into these products are becoming more and more complex. In the mid 2000s, you’d hear a lot of talk about “omnichannel,” the idea that people were going to access products through many different touchpoints. That’s coming to fruition now, where the product is likely to span many teams with many different areas of specialization. It might be that the touchpoint or single point of access sits with one team, but that team is likely dependent on other teams for a lot of things. This was clear at one of my prior jobs where it was very hard to decouple the people who knew a lot about a particular persona or actor from these individual touchpoints. You couldn’t really solve that in a vacuum. In such a case, the whole company becomes the product. As you start to see these kinds of service ecosystems like banks, insurance companies or whatnot embrace the idea of product thinking, you start to understand that whether you call it a product or not, it’s a very deep, rich service ecosystem with a lot of different touchpoints and a lot of different technologies that are used to interact with customers. The trend is clearly toward more complexity.

Top

Expertise as a service (vs. Ticket takers)

With certain functions, you’re seeing a shift to at least offering some of their expertise internally as a service. Here, I’m thinking of data science, analysis or analytics, UX research, and a number of other functions in these complex product organizations. Their goal is to not be the bottleneck for product teams attempting to access that expertise. It’s very hard to hire some of these particular roles and embed them on every single team; some are very specialized functions.

One model is to create all these dependencies and run that function as essentially a ticket-taking service for the rest of the product organization. Another model is to acknowledge that yes, there is going to be some work that those teams end up doing, but really their goal is to uplevel all of the other teams: help them get the information they need for their efforts, or provide analysis, or provide the tools and interfaces they need to do their work in a self-service way.

One thing you’re increasingly seeing in these complex product development organizations is a real spectrum of how teams function, and a willingness (or at least an interest) to provide some of these services to teams in a way that preserves the independence of those teams instead of making them dependent.

A classic example is centralized analytics functions; in some organizations, analytics operates as a question-and-answer factory. People ask them questions, they deliver answers, and they might maintain the tools necessary to do that faster or more accurately. There’s a level of expertise in what they do, and they’re funded like that: everyone contributes to their budget. A newer, evolving way to think about the problem, probably driven by the real desire for teams to have a level of autonomy and independence, is mixed models: partnering closely with an individual product team on a short-term basis to get them going, or alternatively offering a platform, a series of services, or consulting, but with the intent of upleveling that particular team in that field of expertise.

This has a dramatic impact. I’d say the same thing happens with operations, and even devops. The idea of devops is not necessarily to leave operations as a shared service that everyone is dependent on. It’s often to allow independence and autonomy, and to centralize things when it’s helpful to centralize them; it’s not about centralizing simply for the sake of cost cutting or cost limiting.

When you think about structuring the product organization, you’ll have some areas of specialization. But the minute you silo them off and don’t allow them to connect directly with other teams, to embed when it makes sense, or really to think about what they’re doing as products, or think holistically about what they’re offering, you run a lot of risks and create a large number of dependencies.

Top

Perspectives on roadmapping

The way teams approach roadmapping is changing for a number of reasons. I have to admit, when I do a larger workshop and ask an audience, “How many of you are using a swim-lane-oriented roadmap with quarters, or dates, or months or something similar?” a large percentage of the people raise their hands. What’s interesting is that when you explore some alternatives, for example, user story mapping, which was invented and popularized by Jeff Patton, you actually do see a few hands. So it looks like some people really are using user story maps as a form of roadmap.

That’s interesting because a user story map is very powerful for really understanding the bigger picture of a product when you’re delivering in horizontal slices across some experience in the product or some kind of value stream. It’s very interesting to see teams doing that because it challenges the traditional swimlane paradigm. You also see a number of teams shifting toward visualizing their roadmap on a Kanban board. The net effect is a prioritized list of items, in contrast to laying them out across a calendar. That can be pretty powerful, though I do think people often still wonder when the thing will be done, or they still view it through a very project-oriented lens.

That’s a change from always thinking about it in terms of swimlanes. As teams become more autonomous, the demand on them to be extremely specific about all the things they’re going to build a whole year in advance goes down, provided they’re clearer about the areas of the product and the missions they’re going to focus on. There’s less of a need at that point to lay out all these features in detail. You might see something like a list of problem statements or opportunity statements in priority order according to the size of the opportunity, with the team pulling them off the list, one after another.

Simpler roadmaps emerge because a lot of the solutioning will happen once the team has embarked on that particular mission. Now, that doesn’t preclude the team from establishing some idea of a release plan or release batches and communicating it to the rest of the organization. That may evolve as the team is working; it certainly will if they’re putting things into production. It just means it doesn’t need to be some preset thing right off the bat. The same applies to putting roadmaps on Kanban boards. You also see hybrid boards that not only show the larger bets as they move across the board but also show the more fine-grained work that’s happening, nested underneath those particular missions.

Hybrid boards are an interesting evolution of the standard roadmap. Roadmaps are primarily a way to communicate, a way to have a conversation about what you’re planning to do. But often they need to be augmented with other artifacts: mind maps, models for how you think value is going to be created, belief maps, things like that. The real point is that as you shift to these more opportunity-focused teams, the needs around roadmapping change, and you might have to bring in other artifacts to help. You’re talking less about the output you have in mind and more about the overall mental model for the problem that you’re solving.

Top

Measurement to inform decisions

Many teams spend a lot of time obsessing about what they should measure. It’s an interesting discussion because there’s a lot baked into it. Often they’re wondering what their competitors are measuring. And in some ways, they’re scared. They’re scared that they will measure the wrong thing, but not that measuring the wrong thing might lead to the wrong decision. Certainly that’s the case sometimes, but more often they’re worried that they will be graded or judged or penalized in their organization for somehow measuring the wrong thing.

As a result, you see this obsession with looking through the various frameworks and trying to have someone paint a very precise picture of what they need to measure. Certainly, depending on your industry or what you’re doing, there are some things that bubble to the top. E-commerce is a good example: you might think about average cart size or cart abandonment. There are also things like cost of acquisition and lifetime value of customers, which are so ubiquitous that you see them regularly.

But what you don’t often hear, and what’s important, is that what you measure should be related to the decisions that you need to make: where you see uncertainty and where you’re willing to pay to reduce that uncertainty. That seems incredibly simple; why would you measure otherwise? But you will encounter teams that have been told to be more data-driven yet aren’t really able to decompose their strategy, or decompose the bets that they’re making, and describe those bets in a way that relates to what they need to measure. It can be difficult to tell if this is a skill issue or a fear issue.

Another challenge is that often, some other part of the company is asking for a lot of certainty. The team may be trying to advocate for working on the user experience or focusing on something else, and someone in another part of the company is saying, “Are you sure?” And when the question is “Are you sure?”, you’re going to try to find the perfect metric. You want to find the thing that’s bulletproof. And that’s problematic because you rarely have that level of certainty with the things you’re working with in product development. Over time, you sometimes establish causal relationships, and that’s a really powerful thing when you do, but often you’re just dealing in layers of uncertainty.

The real operative questions that I discuss with teams are: What are the big bets that are still open? What do you need to explore in order to understand this thing? What’s working right now, and what assumptions have you made that you want to ensure don’t have an adverse impact? You want to make sure those things continue to work. So: what do you need to learn, where do you need to reduce uncertainty, and what do you want to make sure keeps working?

Often, startups have built a model of their particular business. You also want to measure things that will help you build an increasingly coherent model of how you think value is generated and stored, and of what behaviors characterize the most successful customers.

That might not immediately relate to a decision. But imagine if someone like Airbnb found some strange twist in terms of their business model, perhaps found out that some percentage of everyone who had an Airbnb stay immediately went home and opened up their own Airbnb (which may actually be true). That might alter how they think about how they acquire new hosts. If that were the case, maybe they would change the experience. It’s something to think about.

Top

Reflection is what counts

In some businesses, you walk into work every day and see numbers move on a dashboard. You know that on that day, the 10 experiments that you’re running did X, producing a certain amount of money for the business. That works in some cases, but that doesn’t work in our particular environments. For us, the important part is that there is a level of reflection about these bets, that teams are talking about these things and following up on their particular experiments. That’s what you notice in places with less success theater. There is more introspection and rigor in what the team is doing. Some benefits do take a while to emerge.

A team is always juggling activities, trying to understand the various impacts. There are a couple of useful activities there. One is to make sure that when you’re kicking off a particular effort, you’re at least attempting to create some projections or forecasts about what you think will happen if you’re successful or what the result will be. This could also be anticipating a likelihood of failure. If you do that at kickoff and you allow those things to be revisited periodically over the course of the effort, that gives you the blueprint to come back and reflect on the particular bet you made.
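One lightweight way to do this, sketched below with invented field names and numbers, is to record each forecast as a range plus a review date, then score it when the date arrives:

```python
# Minimal "forecast log" sketch (field names and numbers are invented).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Forecast:
    bet: str
    metric: str
    low: float            # pessimistic end of the projected range
    high: float           # optimistic end of the projected range
    review_on: date       # when the team agreed to revisit this
    actual: Optional[float] = None

    def review(self, actual: float) -> str:
        self.actual = actual
        verdict = "hit" if self.low <= actual <= self.high else "miss"
        return (f"{self.bet} / {self.metric}: projected {self.low}-{self.high}, "
                f"got {actual} ({verdict})")

f = Forecast("Self-serve onboarding", "weekly activations",
             low=120, high=200, review_on=date(2020, 9, 1))
print(f.review(actual=95))  # close the loop instead of forgetting the bet
```

A spreadsheet works just as well; what matters is that the prediction is written down before the work starts.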

That kind of follow-through is something you just don’t see a lot of. When you visit teams, you see a lot of talk about what they’re going to do, a lot of talk about the immediate release of the thing and celebrating it. But you don’t see a lot of follow-up on what you thought would happen. It’s very good practice to put regular decision reviews and follow-ups in place, and to do it in a disciplined way that keeps you honest with yourself. One advantage of linking this to the kickoff, the pitch, or the one-pager, and then closing the loop, is that you avoid writing a revisionist history about what you’ve done, which lets you be more transparent with the team about their efforts.

One of the significant challenges is just carving out the time to do that. At the end of the day, you are what you invest your energy in, and this tends to be more a question of energy. You don’t want to create an approach that’s overly structured or inspires teams to bend the truth, to show off, or anything like that. You want something that’s relaxed and informal enough that people are able to talk these things through frequently and deeply enough that it can really guide future decisions. The real benefit, especially if teams are transparent and talk about things they thought would happen that didn’t end up happening, is its ability to inspire other parts of the company to think a little bit more about what they expect to happen, especially when they’ve made requests of product development. That level of transparency and honesty can be infectious across the organization. This is critical.

It’s often said that the retrospective is the hallmark of agile, in the sense that it typifies the ethos of inspect-and-adapt and improvement that transcends any particular framework, practice, or tool. It’s very interesting to look at the effect retros have on product teams or design teams and how they approach continuous improvement, or how even larger product groups approach continuous improvement above the team level. Even for a single team, continuous improvement is hard and requires safety, support from the organization, and a number of other things to be effective. So when you see product groups doing only a quarterly check-in, an engagement survey every six months, or something like that, it’s hard, because these are often the groups that are deciding on planning practices and doing things that span many teams. They’re disconnected in terms of retrospectives and not really approaching continuous improvement.

With product practices, it’s really easy to say that each individual team should just independently handle their own business. However, there are often elements of a more global planning approach spanning many teams, and there may be issues with funding teams or any number of other questions. This makes it important that those groups also approach continuous improvement in a disciplined way. Are they limiting change in progress? Are they holding themselves accountable to experiments in process? Are they being transparent about what they’re experimenting on? Do they have feedback loops with all the people involved across the organization? When it’s suddenly up to product to come up with a quote/unquote “product development approach” or “software development life cycle” or anything like that in a vacuum, it’s a little ridiculous because it really is a team decision about how they’re going to approach that. They really shouldn’t be doing that in a vacuum—they should be inviting more people in. Product teams, teams of teams, and functional groups of product managers also need to consider their continuous improvement activities. They need to consider their retrospectives, and they need to choose what model they’re going to use to improve. The reason why I mention the model for improvement is that there are a variety of approaches. Sometimes the approach is just “talk to your manager if something’s wrong, and they’ll sort it out.” Other companies are much more involved and include the whole group of people in these decisions, making it much more of a consensus-driven thing.

There’s no one right approach for every environment, but what’s important is that you actually have an approach. Take a standard product development idea: “Okay, we’re all going to do OKRs.” Say you want to roll that out to everyone in the organization. What mechanism is in place for people to give feedback on it? That’s the critical question, because if it’s not working, if you haven’t been clear about the job you want it to do, and if you don’t give people a channel for feedback, you end up imposing global practices without any sense of whether they’re working for the people involved. So think about how you approach continuous improvement, not just at the individual product team level.

Top

Shared understanding / vocabulary is super hard

Every time you think that you’ve got shared understanding nailed, you don’t. It turns out that there’s more involved, more context that needs to be discussed. I mention this because you see a lot of the practices people put in place as a way to avoid the harder conversations. They try to isolate them, to have a smaller group of people have those conversations, to oversimplify what they’re doing. But when you really boil it down, a lot of what we discuss needs more explanation. I do an exercise with teams where I ask them to make a list of the words that they’re having a tough time grappling with as a team. It’s amazing to me because it turns out to be words like sprint, vision, results, outcomes, or MVP.

This really highlights the attention we need to place on shared understanding. People are so obsessed with moving quickly and staying busy that this type of work tends to get shortchanged. The reason this relates to measurement is that even understanding what the data means, or what business metrics we’re trying to move, is nuanced and often takes a fair amount of explanation to fully grasp. Even something as simple as taking the time to create an in-depth guide to the metrics for a particular product, or a starter kit to get someone going, or starter dashboards or notebooks for understanding what’s happening, is hard.

One temptation here is to try to just document it, and that might help people in some ways. But in my experience, you can’t avoid the hard work of having people spend time with each other, test each other’s understanding, restate things, and redirect each other. It’s kind of funny that MVP (minimum viable product) comes up on the list of misunderstood words, because it is understood in many different ways. What’s fascinating is that people seem obsessed with trying to come up with a single definition of MVP instead of discussing the nuances in their particular environment that they’re grappling with at the moment.

It’s almost as if they’ve picked that particular phrase as a kind of focal point for debate. And the debate is what they should really be having; the definition of MVP is secondary. This happens with a lot of these popular practices: it becomes about the practice way more than about the actual ideas. Truth be told, these ideas are nuanced, and yes, they may take a little experience to grasp, but a lot of them are not rocket science, and MVP is a good example of this. Are we using this to learn? What’s the goal here? Did we just need to get something out the door?

Just calling it what it is would be very powerful. In short: there are no silver bullets. It takes repetition, discussion, repetition, discussion, synthesis, repetition, and discussion, over and over again, as well as restatement and testing to really get people on the same page. It’s so tempting to shortchange that and just hope that it all works out, or to try to document your way around a particular problem. But there’s often no escaping the work of discussion.

Top

Prioritization spreadsheets

It’s fairly common to meet teams that have a complex prioritization scheme. I have mixed feelings when I look at them, because I’m genuinely happy that a conversation has happened, ostensibly to make something explicit, and at least you get some sense of what they value. One of the ironies, though, is that once someone puts one of these spreadsheets together, people tend to gloss over or forget the conversation about why the thing exists in the first place. You also see a hesitancy to update it. That ship has sailed; it’s been put together, and any additions seem to add complexity: why would we make things more complex at this point? As new beliefs emerge, older beliefs are challenged, and sometimes intangibles need to be added. That sounds like work.

For example, if someone says, “Wow, I have a hunch that our user experience is a really major part of this,” they’re not really encouraged to inject that into this kind of spreadsheet. You also get all of these pseudoscientific aspects. For example, you get scales, and you arrange things on these scales; it can be really difficult to make any sense of it. Once you’ve put something on a scale, it yields some prioritization or score for people to consider. Often, if that score doesn’t feel right, you see people fudge the numbers or move them around without necessarily having a discussion first. Very quickly, these schemes become a bit of an antipattern.

What you really want is some open model, iterated on over time, that people can plug their efforts into to generate forecasts of the outcomes of what they’re working on, in a way that compares apples to apples. Other people can use the same scale, and you can compare these efforts. One example is using cost of delay, which is simply forecasting some opportunity cost in dollars per period of time. That’s how you’re going to state your particular effort. You’re going to say, “I think that this effort could be worth $100,000 a month,” or some given range of values.

Now, the extremely important part of that is, again, the discussion. Yes, there are a lot of what-ifs; there are a lot of footnotes to that number. As you come up with it, you’re making a lot of assumptions. The question is: are you surfacing all those assumptions to your team, and are you continuing to refine that particular model? Another classic pitfall is oversimplification. Someone will say “value,” and someone will say “effort.” Now, that’s not necessarily bad, and you could come up with a value number. But as we’ve discussed, sometimes that value has a huge range, and other times you haven’t really talked about the component parts of that value in a way that helps communicate your strategy.

That’s an example where, if you have a framework that describes how you believe value is created or the key levers that you have to move, and you plug the work you’re doing into that particular framework, at least you can have a discussion about the impact that you think that will have on value instead of just this vague word “value.”
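As a toy version of such an open model (all names and numbers invented; the ranking heuristic, cost of delay divided by duration, is sometimes called CD3 and is my addition, not something this post prescribes), consider:

```python
# Toy open prioritization model: each effort carries a forecast range for
# cost of delay in $/week; rank by cost of delay divided by duration.
efforts = [
    {"name": "Checkout revamp",   "cod_low": 20_000, "cod_high": 60_000, "weeks": 6},
    {"name": "Mobile onboarding", "cod_low": 5_000,  "cod_high": 15_000, "weeks": 2},
    {"name": "Reporting export",  "cod_low": 8_000,  "cod_high": 12_000, "weeks": 4},
]

for e in efforts:
    midpoint = (e["cod_low"] + e["cod_high"]) / 2  # crude point estimate
    e["cd3"] = midpoint / e["weeks"]

for e in sorted(efforts, key=lambda e: -e["cd3"]):
    print(f'{e["name"]:<18} CoD ${e["cod_low"]:,}-${e["cod_high"]:,}/wk '
          f'-> CD3 {e["cd3"]:,.0f}')
```

The point isn’t the arithmetic; it’s that the assumptions, the ranges and the durations, are visible and arguable instead of buried in a score.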

Top

A longer-term view of customers…LTV and behavioral cohorts

Most people really narrow the time span they’re looking at when thinking about analytics or measurement. What I mean is that when someone is trying to think about whether people convert or why they churn, they look at it over the span of a day or some other short period of time. But there’s a whole other realm of trying to understand the behavior of users or customers as it evolves over months and years.

This is especially important in business-to-business software, where people are often required to use the software. It can take days, months, or even longer to become a power user of that particular software. As people pay via subscriptions, it becomes increasingly important to think about the lifetime value of that customer and about very subtle differences between customers. What do I mean by that? One example is behavioral cohorts or behavioral personas: subtly different ways of using the product. In one little snapshot of time, it can be difficult to say, “Oh, these people are vastly different.”

What’s really important is to see how their usage may have evolved over time and whether they were actually becoming more efficient in their use or were deriving more value from it. An important consideration for product teams is the full view of a customer’s experience with your product. This can extend further upstream to when they were assessing the product or were looking at your marketing. It definitely spans across all of these initial touchpoints: when they’re starting to develop interest in the product, starting to become more proficient in the product and exploring it. As you start to add new features, is this the type of customer that embraces new features or do they just not use them? What you find is that there is often a lot of low-hanging fruit.

It’s instructive to compare the most effective customers and their behavior in the product with the customers that have a harder time or that don’t realize the entire value of the product. It can be something as simple as one set of customers tending to use a particular feature that makes them a lot more efficient or effective, while other customers never really discover that feature and don’t become as effective.
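A sketch of that comparison (entirely fabricated data) might look like this: contrast feature adoption between customers who became effective and those who didn’t, and look for the biggest gaps.

```python
# Hypothetical data: which features do effective customers use that
# struggling customers never discover?
customers = [
    {"id": "c1", "effective": True,  "features": {"bulk_edit", "templates", "api"}},
    {"id": "c2", "effective": True,  "features": {"bulk_edit", "templates"}},
    {"id": "c3", "effective": False, "features": {"templates"}},
    {"id": "c4", "effective": False, "features": set()},
]

def adoption(group, feature):
    return sum(feature in c["features"] for c in group) / len(group)

effective = [c for c in customers if c["effective"]]
struggling = [c for c in customers if not c["effective"]]
all_features = set().union(*(c["features"] for c in customers))

for feature in sorted(all_features):
    gap = adoption(effective, feature) - adoption(struggling, feature)
    print(f"{feature:<10} adoption gap: {gap:+.0%}")
```

A big gap on one feature is a candidate for the low-hanging fruit mentioned above.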

The key point I’m trying to communicate is that to be more holistic in terms of thinking about measurement, you need to think beyond these initial interactions that are very commonly tracked, like acquisition and converting into the product. You need to consider the longer-term behavioral characteristics of those customers, especially if you’re hoping to make them more effective at something. That won’t happen overnight. The question is: are you changing their business? Is your product realizing the value that you thought it was?
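Since lifetime value anchors this longer view, a back-of-envelope version (one common simplified formula, with illustrative numbers) shows why retention dominates it:

```python
# Simplified subscription LTV: revenue per customer per month times gross
# margin, divided by monthly churn. Numbers are purely illustrative.
monthly_revenue = 50.0   # $ per customer per month
gross_margin = 0.80
monthly_churn = 0.04     # 4% of customers cancel each month

ltv = monthly_revenue * gross_margin / monthly_churn
print(f"Estimated LTV: ${ltv:,.0f}")   # $1,000 with these assumptions

# Halving churn doubles LTV under this model:
print(f"At 2% churn:   ${monthly_revenue * gross_margin / 0.02:,.0f}")
```

Under this model, the long-term behavioral work described above moves the denominator, which is where the leverage is.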

What you find even in business-to-consumer products is that there’s just so much interesting variety on a person-by-person level and across these cohorts that we tend to not really fully wrap our heads around the variety of interaction patterns. Take something like YouTube: there’s probably a multitude of big patterns in terms of how people consume that information. A fundamental question is, do you have a sense of how people are consuming that information?

Top

Product strategy part 2 – don’t outsource it

This is a continuation of the discussion of product strategy, but from a different angle. As a product team, if you don’t clearly describe your strategy and the bets that you’re making in a persuasive way, you’re going to outsource the product strategy to the rest of the company. They’re going to fill in the blanks that they think need to be filled in. They’re going to be out there trying to make deals. In lieu of you being able to really describe the coherent story of what you’re doing, they’re going to come up with it themselves. As a result, in a lot of environments where I hear that there’s not as much measurement as there could be, there’s a strong tradition of Sales and other folks viewing product development as essentially building stuff that they need built so that they can close deals.

One way to view this tendency is that they might not have a real sense of the upside that the product team could explore if they were given more flexibility to do that. What I’ve noticed in general is that in those environments, they lack a layer of product leadership that’s able to really explain the product calculus to other people in the organization in a way that is reasonable and holds water.

I remember talking to a salesperson who had traditionally worked in organizations where they kept a list of the requests they were expecting so that they could close particular deals. And this person said something equivalent to, “Now that there’s a product that’s a really, really good fit at the moment, I find myself not really selling on the roadmap. I find myself just talking about how successful we’ve been over the last year and about our overall strategy. That seems to really resonate with people.”

That’s because a lot of the time, these customers are entering into a relationship with you as a company, and they want to understand the direction you’re taking. They want to have a dialogue around strategy, to be crystal clear about what’s happening, without trying to abolish the natural level of uncertainty. This makes it possible to scale the discussion of product strategy beyond just the product and engineering team or the product development org.

This is an important point when it comes to thinking about measurement and thinking about what you’re doing. Again: what you’re trying to do is create a scenario where the organization embraces product teams as problem solvers, not just delivery units. Once you embrace them as problem solvers and understand the potential upside that exists there, you’re much more interested in giving them the tools they need and in accepting that they’re going to iterate and change course based on what they learn. In summary, the important consideration with all of this is: Can you create a narrative around strategy throughout the whole organization? Are you making it crystal clear what you’re doing? Is it changing all the time?

If it is changing all the time, that really suggests that you’re being overly reactive. You might be learning, and that’s good, but if it’s shifting that often, it suggests that maybe you weren’t really honest with yourself about what you thought you knew the last time. So ask good questions: does it feel like the organization is a moving target and no one knows what’s going on? Do some things stay consistent quarter-in, quarter-out, year-in, year-out? If you can get those things to happen, you create more cover for the teams to really pursue opportunities and have an impact.

Top

Connected / Embedded design

Another factor that heavily impacts the approaches you can take as a team is who the team actually includes. At the end of the day, with so many of these practices and guidelines, it boils down to having a cross-functional team that has direct access to users, customers, and data about them—the ability to get things into production, out to customers, and to test them.

Once you have the people that you need, for example, a designer or data scientist: if those people are not embedded on the team, don’t feel like they’re part of the team, and aren’t protected like real members of the team, you’ll always face challenges. You’ll have to optimize around the fact that those people are not available, that they might go upstream, might design in a vacuum, or might only be available some of the time.

Part of the challenge is simply creating environments where these cross-functional teams can emerge. As a rule, you don’t have unlimited amounts of everyone, so sometimes you need to improvise and carve out a week or two (or three) where a broader group of people can be involved; initially, you may need more perspectives to really understand what’s going on. But are you starting together? Are you getting everyone in the room? Design in particular is a critical role to embed on the team, so that designers self-identify as full members of the team.

Naturally, they’re still involved with the broader group of designers, and there might even be centralized aspects that they draw from and bring to teams as ambassadors. But when designers are separated from teams, they tend to get relegated to working only on the visual treatment or the individual problem, really just coming up with an interface. They’re not tapped to truly understand the problem, and that can be an extremely big deal and a large negative. Frankly, this relates to other forms of design, or to architecture, where there’s just one lead architect who needs to figure out exactly how to implement the thing you’re working on. They are distinct from the team, and that’s going to encourage certain types of behavior. This should be an important consideration.

This also extends to roles like marketers. Collaboration is increasing, and the boundaries between where the product starts and marketing ends are increasingly porous. If you need someone with that perspective and those particular skills, and they’re located in another part of the building and aren’t involved with the team, it becomes extremely difficult to get the right voices in the room. This really hammers home the point that product teams are becoming increasingly independent, focused on a particular mission, and zeroed in on problems. It’s not enough for them to merely have access to the people they need if those functions remain siloed off from the teams.

Top