Note: I’ve been meaning to share this for a while now. I was reminded of this training material a couple days ago while brainstorming new workshop content with my Amplitude coworkers. A cool thing about my gig at Amplitude is that I get to work on up-leveling the product public at large (not just customers). Follow me on Twitter (@johncutlefish) and I’ll let you know when we launch new stuff.
I have always been a fan of one-pagers — short, space-constrained descriptions of a proposed product bet. A single page is something that you can put up on a wall for everyone to see. It takes 3–6 minutes to read. If you use GDocs, you can invite commenting and suggestions. Overall, one-pagers encourage crisp communication, “product thinking”, and collaboration. As with most things, it is the “conversation that counts”…you’ll know one-pagers are “working” when they inspire a lot of interesting banter, edits, challenges, clarifications, etc.
You’ll quickly note that I don’t offer a template. Design your own. This post should give you enough guidelines to establish a starting point with your team.
Why One-Pagers?
One-pagers are used to build shared understanding around opportunity, value, impact, outcomes, risk, and viability. They are short, easily consumed, and collaboration/feedback friendly. They are no-fluff, no-spin, and to the point. One-pagers are not meant to communicate detailed specifications, requirements, and plans.
One-pagers communicate the data, insights, and beliefs behind potential “bets”. Most startups are not dealing in the land of “sure things”. The goal, therefore, is not to manufacture certainty or pitch your favorite solution. Rather, a good one-pager takes a data-informed perspective on risk and return. With one-pagers, we hope to…
- Improve decision quality
- Improve outcomes
- Reduce risk
- Reduce rework and thrashing
- Reduce batch sizes
- Encourage cross-functional collaboration
- Discourage opacity
- Encourage novel solutions to valuable problems
A Bet Thought Exercise
A good thought exercise while writing a one-pager is as follows:
Would you bet $5,000 of your own money on the success of this effort? Why? Why not? On what terms? For what return? How would we know whether you had won/lost the bet? What might we learn early on that would encourage you to increase your bet to $10,000? Or decrease your bet to $1,000, or $0?
This question strikes at issues of value, independence, outcomes, data, insights, and “testability” (a clear goal). If you have trouble with this question, consider the risk you are asking the company to undertake. How can you close the information gap? Or is that, perhaps, the goal of your one-pager…to learn, and reduce uncertainty?
We tend to fall in love with our plans/ideas. Since the goal of one-pagers is to help us do right by the company, it is important to invite others to challenge your assumptions and potentially add valuable insights/data/perspectives.
But I’m Making It All Up!
When writing one-pagers, it is common to fall into a state of analysis paralysis. “What do I do if I’m not 100% certain about this outcome?” Remember that 100% certainty is not the goal — and is rarely achievable (especially in a startup). What is the smallest experiment you could run that might reduce uncertainty?
Consider two options:
Option 1:
- Addresses a $5,000,000 a month opportunity
- Confident that 5–10% capture is realistic near term
- Clear path to learning in 1–2 months
Option 2:
- A “sure thing” $50,000 a month opportunity

If given a choice between the two, which one do you pick? Option 1, of course, provided you have confidence in your ability to learn, adapt, and figure it out.
Sometimes, all we need is a range. For example, you might be 90% confident that an opportunity is larger than $1,000,000 per month and smaller than $12,000,000 per month. You have a vague hunch that there is a lot of low-hanging fruit. That may feel terribly uncertain. But my guess is that you’d choose that over an opportunity that is 100% certain to be under $400k per month.
Now if you’re truly guessing, you might as well flip a coin. Maybe focus on another one-pager.
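To put rough numbers on the Option 1 vs. Option 2 comparison above, here is a minimal back-of-the-envelope sketch (the figures are the illustrative ones from the options; nothing below is a prescribed formula):

```python
# Rough expected-value comparison of the two illustrative options above.
option_1_opportunity = 5_000_000                    # $ per month opportunity
capture_low, capture_high = 0.05, 0.10              # "5–10% capture is realistic near term"

option_1_low = option_1_opportunity * capture_low    # $250,000 per month
option_1_high = option_1_opportunity * capture_high  # $500,000 per month

option_2 = 50_000                                    # the "sure thing", $ per month

print(f"Option 1: ${option_1_low:,.0f}–${option_1_high:,.0f} per month (uncertain, fast learning)")
print(f"Option 2: ${option_2:,} per month (certain)")
```

Even the pessimistic end of Option 1 is roughly five times the “sure thing” — which is why the 1–2 month path to learning matters more than manufactured certainty.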
Great One-Pagers Are…
- Collaborative — Have had multiple rounds of cross-functional workshopping, and were probably “messy” at some point
- Independently Valuable — Tackle one (and only one) opportunity
- Outcome Oriented — Start with the desired outcome, not the output
- Understandable — Anyone in the org can understand it
- Generative — Enough information to inspire creative problem solving
- Actionable — Not so open ended that problem solvers will flounder
- Humble — Surfaces assumptions, unknowns, questions, risks, etc.
- Succinct — Minimal fluff, concise
- Testable — Can test whether you’re on track (or not)
- Inspiring — Gets people excited without hyperbole, using qual/quant data
Problems & Solutions
An added twist for modern software product development is that we can incrementally experiment, reduce risk, and exploit new learning — essentially betting on the race/game after the “horses have left the gate”.
Why is this important? It means that our bets (and by extension our one-pagers) don’t necessarily need to focus on a specific solution — provided there is data surrounding the opportunity. Does solution-specific data help? Sure…if the data is valid and robust. But don’t discount the ability of a diverse, creative, cross-functional team to come up with an even better solution (to your compelling problem). Also, consider the various cognitive biases attached to advocating for your favorite solution.
Another way to say this: don’t automatically equate your one-pagers to “projects”, using traditional guidelines for project “greenlighting” or linear, four-phased software delivery life cycles. Project Managers deal in the realm of time, cost, and scope (given a predefined solution). They assume that X is valuable. Product Managers operate in the realm of opportunity, value, and viability. Try to assume a product management mindset. Instead of projects, think in terms of missions and initiatives.
Project Manager: Time, Cost, Scope
Product Manager: Opportunity, Viability, Value

At the end of the day, one-pagers communicate data and insights (about risk, opportunities, assumptions, etc.). The perfect solution to a “meh” problem is not very valuable to the company. So consider nailing down the opportunity first.
Timeframe & Scope
One-pagers are not open-ended. They have a “definition of done”. But this “definition of done” can range from a working/tested deliverable to achieving a high-level business objective. Consider a range of prescriptiveness:
1. Build exactly this
2. Build something that does x, y, z
3. Build something that lets customers do __________
4. Solve this customer’s problem
5. Improve the experience for [some segment of customers]
6. Optimize this metric
7. Generate this short-term business outcome
8. Generate this long-term business outcome

There is no right/wrong implied in this list. But it will impact how your one-pager is perceived and evaluated, the team that will be assembled, and the data you will need to make a persuasive case. A one-pager with a #1 focus will look very, very different from a one-pager with an #8 focus. Notice the trade-offs:
- #1 (Build exactly this) may be a perfectly good bet if there’s a lot of data on your proposed solution. But what if your solution has never been tested? What if a cross-functional team could have come up with a handful of better solutions? What if there’s a risk that your solution is not viable? What if the opportunity is tiny? And how long will it take for us to figure out if the bet paid off?
- #4 (Solve this customer’s problem) feels riskier — we may not know how to solve the problem yet — but if we do solve the problem, we will likely be more confident about the bet’s outcome. Assuming we have data to connect solving the customer’s problem to a business outcome, this might be a better bet.
- #6 (Optimize this metric) will require tight(er) feedback loops. Can the team isolate a leading indicator? Can they identify a beta group that is willing to try “new stuff”? Can they rule out other factors that might influence the metric? The risk: this is hard. The upside: more certainty of a positive outcome.

From experience, I think the sweet spot for one-pagers is #3–#6, unless, of course, you have sufficient data to show that #1 will generate the desired outcome, or that the team is resourced and independently capable of chasing #7 or #8 outcomes.
Importantly, there may be multiple “definitions of done” for a one-pager. At some point, the team may feel comfortable passing the baton to customer success/adoption. Maybe there is an earlier DoD that covers when the team will feel comfortable pushing something into production. Suffice to say, DoD is all about risk management. Finishing early may feel efficient, but it may not address the risk of the effort not generating benefits.
For scoping, also consider the idea of independence, and work being independently valuable. If 80% of the value can be extracted with 20% of the work…then reduce the scope of the one-pager by 80%. If the one-pager tackles two problems, consider focusing on one problem. In the case of a one-pager that is not independently valuable, but is scoped to “unblock” or “unlock” the value of other future one-pagers, consider linking the unblocking one-pager to an effort to extract value (and reduce risk).
In general (a guideline, not a rule), one-pagers should cover a timeframe of between one week and three months. Why the limit? In short, it is dangerous to go more than three months without showing a meaningful outcome, or at least progress towards that outcome. Ideally, teams are “shipping” every two weeks (or more often), and expose the work-product to customers on a regular basis. A six-month one-pager would be perfectly fine if (and only if) there was some likelihood of reducing risk on a regular basis, delivering value continuously, and measuring outcomes.
When considering “duration”, think about the big picture. An effort that takes one month to build — but that does not generate meaningful value for fourteen months — is not a “small” effort. A two-month, leading-indicator-producing bet may be a better choice.
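A crude way to see this with numbers — the monthly benefit and durations below are made-up assumptions for illustration, not figures from any real effort — is to compare when value actually starts landing for each bet:

```python
# Purely illustrative numbers comparing "build time" with "time to value".
monthly_benefit = 100_000        # assume both bets are worth ~$100k/month once the benefit lands

bet_a_months_to_value = 1 + 14   # one month to build, fourteen more months before value shows up
bet_b_months_to_value = 2        # two months to build, leading indicator appears right away

horizon_months = 24              # compare both over the same two-year window

bet_a_total = monthly_benefit * (horizon_months - bet_a_months_to_value)  # $900,000
bet_b_total = monthly_benefit * (horizon_months - bet_b_months_to_value)  # $2,200,000

print(f"Bet A (the 'small' one-month build): ${bet_a_total:,}")
print(f"Bet B (the two-month bet):           ${bet_b_total:,}")
```

The exact figures don’t matter; the point is that “duration” should be measured to the first meaningful outcome, not to the last commit.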
Parting scoping tips:
- Do not split initiatives artificially (e.g., Phase 1, 2, 3)
- One-pagers should be independently valuable
- Solve one problem
- Limit dependencies and constraints
Content
Some common sections to one-pagers include:
- Title
- Tweet-length mission
- Definition of awesome / celebration quotes
- Cost of Delay estimate
- Pivot and proceed points
- Key data points
- Key insights
- Actors (both internal and external)
- Dependencies and constraints
- Operating assumptions
- Open questions
- Assumptions to validate
- Risks to mitigate
- Baseline behavior (the status quo)
- Target condition/state
- Possible interventions
Tweaking Your One-Pager Titles and Missions
In general, I recommend keeping features out of your one-pager titles and missions. Why?
- Features alone are not outcomes
- We tend to fall in love with our ideas. What if a better solution exists?
- Once you’ve described and shared a specific feature in a roadmap, there are few opportunities to switch things up
- Shipping a feature is not the end of the story. Thinking in terms of features ignores challenges like adoption, validation, and iteration
- As the initiative progresses, you’ll be able to provide more specifics on the how/what
Bad:
Add tags to work orders
Good:
Help our ~7,000 in-house maintenance coordinators process work orders 50% faster. From submission to payment-received … make it effortless, and let them focus on finding new customers, not paperwork. No more “I’m overwhelmed, and can’t find anything! This takes hours!”
Bad:
User Permissions Phase 1
Good:
Confident admins with an improved NPS (from 31 to 55).
Unblock our land-and-expand strategy by making it safe for admins to let other internal departments start trial projects. For our 900 customers with 30+ seats, increase the number of read-only trial users by 150% by the end of Q3 2017.
Bad:
New Login and Onboarding Flow
Good:
From “that’s interesting” to “my first book sale” in 30 minutes or less for 95% of new customers starting April 2017
Bad:
Launch WidgetCo Value-Added Service
Good:
Reduce the time it currently takes our customers with in-house 4–7 person marketing departments to create and launch a campaign from 3d to <1d. Move 25% of first-month trials to paying plans. Brag about it at CampaignCon 2017!
Bad:
HopPredictor
Good:
SMB beer retailer customers closing >$450k in revenue, with users in our “savvy tech adopter” category (about 40% of those customers), can expand their businesses by an average of 8% through enhanced consumer outreach. Smarter recommendations and reminders!
Bad:
Harden our deployment pipeline
Good:
Faster feedback. Fewer sleepless nights. Deploy code with 100% confidence and be able to test new features with early-adopter customers in a matter of hours, not days. Reduce pager duty alerts from an average of X weekly, to Y.
Keeping It Simple
A helpful frame is that of behavioral change. It is a good way of side-stepping the question of “solution” and focusing on what you might observe if the one-pager were successful.
One-Pager Questions
Use these questions while workshopping and writing.
It’s OK if you don’t initially have all the answers, but I would expect a one-pager in the next-up position — and the team supporting that one-pager — to have answers to most of these.
- Would you bet $20,000 of your own money on the success of this effort? Why? Why not? On what terms? How would we know whether you had won/lost the bet? What might we learn early on that would encourage you to increase your bet to $40,000? Or decrease your bet to $5,000, or $0?
- Fill in the blanks. With this effort — in the next 6 months — there is a 95% chance we’ll generate [some outcome], a 50% chance we’ll generate [more of that outcome], and a 10% chance we’ll generate [even more of that outcome].
- How have we tried to solve this problem in the past? What happened? Do you mind sharing some data?
- What status quo are we hoping to disrupt? What is actually wrong with the status quo?
- Imagine you had to judge an internal competition to pick the best intervention to solve this problem. You’re responsible for writing judging criteria for your fellow judges. How would you rank submissions?
- What efforts have you taken to defeat confirmation bias, the availability heuristic, information bias, the IKEA effect, and other cognitive biases? How might a less-biased person view this bet?
- Describe the “good news” you hope to elicit as a result of this effort. How might you describe it in a company-wide presentation in a non-success-theater, non-fluffy way? Write the dream customer feedback tweet. How might the good news change in the short, mid, and long term as we realize the benefits?
- Every idea has a “backstory”. What’s the backstory here? How might you describe this effort to a new team member without the backstory?
- Explain how this connects to the broader company strategy. Why is this a critical part/piece of the puzzle? Together with other initiatives, are we telling a cohesive story?
- Why now? Why is this the most important problem to solve right now? How might the financial outcome be different if we did this in six months, one year, or never? Explain how it “beats” a handful of other things you are considering.
- You’re about to occupy some % of the careers of a couple fellow human beings. Why should they come along for this adventure?
- Do you imagine a team sticking around to push the actual benefits here? Or do you expect a hand-off? Or a hybrid? What early indicators might indicate that we’ve placed a good bet and would signal that it is safe to move on to other things?
- Let’s say we don’t do this. What will actually happen to the business in the short, mid, and long term? To our customers/users/partners/team?
- Does this effort rely on other efforts to be successful? Describe how the efforts are related. If they are truly dependent, can/should we pursue them concurrently, or combine them somehow?
- Challenge yourself to cut the scope here by 75%. Would that deliver some value? Should we pursue that first, even if it expands the overall scope a bit?
- How much money are we losing each week (new opportunities or cost savings) by not solving this problem? How does that compare to the money we are losing each week by not solving other problems?
- In the spirit of challenging the sunk-cost fallacy, what might happen part-way through this effort that would persuade you to stop work?
- Describe the various forces and shifts that must come together to make this successful. What do we control? What don’t we control? What can we influence?
- Play your own Devil’s Advocate for a moment. Give me three good reasons why this isn’t a good idea. Now give me three good reasons why solving another problem is a better idea.
- Who will this impact? If I wanted to identify the customers/users this will impact, what query would I run? How might I quantify the impact over time?
- Can we design some safe-to-fail experiments to help us solve this problem? Overall, how can we expand our “portfolio” of bets here, and get faster feedback?
- Can you give a brief summary of the data and insights underpinning this bet? How did this data, and these insights inform your beliefs?
- What are the known unknowns here?
- Where are you making leaps of faith in terms of user behavior? What is your plan to close the learning gap? When will you get this into the hands of real humans, with real data, trying to do their real job?
- Do you have a plan for regular usability testing? How often? How early? Have you set aside time to act on what you learn during these tests?
- How will you instrument your solution to measure outcomes and learn?
- What’s your plan to work “end-to-end” across the problem and the solution, such that we don’t arrive, finally, at a solution and discover the parts don’t fit together as expected?
- What is the behavior you hope to change? What will customers/users do more of, less of, start doing, and stop doing as a result of this work? How will that behavior change benefit the customer/user and the company?
- What information would make solving this problem easier? Are we missing insights that might improve our “batting average” here? How might we obtain that information?
- What problem might we solve, such that this problem would be a non-factor? Why aren’t we trying to solve that problem?
- How will we measure the impact and success of this effort in the short, mid, and long term?
- What is your plan to regularly reduce “benefits risk” (the risk this effort will not achieve the desired benefits) as the effort progresses?
- How might you describe the various other risks in this effort? How will you incrementally reduce those risk levels?
- What must we “get right” to succeed at this effort? Where can we be less-than-awesome, and still succeed? What should we ignore? What can we “suck at”?
- Who do we need involved to make this a success? Any special skills? Any special insights?
- What assumptions must hold true for this initiative to remain the most important thing we can work on?
- Is this the lowest hanging fruit? If I asked your team to spend the next week fixing “small things with a big impact” would this top the list? Would it have a greater cumulative value? Say you only had two weeks to solve the problem (or chip away at the problem)…what would you try?
- Can you commit to a “pivot/proceed” decision point? When will we stop iterating on this? Please draw a line in the sand.
- It is six months from now and this effort has failed. Describe three plausible reasons why it failed. Tell a good story.
- What is the leap of faith here? What must I believe without supporting data?

And that is that. Happy One-Pagering.