In the realm of strategy, Design Thinking begins with deep observation (the foundation of the Netflix success story, as discussed in Post 01) and moves through intensive prototyping (turning ideas into concrete blueprints, as detailed in Post 02). Yet, the act of designing a magnificent new Detailed Business Model (DBM) is only half the battle. A strategy built on abstract creativity alone is merely a hypothesis, floating precariously above the solid ground of market reality.
The core challenge facing strategy designers is managing strategic uncertainty. This involves reconciling the inherent human tendency to overestimate one’s own market knowledge with the need for senior executives to make decisions without demanding absolute, 100% certainty.
The solution lies in the fourth, crucial phase of the Design Thinking for Strategy (DTS) methodology: Validating (V).
Validation is the forward-looking, confirmatory process that uses targeted, experiment-based testing to manage this uncertainty. It provides enough judgmental insights to convince decision makers that potential flaws have been identified, mitigated, or ruled out, reducing strategic risk to an acceptable level. This is how a firm tests the viability of its strategic blueprints against the seismic, shifting pressures of the real world—the tectonic plates of technology, customer behavior, and competition.
This post will explore the disciplined process of formulating assumptions, designing rigorous experiments, and applying these tools to ensure that a designed strategy is simultaneously Desirable, Feasible, and Viable.
I. The Criticality of Managing Uncertainty
The reliance on backward-looking, data-driven, deductive strategy approaches often fails in a rapidly changing environment. This failure is compounded by two common strategic mistakes:
- The flaw of false positives: Executives frequently believe they know their customers and markets better than the data or the customers themselves, leading to the development of offerings for which there is no genuine market need. As noted by CB Insights, a staggering 42% of start-ups fail because there is no market need for their products or services. Failed products like Sony’s Betamax, Microsoft’s Zune, and Apple’s Newton are historic reminders of this pitfall.
- The paralysis of perfection: At the opposite extreme, many decision makers refuse to move forward unless they are 100% convinced that a strategic change will be successful, demanding validation after validation to eliminate all business risk. This high degree of risk aversion is inherently at odds with successfully leading organizations into the future.
Strategy validation aims to find the productive middle ground: providing sufficient evidence to move forward quickly by focusing on identifying potential weaknesses, rather than merely confirming initial beliefs.
Validating vs. Testing
In traditional academic research, validation often means translating a belief into a quantifiable hypothesis that is tested using backward-looking statistical methods based on historical data. In DTS, validation differs fundamentally:
- Focus: It is forward-looking, relying on judgmental insights gathered until the marginal added knowledge from further testing is negligible.
- Goal: The primary goal of any strategy experiment is to try hard to invalidate the assumption, rather than to prove its validity.
- Confidence: The aim is to achieve high confidence in the decision (e.g., reaching a positive answer with roughly 80% certainty, backed by sound explanations) rather than achieving statistical precision.
II. Formulating and Classifying Assumptions
Before validation can commence, the strategy design team must clearly define the implicit or explicit beliefs upon which the designed DBM prototype rests.
The Three Types of Assumptions
A successful DBM prototype relies on dozens of assumptions that must be explicitly formulated. Assumptions are classified into three types:
- Element-based assumptions: These relate to the validity of the description of specific elements within the DBM.
- Example: Whether the firm has correctly identified the most important Customer Segments (CS) or Products and Services (OPS).
- Relationship-based assumptions: These are the most common type and relate to the causal descriptions of the relationships between different DBM elements.
- Example: Do targeted customers (CS) value the specific Value Proposition (OVP) strongly enough to purchase the offering (OPS)?
- Externality-based assumptions: These relate to the dependencies between the DBM elements and the external environment (e.g., regulations, suppliers, partners).
- Example: Is the firm’s competitive advantage dependent on a key supplier continuing to offer favorable terms (supplier/capability link)?
Only those assumptions that have a material impact on the validity of the business model and for which confidence is insufficient should be formulated and prioritized for testing.
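To make this prioritization tangible, here is a minimal Python sketch (the 1–5 scoring scale and all field names are illustrative assumptions, not part of the DTS methodology) that ranks assumptions so the high-impact, low-confidence beliefs are tested first:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    description: str
    kind: str        # "element", "relationship", or "externality"
    impact: int      # 1-5: how badly the DBM breaks if this belief is false
    confidence: int  # 1-5: how much evidence the team already holds

def prioritize(assumptions: list[Assumption]) -> list[Assumption]:
    # Test first where impact is high and existing confidence is low.
    return sorted(assumptions, key=lambda a: a.impact * (6 - a.confidence), reverse=True)
```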
Application Case: The Digital Retail Bank
Consider a suburban retail bank that chooses an Offerings strategic focus, aiming to become a purely digital bank relying solely on technology (e.g., mobile apps) to deliver services.
This strategic design generates critical assumptions across all three categories:
| Assumption Type | Strategic Element | Example Assumption |
|---|---|---|
| Element-based | Customer Segments (CS) | Are there enough young, mobile-addicted customers (18–30) in the suburbs to sustain the bank? |
| Relationship-based | CS → OVP | Are targeted customers willing to contract a first-time mortgage via a mobile phone app without human interaction? |
| Externality-based | Financials → Suppliers | Are gas stations willing to function as human-serviced ATMs for a reasonable fee? |
III. The Experiment Framework
Designing experiments requires creativity but must adhere to specific constraints to maintain agility. A key principle is trying hard to invalidate the assumption, rather than merely confirming what is already believed to be true.
The 5-5-5 Rule
A well-designed strategy experiment typically adheres to the 5-5-5 Rule: it requires no more than 5 weeks to perform, costs no more than $5,000, and involves no more than 5 strategy design team members (including decision makers). For lower-impact assumptions, the timeline may be reduced further.
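Expressed as code, the rule reduces to a simple pre-flight check on any proposed experiment plan (a minimal sketch; the function and parameter names are hypothetical):

```python
def satisfies_5_5_5(weeks: float, cost_usd: float, team_size: int) -> bool:
    """Check an experiment plan against the 5-5-5 Rule: at most 5 weeks,
    at most $5,000, and at most 5 team members (including decision makers)."""
    return weeks <= 5 and cost_usd <= 5_000 and team_size <= 5
```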
Experiment Design Template
A structured experiment design includes:
| Element | Description |
|---|---|
| Experiment Activities | The specific actions taken (e.g., presenting mock-ups, conducting interviews) |
| Target Population | Who will be tested (e.g., recent mortgage holders, target customer segment) |
| Measurement Criterion | What counts as success (e.g., count of “yes” or “maybe” responses) |
| Decision Threshold | The bar for accepting or rejecting the assumption (e.g., 80% positive) |
Application: Digital Bank Mortgage App Validation
Assumption: Customers are willing to contract their first-time mortgage via a mobile phone app without human interaction or support.
| Element | Design |
|---|---|
| Experiment Activities | Informants who recently contracted a mortgage by visiting a bank are presented with a mock-up of the mobile app interface and navigate the full application process. They are allowed to ask clarifying questions until they are confident in the process. |
| Target Population | Homeowners who have recently obtained a mortgage via traditional bank branch visits. Initial sample size of 25, increasing up to 100 if results are inconclusive. |
| Measurement Criterion | Count the informants who answer the final question, “Would you be willing to use such an app and follow this proposed process?”, treating ‘yes’ and ‘maybe’ as positive responses. |
| Decision Threshold | Accept the assumption if 80% of the initial 25 informants respond positively. For larger populations (over 25), accept if 75% respond positively. |
This approach provides quantifiable evidence of customer acceptance (Desirability) without requiring the bank to build the costly, complex back-end system. If the experiment fails, the strategy team must revert to the Designing step to prototype alternative ideas and re-validate them.
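For readers who prefer the decision rule in executable form, here is a minimal Python sketch of the measurement criterion and the two-tier threshold from the table above (the function name and the encoding of responses as strings are assumptions):

```python
def accept_assumption(responses: list[str]) -> bool:
    """Apply the experiment's decision rule: 'yes' and 'maybe' count as
    positive; require 80% positives for the initial sample of 25,
    and 75% if the sample was extended beyond 25."""
    positives = sum(r.lower() in ("yes", "maybe") for r in responses)
    threshold = 0.80 if len(responses) <= 25 else 0.75
    return positives / len(responses) >= threshold

# Example: 17 'yes' + 4 'maybe' out of 25 informants -> 84% >= 80% -> accept
print(accept_assumption(["yes"] * 17 + ["maybe"] * 4 + ["no"] * 4))  # True
```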
IV. The Arsenal of Experimentation Tools
Although the space for experiment design is limitless and requires creativity, most strategy experiments fall into four categories, listed in decreasing order of typical strategic relevance.
1. Mock-up or Prototype Feedback
Mock-up or prototype experiments involve presenting the informant with a tangible (physical or digital) representation of the assumption being validated. This is especially useful for validating offering features and distribution channels and is preferred over questioning alone, as it minimizes potential biases introduced by the interviewer.
- Application example: To test the appeal and usability of the digital bank’s mortgage app (Feasibility/Desirability), a sequence of screen masks is presented to informants to simulate navigating the application process, surfacing friction points along the way.
2. Confirmatory Interviews
In contrast to explanatory or ethnographic interviews (which are divergent and seek broad insights), confirmatory interviews are convergent, focusing on closed-ended questions that rephrase the assumptions to be validated. The power of this technique lies in understanding why an assumption is accepted or rejected.
During a confirmatory interview, the discussion centers on five dimensions:
- Do you agree or disagree with the assumption?
- Why do you come to your conclusion? Which insights impact your decision?
- What would make you change your mind?
- What missing information would solidify your opinion?
- What attributes were irrelevant to your decision?
- Application example (digital bank cash access): An informant disagrees with the assumption that customers want “cash availability at any time.” The crucial insight gathered is that the need is specifically for cash early morning (for coffee) or late evening (for home-delivered pizza). This allows the designers to rephrase the assumption (e.g., changing “at any time” to “6 AM to midnight”) or adjust the DBM to reflect proximity and convenience (e.g., adding grocery stores as an alternative to gas stations).
3. Split Testing (A/B Testing)
Split tests are used when the goal is to validate possible alternatives (e.g., A vs. B) rather than answering a pure yes/no question. They are commonly applied to test assumptions around offering features, packaging, combinations of characteristics, and pricing models.
- Application example: A split test may be used to find out whether customers prefer paying up-front, in installments, or only after the product or service has been fully delivered, as sketched below.
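To illustrate how the results of such a split test might be read, here is a hedged Python sketch using a standard two-proportion z-test (a conventional statistical technique, not something the DTS methodology prescribes), with purely hypothetical conversion figures:

```python
from math import sqrt

def split_test_winner(conv_a: int, n_a: int, conv_b: int, n_b: int,
                      z_crit: float = 1.96) -> str | None:
    """Compare two variants (e.g., up-front vs. installment pricing) and
    return the winner only if the difference in conversion is significant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    if abs(z) < z_crit:
        return None  # inconclusive: extend the test or redesign it
    return "A (up-front)" if z > 0 else "B (installments)"

# Hypothetical: 38/200 conversions for up-front vs. 22/200 for installments
print(split_test_winner(conv_a=38, n_a=200, conv_b=22, n_b=200))  # A (up-front)
```

Given the 5-5-5 Rule’s small budgets and samples, the significance threshold here is illustrative; with small samples, the judgmental reading of why informants chose a variant matters more than the z-score.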
4. Surveys
Surveys offer the benefit of reaching a large informant population with minimal effort. However, they are fraught with risk: since the informant cannot be observed or questioned, the quality of the formulated questions is paramount to avoid bias and to elicit honest, complete answers. Surveys should include confirmatory questions to test the internal consistency of the answers, as sketched below.
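As a sketch of such a consistency check, assume Likert-scale answers (1–5) and reverse-coded confirmatory rephrasings of key questions (both are illustrative assumptions):

```python
def is_inconsistent(answers: dict[str, int], pairs: list[tuple[str, str]],
                    scale_max: int = 5, tolerance: int = 1) -> bool:
    """Flag a respondent whose answer to a question and to its reverse-coded
    confirmatory rephrasing diverge by more than `tolerance` points.
    `answers` maps question IDs to Likert ratings (1..scale_max)."""
    for question, confirm in pairs:
        expected = (scale_max + 1) - answers[confirm]  # un-reverse the confirmatory item
        if abs(answers[question] - expected) > tolerance:
            return True
    return False

# A respondent who strongly agrees with Q1 (5) but also strongly agrees with
# its reverse-coded rephrasing Q1R (5) is answering inconsistently.
print(is_inconsistent({"Q1": 5, "Q1R": 5}, [("Q1", "Q1R")]))  # True
```

Respondents flagged this way can be excluded or re-contacted before their answers feed into a validation decision.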
V. Top-Down Validation: The Final Consistency Checks
After bottom-up validation of individual assumptions, the final step in the validation process (V.6) involves top-down tests to ensure the overall consistency of the entire designed Detailed Business Model prototype. A strategy must meet all three criteria holistically, as it is a mistake to assume that if a model is Desirable, it is automatically Viable and Feasible.
1. Validating Desirability (Customer & Offerings)
The Desirability check confirms that the offerings are sought after by the target customers and that they satisfy their needs and support their jobs-to-be-done (JTBD). Key assumptions tested here include:
- Are there enough customers in the target segments who perceive the Value Proposition as unique or superior?
- Does the Value Proposition cover enough JTBD to trigger a buying decision?
2. Validating Viability (Financials)
The Viability check ensures that the firm can generate a sustainable profit. This is paramount for any strategy with a Financials strategic focus. Key assumptions tested here include:
- Are customers willing to pay a price that exceeds the incurred costs?
- Is the pricing model appropriate from the firm’s perspective (e.g., expected revenues exceed incurred costs)?
- Will a sufficiently large number of customers buy the offering to cover fixed expenses and investments? (See the break-even sketch below.)
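The last bullet is, at its core, a break-even calculation: the number of customers needed to cover fixed expenses equals fixed costs divided by the contribution margin per customer. A minimal sketch with purely hypothetical digital-bank figures:

```python
def break_even_customers(fixed_costs: float, price: float, variable_cost: float) -> float:
    """Customers needed per period: fixed costs divided by the
    contribution margin (price minus variable cost) per customer."""
    margin = price - variable_cost
    if margin <= 0:
        raise ValueError("Price must exceed variable cost for the model to be viable.")
    return fixed_costs / margin

# Hypothetical: $2.4M annual fixed costs, $900 annual revenue per customer,
# $300 variable servicing cost -> 4,000 customers needed to break even.
print(break_even_customers(2_400_000, 900, 300))  # 4000.0
```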
3. Validating Feasibility (Capabilities)
The Feasibility check confirms that the firm can actually deliver the promises made at the quality level expected by the customers. This is crucial for strategies focused on Capabilities. Key assumptions tested here ensure:
- The required activities (KA) can be performed.
- The necessary resources (KR) (labor, capital, skills) are available at acceptable costs.
- The firm’s internal value chain processes (e.g., supply chain) support the desired delivery quality.
Unless the DBM involves entirely untested inventions or aims at radical disruption, Feasibility is generally the easiest trait to confirm.
The validation process, with its iterative loops back to Learning (L) or Designing (D), ensures that the chosen strategy is not merely an abstract vision but a robust, tested model capable of withstanding the complex reality of the market. By prioritizing high-impact, low-effort validation experiments, DTS empowers decision makers to manage strategic uncertainty effectively, allowing them to fail fast to succeed faster.
Summary
| Concept | Description |
|---|---|
| Validation Phase | Forward-looking, confirmatory process using judgmental insights and experiments to reduce strategic uncertainty |
| Assumption Classification | Element-based, relationship-based, and externality-based assumptions must be prioritized and tested |
| 5-5-5 Rule | Experiments should take no more than 5 weeks, cost no more than $5,000, and involve no more than 5 team members |
| Experiment Types | Mock-ups, confirmatory interviews, A/B testing, and surveys to validate Desirability, Feasibility, and Viability |
| Decision Maker Involvement | Executives must experience validation firsthand for higher confidence in strategic decisions |
| Holistic Checks | Final validation ensures the strategy is Desirable, Feasible, Viable, and competitively differentiated |
