The High Price of Untested Belief

Managers commonly rely on intuition and experience-based insight, which often leads them to apply solutions to problems they have yet to fully understand. This default style is characterized by the plunging-in bias: rushing toward a solution before gathering data, generating alternatives, or engaging in analysis. Conversely, some highly analytical managers become paralyzed, agonizing over alternatives and over-analyzing a problem until the moment for effective action has passed. Both extremes—the impetuous and the overly cautious—share a common vulnerability: making decisions based on untested, unvalidated assumptions.

The real-world consequences of letting intuition replace empirical testing can be severe. In one case, an entrepreneurial CEO faced cash-flow issues because clients consistently paid late. Based on anecdotal feedback from his sales team, he assumed that lengthening the standard payment term from 30 to 60 days would satisfy clients and encourage timely compliance. The opposite occurred: clients who were already late simply delayed payment even further, and those who previously paid on time began using the full 60 days, imposing significant costs and problems on the company. A small, inexpensive study conducted later by MBA students confirmed the counterintuitive truth: longer payment terms correlated positively with longer payment delays. This outcome underscores a core principle of critical management: assumptions, especially those that contradict market behavior or base rates, must be tested empirically before implementation.
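The kind of correlation the MBA students found can be checked in a few lines of plain Python. The sketch below computes a Pearson correlation from first principles; the payment data are entirely hypothetical, since the study's actual figures are not given.

```python
# Hypothetical check of the payment-term finding: do longer agreed terms
# correlate with longer delays past the deadline?

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical sample: (agreed payment term in days, days paid past the term)
terms  = [30, 30, 30, 30, 60, 60, 60, 60]
delays = [ 5, 10,  0,  8, 20, 15, 25, 12]

r = pearson_r(terms, delays)
print(f"correlation between term length and delay: {r:.2f}")
```

A strongly positive r here would mirror the study's conclusion: generous terms attract longer delays rather than timely compliance.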

The Scientific Mandate in Management

The foundational attitude for sound decision-making is evidence-based management, which is guided by the intellectually humble principle: “I could be wrong”. By adopting the mindset and tools of the scientific method, managers transform untested assumptions into falsifiable hypotheses that can be investigated.

The scientific method, though complex in its philosophical origins, is straightforward in its application to business strategy, following five steps:

  1. Observe: Noticing a phenomenon (e.g., gaming accessories are often ordered as single items online).
  2. Identify a Pattern: Confirming that the observation recurs (e.g., tech items are generally ordered one at a time, while non-tech items are bundled).
  3. Formulate a Hypothesis: Creating one or several competing explanations that define how an independent variable (X) influences a dependent variable (Y).
  4. Design and Conduct an Experiment: Testing the hypothesis (e.g., looking at correlations between age, price, and basket size).
  5. Evaluate Results: Accepting the hypothesis as a working theory or rejecting it.
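Steps 3 through 5 can be sketched in a few lines. The order records and the basket-size hypothesis below are hypothetical stand-ins for the gaming-accessory example; category is the independent variable (X), basket size the dependent one (Y).

```python
# Hypothetical order data for the tech-vs-non-tech basket hypothesis.
orders = [
    {"category": "tech",     "basket_size": 1},
    {"category": "tech",     "basket_size": 1},
    {"category": "tech",     "basket_size": 2},
    {"category": "non-tech", "basket_size": 3},
    {"category": "non-tech", "basket_size": 4},
    {"category": "non-tech", "basket_size": 2},
]

def mean_basket(category):
    """Average basket size for orders in the given category (X -> Y)."""
    sizes = [o["basket_size"] for o in orders if o["category"] == category]
    return sum(sizes) / len(sizes)

# Hypothesis (step 3): tech orders have smaller baskets than non-tech orders.
# Experiment (step 4): compare group means.  Evaluation (step 5): accept the
# hypothesis as a working theory, or reject it.
hypothesis_holds = mean_basket("tech") < mean_basket("non-tech")
print("working theory" if hypothesis_holds else "rejected")
```

In practice the evaluation step would also ask whether the difference is large and consistent enough to matter, not just whether the means differ in the sample.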

A central requirement for effective testing is avoiding the temptation of the immediate, large-scale launch. Instead of signing a lease and hiring staff based on the assumption that a coffee shop will succeed, a manager should test rapidly and cheaply—through surveys, analysis of competitive base rates, or even a temporary tasting event. A validated working theory offers high value because, as any good theory should, it explains past observations, predicts future behavior, and yields elegant insights.

Engineering a Controlled Experiment

Business experimentation need not match the double-blind complexity of clinical trials, but to yield reliable conclusions it must adhere to fundamental design principles that limit human bias.

First, hypotheses must be testable and clear, translating general propositions (e.g., “Volvo owners are an easier target”) into operationalized variables (e.g., “Volvo owners are more likely to buy a new Volvo than people who own other brands”). Second, experimentation requires the construction of control and treatment groups. The control group provides the necessary baseline, ensuring that any change observed in the treatment group is attributable to the variable being tested, not to general market conditions. Crucially, groups must be selected based on the independent variable (e.g., owning a Volvo vs. owning an Audi), never the dependent variable (e.g., grouping only people who bought a new car and looking backward), as this generates misleading results.

The case of the Volvo car dealer illustrates this refinement. The dealer, relying on instinct, focused marketing solely on existing Volvo owners. An experiment using two comparable groups (1,000 existing Volvo owners and 1,000 Audi owners) showed that while more Volvo owners visited the dealership, the non-Volvo owners (the control group) were more than 50% more likely to buy a car once they entered the store (a 28.6% conversion rate vs. 18.4%).
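The lift follows directly from the two rates reported in the text. The sketch below recomputes it; visitor counts per group are not reported, so only the relative difference can be reproduced here, not a significance test.

```python
# Conversion rates reported in the dealer experiment (among visitors).
volvo_rate = 0.184      # existing Volvo owners who visited and bought
non_volvo_rate = 0.286  # Audi owners (the control group) who visited and bought

# Relative lift: ~0.55, i.e. the "more than 50%" figure in the text.
lift = non_volvo_rate / volvo_rate - 1
print(f"non-Volvo visitors were {lift:.0%} more likely to buy")
```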

This unexpected finding proved that the manager’s approach was leaving money on the table and led to a strategic shift: focusing marketing on increasing dealership visits for non-Volvo customers. This demonstrates that testing not only validates hunches but frequently unveils a reality far more complex than anticipated.

The Pitfalls of Bias

Managers must constantly guard against their inherent inclination toward belief. Confirmation bias is the tendency to seek out evidence that confirms one’s existing hypothesis. The philosopher of science Karl Popper argued that the essence of scientific inquiry—and the core defense against confirmation bias—is falsifiability: deliberately designing experiments that could prove the theory wrong. Instead of seeking more white swans, the researcher must strive to find the black swan. Other dangers include measuring the wrong behavior (counting gym memberships instead of actual workouts) and relying on samples that are too small or unrepresentative.
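One way to see why small samples mislead is a back-of-the-envelope sample-size calculation. The sketch below uses the standard normal-approximation formula for comparing two proportions (5% significance, 80% power); it is a generic illustration, not a method described in the text.

```python
# Rough subjects-per-group estimate for detecting a difference between two
# conversion rates, using the normal approximation:
#   n = (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """z_alpha=1.96 -> 5% two-sided significance; z_beta=0.84 -> 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Detecting 18.4% vs. 28.6% (the dealer case) takes a few hundred visitors
# per group -- far more than a handful of anecdotes.
n = sample_size_per_group(0.184, 0.286)
print(f"about {n:.0f} visitors per group")
```

Smaller differences require dramatically larger samples, since the required n grows with the inverse square of the gap between the two rates.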

In the digital world, large-scale, controlled tests are efficiently executed through A/B testing, constantly optimizing elements like button color or phrasing. Dan Siroker, as Director of Analytics for the Obama 2008 campaign, used A/B testing on the website’s landing page elements. The combination favored by users contradicted the team’s own preference, ultimately resulting in millions more sign-ups and an additional $60 million in donations, demonstrating the immense value of submitting even confident hunches to rigorous, empirical testing.
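A common way to evaluate such an A/B test is a two-proportion z-test on the sign-up counts of the two page variants. The sketch below uses hypothetical counts and a standard textbook method; it is an illustration, not the campaign's actual analysis.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: variants A and B convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: current page; variant B: challenger (hypothetical counts).
z = two_proportion_z(conv_a=800, n_a=10_000, conv_b=900, n_b=10_000)

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A p-value below the chosen threshold (conventionally 0.05) is the signal to ship the winning variant; the decisive point in the campaign story is that the data, not the team's taste, picked the winner.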

Intellectual Humility as a Strategic Asset

The reluctance to test often stems from managerial overconfidence. However, testing offers the most benefit in two opposite scenarios: when we think we know exactly how to solve a problem (plunging-in) and when we are paralyzed by over-analysis. By adhering to the principles of controlled experimentation and adopting an attitude of intellectual humility—constantly entertaining the possibility that one is wrong—managers can transform the chaotic nature of the market into a reliable source of information. Whether through elaborate experiments, pilot studies, speaking directly to clients, or analyzing base rates, structured validation replaces hopeful speculation with solid evidence, ensuring decisions are built on rock, not sand.