PDSA Cycle

Plan-Do-Study-Act: The framework for pragmatic experiments and learning from failure

The Coach Who Wouldn’t Teach

In his seminal book The Inner Game of Tennis, coach Timothy Gallwey recounts a student’s frustration. After a missed shot, the student demanded instruction: “What should I do differently?” Gallwey’s response was counterintuitive: he wouldn’t say. His philosophy was that conscious, verbal instructions often interfere with the body’s innate, tacit learning ability. Muscle memory, proprioception, and balance form a complex system with far too many degrees of freedom for any coach to rationally decompose. Recovery from failure, he argued, must be directed by the player’s own sensory feedback. The coach’s role is not to provide a blueprint but to create conditions for “self”-organized learning. This insight extends far beyond tennis: in a complex, tacit world—whether swimming, innovating, or leading—prescriptive solutions fail. Progress requires a framework for learning from failures, not a manual for avoiding them.

The Limits of the Blueprint

Traditional engineering is built on the “blueprint” model: define a goal, identify parameters, and control the process to achieve a predictable outcome. This is system identification—understanding the fixed “mountains” around the “river” to navigate it. But what if you’re in the open ocean, with no fixed landmarks? Learning to swim is the quintessential example. The water’s behavior is unpredictable, and the body’s coordination is tacit. No amount of video analysis of Olympic champions will program your muscles. You can only learn by doing, by feeling the water, and by interpreting the feedback from countless failed attempts. This is the world of “pragmatic experiments,” in Charles Peirce’s pragmatist sense, operationalized by Walter Shewhart’s PDSA (Plan-Do-Study-Act) cycle: hypothesize an action, try it, study the outcome, and adapt. The goal isn’t to avoid error but to iterate rapidly within a safe boundary, using failure as the primary data source.
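The PDSA cycle described above can be read as an iterative search loop: plan an adjustment, try it, study the measured outcome, and act by keeping only what improved. A minimal sketch, assuming a hypothetical `score` function (lower is better) and a `candidate` generator that proposes variations; both names and the toy target are illustrative, not from the source:

```python
import random

def pdsa(score, candidate, best, cycles=50):
    """Iterate Plan-Do-Study-Act; lower score is better."""
    best_score = score(best)
    for _ in range(cycles):
        trial = candidate(best)        # Plan: hypothesize an adjustment
        trial_score = score(trial)     # Do + Study: try it and measure the outcome
        if trial_score < best_score:   # Act: adopt the change only if it improved
            best, best_score = trial, trial_score
    return best, best_score

# Toy example: approach an unknown target by directed trial and error.
random.seed(1)
target = 3.7
best, s = pdsa(score=lambda x: abs(x - target),
               candidate=lambda x: x + random.uniform(-0.5, 0.5),
               best=0.0, cycles=200)
```

Note that the loop never needs to know *why* a trial was better, only *that* it was—mirroring the article's point that failure itself is the primary data source.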

The Pattern of Progress

The central problem in such learning is measurement. How do you know if this attempt was better than the last? In business, the performance indicator is often revenue. In tacit skill acquisition, it’s elusive. This is where the Mahalanobis-Taguchi-Fukuda (MTF) approach provides a key innovation. It uses the Mahalanobis Distance (MD)—a unitless, multivariate measure of how far a given set of observations (e.g., sensor data from a swimmer’s motion) is from a “unit space” of healthy, effective patterns. The learner doesn’t need to know which joint angle to change; they simply see whether the MD score of their overall movement pattern goes up (worse) or down (better). It provides a holistic, quantitative performance indicator for a holistic, tacit activity. The tool doesn’t prescribe the solution; it illuminates the path, turning unorganized trial-and-error into organized, directed learning.
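The unit-space idea can be sketched numerically: estimate the mean and covariance of a set of reference observations (the “healthy, effective patterns”), then score any new attempt by its Mahalanobis Distance from that space. The synthetic reference data and variable names below are illustrative assumptions, not from the source:

```python
import numpy as np

# Hypothetical unit space: 200 reference patterns of 3 sensor features
# (stand-ins for measurements of effective strokes).
rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 3))

mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def mahalanobis_distance(x):
    """Unitless distance of one observation from the unit space."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

attempt = np.array([0.1, -0.2, 0.3])
score = mahalanobis_distance(attempt)
# A lower score means the whole pattern is closer to the effective ones;
# the learner sees this single number, not per-joint instructions.
```

Because MD collapses the full covariance structure into one scalar, it rewards coherent whole-pattern movement rather than any single corrected parameter—exactly the holistic feedback the passage describes.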

From Motion Control to Motor Wisdom

This pattern-based approach bridges the critical gap between “Motion Control” and “Motor Control.” Motion control, from an engineering perspective, seeks to replicate the final, reproducible trajectory of a successful movement—the last 10% of Bernstein’s cyclogram. But motor control is the body’s own, internal process of coordinating muscles, balance (vestibular system), and proprioception to find that trajectory through a cloud of possibilities. MTF’s pattern cognition supports motor control. By giving feedback on the whole pattern, it allows the learner’s instinctive system—the “octopus-like” intelligence of the body—to self-organize toward better solutions. It engineers the conditions for human wisdom to flourish, rather than attempting to replace that wisdom with a machine instruction set.

Failing Safely Toward Adaptation

The implication for modern systems—from AI training to organizational strategy—is profound. Our obsession with optimizing for success (minimizing error rates) in stable environments has made us fragile in the face of novelty. Instead, we must design systems for graceful failure and rapid adaptation. This means creating simulations, sandboxes, and pilot environments where pragmatic experiments can be run at low cost. It means valuing “loss functions” that measure robustness to disruption, not just efficiency at a single task. For the engineer, the most important product becomes not a thing, but a learning system—one that empowers its human users to explore, fail, learn, and adapt. In an age of sharp changes, the ability to learn from failures is no longer a soft skill; it is the core competitive advantage, and it must be engineered into the very fabric of our tools and organizations.


References

Bernstein, N. A. (1967). The co-ordination and regulation of movements. Pergamon Press.

Fukuda, S. (2019). Self engineering: Learning from failures. SpringerBriefs in Applied Sciences and Technology.

Gallwey, W. T. (1997). The inner game of tennis: The classic guide to the mental side of peak performance. Random House.

Peirce, C. S. (1878). The doctrine of chances. Popular Science Monthly.

Shewhart, W. A. (1939). Statistical method from the viewpoint of quality control. The Graduate School, USDA.