Phases of the Systems Decision Process: Problem Definition, Solution Design, Decision Making, Implementation
The High Cost of Solving the Wrong Problem
History is replete with examples where immense effort resulted in failure because the problem was incorrectly defined. Confederate General Robert E. Lee’s decision at Gettysburg to pursue a decisive engagement to destroy the Union Army, rather than aiming to destroy the North’s will to fight, illustrates a potentially incomplete definition of the strategic problem. Similarly, when U.S. auto manufacturers faced competition from Japan, they initially defined the problem as competing on cost alone, failing to recognize that superior quality, not cost, was driving Japanese success. An incomplete problem definition wastes considerable time and resources by pursuing solutions that fail to address the core issue.
The Problem Definition phase is the first and most crucial step in the Systems Decision Process (SDP). This phase, symbolically coded RED to signal “Stop and think,” mandates a thorough, deliberate effort to define the complete problem before moving to solution generation. This deliberate pause ensures that all subsequent energy and resources are directed toward achieving true stakeholder value.
Defining the Problem: The Essential First Task
The ultimate goal of the Problem Definition phase is to produce a clearly refined problem statement, a set of mandatory constraints (screening criteria), and an initial quantitative value model for evaluating potential solutions. This outcome requires executing three interconnected tasks: Research and Stakeholder Analysis, Functional and Requirements Analyses, and Value Modeling. Successful execution requires iterating and integrating insights from each task.
Systems engineers must accept that the initial problem defined by a client is never the final, complete problem. Therefore, thorough research is necessary to understand the problem domain, identify relevant disciplines, and uncover initial requirements and constraints. This foundation enables the subsequent, more sensitive task of engaging the human elements of the system: the stakeholders.
The Analytical Core: Structured Inquiry into Needs and Values
Research and Stakeholder Analysis
Stakeholder analysis forms the bedrock of the problem definition, as stakeholders—individuals or organizations with a vested interest—are the source of needs, constraints, and values. Key groups include the decision authority (the ultimate approver), the client (who pays), the user (who operates the system), and the consumer (who benefits). Thorough analysis ensures the team understands the decision maker’s objectives and the various environmental factors—political, economic, and technological—that influence the system.
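As a rough illustration only, the stakeholder roles described above can be captured in a simple register. The Python sketch below is a hypothetical structure, not drawn from the text; the example names and concerns are assumptions for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    DECISION_AUTHORITY = "approves the recommendation"
    CLIENT = "pays for the system"
    USER = "operates the system"
    CONSUMER = "benefits from the system's products or services"

@dataclass
class Stakeholder:
    name: str
    role: Role
    concerns: list[str] = field(default_factory=list)  # needs, constraints, and values voiced so far

# Hypothetical register for a notional project
register = [
    Stakeholder("Program executive office", Role.DECISION_AUTHORITY, ["schedule", "affordability"]),
    Stakeholder("Field operators", Role.USER, ["reliability", "ease of maintenance"]),
]
```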
Three primary techniques are available for soliciting stakeholder input, each suited to different levels of participation and data needs. Interviews are ideal for senior leaders, providing individual, in-depth perspective, often guided by a tailored questionnaire with unfreezing and closing questions. Focus groups are valuable for quick collective insight from groups with common backgrounds, generating information through discussion; they are often enhanced by groupware such as GroupSystems to ensure anonymity and prevent dominant individuals from steering the discussion. Surveys are best for gathering quantitative data from large, geographically dispersed groups, though care must be taken with question design and sample size to ensure statistical validity. The resulting notes are analyzed by binning comments to generate findings, conclusions, and recommendations, confirming the facts and assumptions underlying the problem.
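To make the binning step concrete, the sketch below groups raw interview or focus-group comments under hypothetical themes using keyword matching; in practice the bins, keywords, and assignments would come from the analysis team's judgment, so everything here is illustrative.

```python
from collections import defaultdict

# Hypothetical bins and trigger keywords chosen by the analysis team
bins = {
    "cost": ["budget", "price", "afford"],
    "quality": ["defect", "reliable", "durability"],
    "schedule": ["deadline", "delay", "timeline"],
}

def bin_comments(comments: list[str]) -> dict[str, list[str]]:
    """Assign each comment to every bin whose keywords it mentions."""
    grouped = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        for theme, keywords in bins.items():
            if any(k in lowered for k in keywords):
                grouped[theme].append(comment)
    return dict(grouped)

binned = bin_comments([
    "We cannot afford another delay in fielding.",
    "Durability in cold weather is the top concern.",
])
# Binned comments are then read across themes to draft findings, conclusions, and recommendations.
```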
Functional Structuring and Requirement Definition
Once stakeholder objectives are gathered, systems engineers must define exactly what the system must do to meet those objectives. Functional analysis systematically identifies the characteristic tasks, actions, or activities a system must perform. This process begins with developing a functional hierarchy, a tiered structure created via techniques like affinity diagramming, where functions (described using a verb and an object) are grouped logically without specifying the physical element performing the task.
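A functional hierarchy can be represented as a simple tree of verb-object functions. The sketch below is a notional example for a hypothetical transport system; the function names are assumptions for illustration, not taken from the text.

```python
# Notional functional hierarchy: each function is a verb-object phrase,
# decomposed into subfunctions without naming the physical element that performs it.
functional_hierarchy = {
    "Provide transport": {
        "Move cargo": {
            "Load cargo": {},
            "Carry cargo": {},
            "Unload cargo": {},
        },
        "Sustain operations": {
            "Supply power": {},
            "Perform maintenance": {},
        },
    }
}

def print_hierarchy(node: dict, depth: int = 0) -> None:
    """Print the tiered structure produced by affinity diagramming."""
    for function, subfunctions in node.items():
        print("  " * depth + function)
        print_hierarchy(subfunctions, depth + 1)

print_hierarchy(functional_hierarchy)
```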
Progressing to greater detail, Functional Flow Diagrams illustrate the relationships and interfaces between subfunctions, breaking down the system until discrete tasks can be allocated to system elements. The comprehensive IDEF0 model goes further, specifying inputs, outputs, controls, and mechanisms (ICOMs) for each function, establishing a formal architecture for design. This functional architecture is key for guiding both system design and the development of subsequent models and simulations.
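The ICOM idea can be sketched as one small record per function, listing its inputs, controls, outputs, and mechanisms. The Python example below is a minimal, hypothetical rendering rather than a formal IDEF0 model; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Idef0Function:
    """One box in an IDEF0 diagram together with its ICOMs."""
    name: str                                             # verb-object function name
    inputs: list[str] = field(default_factory=list)       # things transformed by the function
    controls: list[str] = field(default_factory=list)     # constraints or guidance on the function
    outputs: list[str] = field(default_factory=list)      # results the function produces
    mechanisms: list[str] = field(default_factory=list)   # people, systems, or tools that perform it

# Notional example for a cargo-handling subfunction
load_cargo = Idef0Function(
    name="Load cargo",
    inputs=["palletized cargo"],
    controls=["loading procedures", "weight limits"],
    outputs=["secured cargo manifest"],
    mechanisms=["loading crew", "forklift"],
)
```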
Simultaneously, requirements analysis translates stakeholder needs into specific, technical specifications. Requirements are categorized as capabilities (desired features) or constraints (mandatory screening criteria). Constraints, such as a minimum effective range or mandatory launch platform type for a rocket system, are non-negotiable and immediately used to screen alternatives for feasibility. Successful requirements analysis transforms high-level operational requirements into measurable engineering characteristics, ensuring clarity and traceability.
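Because constraints are binary screening criteria, candidate solutions can be checked against them before any value scoring takes place. The sketch below assumes a hypothetical rocket-system example echoing the constraints mentioned above; the numeric threshold and platform name are illustrative, not from the source.

```python
# Hypothetical screening criteria; the threshold and platform are illustrative.
MIN_EFFECTIVE_RANGE_KM = 40
REQUIRED_PLATFORM = "wheeled launcher"

def is_feasible(alternative: dict) -> bool:
    """An alternative passes screening only if it satisfies every constraint."""
    return (
        alternative["effective_range_km"] >= MIN_EFFECTIVE_RANGE_KM
        and alternative["launch_platform"] == REQUIRED_PLATFORM
    )

alternatives = [
    {"name": "Concept A", "effective_range_km": 55, "launch_platform": "wheeled launcher"},
    {"name": "Concept B", "effective_range_km": 30, "launch_platform": "wheeled launcher"},
]
feasible = [a["name"] for a in alternatives if is_feasible(a)]  # ["Concept A"]
```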
Modeling Stakeholder Value
The final task structures how the team will evaluate the goodness of a solution, moving beyond mere feasibility to quantify stakeholder preference. This Value Modeling task utilizes Value-Focused Thinking principles to build a quantitative methodology grounded in Multiple Objective Decision Analysis (MODA).
The process establishes a Qualitative Value Model, which links the overarching Fundamental Objective (the primary reason for the decision) to specific Objectives (statements of preference, e.g., maximize efficiency), and finally to Value Measures. Measures must be collectively exhaustive and mutually exclusive, ensuring the model is complete without redundancy. Measure scales also matter: a natural scale that directly measures attainment of an objective is preferred to a constructed or proxy scale.
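The qualitative value model is itself a hierarchy and can be sketched the same way: a fundamental objective decomposed into objectives, each supported by one or more value measures. The example below is notional; the objective and measure names are assumptions for illustration.

```python
# Notional qualitative value model: fundamental objective -> objectives -> value measures.
# Taken together, the measures should be collectively exhaustive and mutually exclusive.
qualitative_value_model = {
    "fundamental_objective": "Select the best tactical transport system",
    "objectives": [
        {
            "objective": "Maximize mobility",
            "value_measures": [
                {"measure": "Cruise speed", "scale": "natural", "units": "km/h"},
                {"measure": "Off-road capability", "scale": "constructed", "units": "0-5 rating"},
            ],
        },
        {
            "objective": "Maximize availability",
            "value_measures": [
                {"measure": "Mean time between failures", "scale": "natural", "units": "hours"},
            ],
        },
    ],
}
```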
The Quantitative Value Model then defines the mathematical structure for evaluation. Value Functions convert a solution’s score (e.g., kilometers) on a measure into a standardized, dimensionless unit of value (e.g., 0 to 100) and capture returns to scale. Measure Weights (global weights) are determined using swing weights, reflecting both the measure’s importance and the impact of variation across its range on the decision. These components combine in the Additive Value Model (Equation 10.1) to calculate a total value score for any alternative.
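Equation 10.1 is not reproduced here, but in the standard MODA formulation the additive value model takes the form v(x) = Σ_i w_i · v_i(x_i), with the swing weights w_i normalized to sum to 1, where x_i is the alternative's raw score on measure i and v_i is that measure's value function. The Python sketch below computes this total value for a notional alternative; the measures, piecewise-linear value functions, and weights are assumptions for illustration only.

```python
import numpy as np

# Notional value measures with piecewise-linear value functions (raw score -> 0..100 value)
# and normalized swing weights; all numbers are illustrative assumptions.
value_functions = {
    "effective_range_km": ([40, 60, 80], [0, 70, 100]),    # diminishing returns above 60 km
    "unit_cost_musd":     ([2.0, 3.5, 5.0], [100, 60, 0]), # lower cost yields higher value
}
swing_weights = {"effective_range_km": 0.6, "unit_cost_musd": 0.4}  # must sum to 1

def total_value(scores: dict) -> float:
    """Additive value model: v(x) = sum_i w_i * v_i(x_i)."""
    total = 0.0
    for measure, score in scores.items():
        xs, vs = value_functions[measure]
        # np.interp evaluates the piecewise-linear value function for this measure
        total += swing_weights[measure] * np.interp(score, xs, vs)
    return total

candidate = {"effective_range_km": 55, "unit_cost_musd": 3.0}
print(round(total_value(candidate), 1))  # single 0-100 value score for the alternative
```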
The Framework for Better Decisions
The Problem Definition phase provides the comprehensive framework needed for all subsequent design and evaluation efforts. By the time the team moves to the Solution Design phase, they possess the essential outputs: a clear problem statement, a binding set of constraints (screening criteria), and a weighted quantitative value model. This thorough preparation mitigates the risk of expending resources on a flawed approach and sets the stage for achieving high decision quality.
