The Moment of Quantification
Napoleon Bonaparte once stated that nothing is more difficult, or more precious, than the ability to decide. In systems engineering, the Decision Making phase provides the necessary foundation for that difficult choice. After the Problem Definition phase established the value framework, and the Solution Design phase produced feasible and improved candidate solutions, the Decision Making phase must now definitively answer: which solution provides the best value for the resources?
This phase requires a comprehensive assessment of candidate solutions by quantifying their value, analyzing uncertainties, mitigating risks, and formally presenting the cost-value tradeoffs. The systems engineer’s role is crucial here—not to make the decision, but to present all essential, logically correct information to ensure the decision maker’s choice is sound and defensible.
The Analytical Core: Scoring Value and Assessing Uncertainty
Scoring Candidate Solutions
The process begins by calculating the score of every candidate solution against each established Value Measure. These scores can be derived using five primary methods, chosen based on the system’s life cycle stage and resource constraints. Operations (using a system in the real environment) and Testing (developmental or operational testing of prototypes) provide the most accurate, objective data but are typically cost-prohibitive for all solutions. Modeling (building a mathematical representation of the solution) and Simulation (exercising that representation over time) offer a less expensive, reproducible alternative, particularly useful for complex or stochastic systems. Finally, Expert Opinion is the quickest method, especially when data is scarce, though it requires rigorous sensitivity analysis to account for subjectivity.
Once scores are gathered in the raw data matrix, they are converted to a standardized, dimensionless unit of value (e.g., 0 to 100) using the established Value Functions. The total value ($v(x)$) for each candidate solution is then calculated using the Additive Value Model, combining the normalized value scores with the weighted importance (measure weights) of each function. The additive model allows the team to compare solutions across conflicting objectives, such as maximizing rocket mobility versus maximizing payload capacity.
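In standard value-focused-thinking notation (the symbols follow the common textbook form, since the text above does not spell them out), the additive value model is:

$$
v(x) = \sum_{i=1}^{n} w_i \, v_i(x_i), \qquad \sum_{i=1}^{n} w_i = 1,
$$

where $x_i$ is the candidate solution’s raw score on value measure $i$, $v_i$ is that measure’s value function mapping raw scores onto the 0-to-100 scale, and $w_i$ is the measure weight.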
Analyzing Sensitivity and Risk
A deterministic total value score is insufficient for a complex decision; systems engineers must analyze the robustness of their recommendation. Sensitivity Analysis tests the stability of the preferred solution by varying key assumptions. The most common test involves varying the Measure Weights (swing weights) to see if the preferred solution changes, especially where stakeholder disagreement exists. If the resulting value lines of competing solutions cross, the decision is sensitive to that weight, requiring stakeholders to resolve the underlying disagreement.
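As a sketch of how such a weight sweep might be automated, here is a minimal Python example; all scores and weights are hypothetical, and renormalizing the remaining weights in proportion to their original values is one common convention, not the only one:

```python
import numpy as np

# Hypothetical normalized value scores (0-100) for two candidate
# solutions across three value measures; numbers are illustrative only.
scores = {
    "Solution A": np.array([90.0, 40.0, 70.0]),
    "Solution B": np.array([50.0, 85.0, 65.0]),
}
base_weights = np.array([0.5, 0.3, 0.2])  # measure weights, sum to 1

def total_value(weights, measure_scores):
    """Additive value model: weighted sum of normalized scores."""
    return float(np.dot(weights, measure_scores))

# Sweep the weight on measure 0 from 0 to 1, renormalizing the
# remaining weights so they keep their relative proportions.
for w0 in np.linspace(0.0, 1.0, 11):
    rest = base_weights[1:] / base_weights[1:].sum() * (1.0 - w0)
    w = np.concatenate(([w0], rest))
    totals = {name: total_value(w, s) for name, s in scores.items()}
    best = max(totals, key=totals.get)
    row = "  ".join(f"{n}={v:6.1f}" for n, v in totals.items())
    print(f"w0={w0:.1f}  {row}  -> preferred: {best}")
```

With these invented numbers the preferred solution flips partway through the sweep, which is exactly the crossing of value lines the text describes: the decision is sensitive to that weight.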
Risk analysis then models the impact of uncertainty in the solution scores themselves. Monte Carlo Simulation is the primary tool for assessing the combined effect of multiple independent scoring uncertainties (e.g., uncertainty in inertial guidance performance). By sampling predefined probability distributions (like the triangular distribution) for each uncertain score across numerous runs, the simulation generates a probability distribution for the total value of each candidate solution. This analysis can reveal deterministic dominance, where the worst outcome of the best solution is still better than the best outcome of all others, providing high confidence to the decision maker.
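A minimal Monte Carlo sketch of this idea, using NumPy’s triangular sampler; the distribution parameters, weights, and run count are illustrative assumptions, not data from any real assessment:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 10_000                           # number of Monte Carlo runs
weights = np.array([0.5, 0.3, 0.2])  # measure weights (sum to 1)

# Each uncertain score is a triangular distribution (low, mode, high)
# on the 0-100 value scale; every number here is a placeholder.
tri_params = {
    "Solution A": [(70, 90, 95), (30, 40, 60), (60, 70, 80)],
    "Solution B": [(40, 50, 65), (75, 85, 95), (55, 65, 75)],
}

totals = {}
for name, params in tri_params.items():
    # Sample each uncertain measure score independently, then apply
    # the additive value model run by run.
    samples = np.column_stack(
        [rng.triangular(lo, mode, hi, size=N) for lo, mode, hi in params]
    )
    totals[name] = samples @ weights
    t = totals[name]
    print(f"{name}: mean={t.mean():.1f}  "
          f"5th-95th pct=[{np.percentile(t, 5):.1f}, "
          f"{np.percentile(t, 95):.1f}]")

# Deterministic dominance check: does A's worst run beat B's best run?
if totals["Solution A"].min() > totals["Solution B"].max():
    print("Solution A deterministically dominates Solution B.")
```

The resulting arrays are the probability distributions of total value; plotting them (e.g., as histograms or cumulative curves) shows whether the distributions overlap or one dominates outright.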
Value Improvement and Tradeoff Clarity
Even after initial scoring, the team must seek a better solution—a concept central to Value-Focused Thinking. By comparing the total value of the current best solution against the ideal solution (a hypothetical alternative that scores the best possible on every measure), the team identifies the “value gap” and uses the insights from other alternatives to formulate an improved candidate solution. This iterative improvement process uses the current solutions as a basis for designing superior architectures.
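Given the conventions above, the arithmetic of the value gap is simple: on a 0-to-100 scale with weights summing to one, the ideal solution scores $\sum_i w_i \cdot 100 = 100$, so the gap for the current best candidate $x^*$ is $100 - v(x^*)$. Decomposing it measure by measure as $w_i\,(100 - v_i(x_i^*))$ shows which value measures offer the most room for improvement and where an improved candidate solution should borrow ideas from the other alternatives.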
After all improvements and risk analyses, the final step involves comparing the calculated total value against the estimated Life Cycle Cost (LCC). The Cost/Benefit Plot graphically represents the tradeoff space, helping the decision maker identify solutions that are nondominated (no alternative offers equal or greater value at equal or lower cost). Dominated solutions, which deliver less value than an alternative that costs no more, should be eliminated from consideration, as in the sketch below.
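A compact sketch of the dominance filter; the cost and value figures are invented for illustration:

```python
def nondominated(solutions):
    """Return the solutions not dominated on the cost/value plot.

    A solution is dominated if some other solution has value >= its
    value and cost <= its cost, with at least one strict inequality.
    `solutions` maps name -> (life_cycle_cost, total_value).
    """
    keep = {}
    for name, (cost, value) in solutions.items():
        dominated = any(
            c <= cost and v >= value and (c < cost or v > value)
            for other, (c, v) in solutions.items()
            if other != name
        )
        if not dominated:
            keep[name] = (cost, value)
    return keep

# Invented (LCC in $M, total value) pairs for illustration only.
candidates = {
    "Solution A": (120.0, 85.0),
    "Solution B": (95.0, 78.0),
    "Solution C": (140.0, 74.0),  # dominated by A: costs more, worth less
}
print(nondominated(candidates))   # keeps A and B, drops C
```

A and B survive because neither beats the other on both axes at once; the choice between them is the genuine cost-value tradeoff the decision maker must make.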
For final clarity, the Decision-Focused Transformation (DFT) can be used to rescale the remaining nondominated solutions. DFT eliminates Common Value (value all remaining alternatives share) and Unavailable Value (value that none achieve), thereby maximizing the communication of Discriminatory Value—the portion of value that actually drives the final choice. This simplified graphical output enhances stakeholder communication and reinforces Commitment to Action.
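At the total-value level, the effect can be sketched as a linear rescaling (the published DFT operates measure by measure and recomputes weights; this one-line version conveys only the intuition):

$$
v_{\text{DFT}}(x) = 100 \times \frac{v(x) - v_{\min}}{v_{\max} - v_{\min}},
$$

where $v_{\min}$, the lowest total value among the remaining nondominated solutions, stands in for common value, and anything above $v_{\max}$, the highest total value achieved, is unavailable. After the transformation the strongest remaining alternative scores 100, the weakest scores 0, and the spread between them, the discriminatory value, fills the entire chart.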
From Analysis to Action
The Decision Making phase culminates in a clear, compelling recommendation supported by rigorous data. The final presentation or report must be concise and targeted directly to the decision maker’s needs. The Storyline Approach ensures maximum clarity by structuring the narrative so that the main argument is readable in the title of each slide (horizontal integration), with supporting evidence detailed below (vertical integration).
Crucially, the presenter must always deliver the Bottom-Line Up Front (BLUF), immediately stating the recommendation and asking for the decision to focus the ensuing discussion. Upon securing a decision, the team shifts immediately to developing the plan for the Solution Implementation phase, preparing for the difficult reality of translating the decision into a successful outcome.
