Engineering Optimization

Author

Professor Hisham Ibrahim

Summary

Optimization is a fundamental aspect of engineering design, evolving from historical trial-and-error to modern systematic approaches for creating efficient and cost-effective systems. It involves defining key elements: design variables (parameters to be determined), objective functions (performance measures to be optimized, like minimizing weight or cost), and constraints (limitations that must be satisfied). Proper problem formulation is crucial, often constituting over 50% of the effort, and is typically an iterative process. Solutions can be found using graphical methods for simple two-variable problems, analytical (calculus-based) methods, or powerful computational tools such as Excel Solver and MATLAB’s Optimization Toolbox, which are essential for complex, multi-variable problems. Understanding concepts like local vs. global minima, convexity, and sensitivity analysis is key to validating results.

Learning Objectives

Upon completion of this module, students will be able to:

  • Explain the fundamental concepts and importance of engineering optimization in design.

  • Formulate engineering design problems into a standard optimization model.

  • Apply graphical methods for solving two-variable optimization problems.

  • Utilize computational tools like Excel Solver and MATLAB for solving optimization problems.

  • Analyze optimization results, including sensitivity, active constraints, and local versus global minima.

Optimization Fundamentals and Graphical Methods

Introduction to Optimization in Vehicle Design

Optimization is a critical aspect of engineering design that has evolved over centuries alongside the development and fabrication of complex systems like buildings, bridges, and automobiles. Historically, this process was slow, time-consuming, and resource-intensive, often resulting in designs that were simply “good enough” rather than truly optimal. For more information, see Arora (2017).

With modern optimization, the design of a system can be formulated as a problem where a performance measure is optimized while all other requirements are satisfied. The primary goal is to design efficient and cost-effective systems, which allows companies to beat the competition and improve their bottom line. Key objectives often include:

  • Minimizing weight

  • Maximizing efficiency

  • Minimizing cost

In the context of optimization, we define:

  • Design Variables : The parameters whose values are to be determined by solving the optimization problem. For example, in a beam design, these could be the width and thickness, or in a tubular column, the outer and inner radii.

  • Objective Functions : This is the performance measure that is to be optimized (either minimized or maximized).

  • Constraints : These represent the limitations or requirements that must be satisfied for the design to be valid. Examples include material non-failure, demand satisfaction, or resource limits.

Understanding these elements is crucial for identifying and managing trade-offs inherent in vehicle design.
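As a concrete sketch of these three elements, the snippet below formulates a hypothetical minimum-weight tubular column in Python. The design variables here are a mean radius and wall thickness (a common simplification; the text mentions outer and inner radii), and all numbers, names, and constraint limits are illustrative assumptions, not values from the text:

```python
import math

# Hypothetical minimum-weight tubular column (illustrative numbers only):
# design variables are mean radius R and wall thickness t.
def formulate(R, t, load=10e3, length=5.0, density=7850.0,
              sigma_allow=250e6):
    """Return (objective, constraints) for one candidate design.

    Objective: column mass (to be minimized).
    Constraints expressed in standard 'g(x) <= 0' form.
    """
    area = 2.0 * math.pi * R * t          # thin-wall cross-section area
    mass = density * area * length        # objective function: weight
    stress = load / area                  # axial stress under the load
    g1 = stress / sigma_allow - 1.0       # material non-failure: g1 <= 0
    g2 = t - 0.1 * R                      # thin-wall validity: t <= R/10
    return mass, (g1, g2)

mass, gs = formulate(R=0.05, t=0.005)
feasible = all(g <= 0 for g in gs)        # design is valid only if feasible
```

Evaluating candidate designs this way separates the three elements cleanly: change the arguments of `formulate` (design variables), read `mass` (objective), and check `gs` (constraints).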

Problem Formulation

The proper definition and formulation of an optimization problem are paramount, often constituting more than 50% of the total effort required for a solution. Careful formulation matters because the optimum solution will only be as good as the formulation itself. For instance, omitting a critical constraint can lead to an optimum that is infeasible in practice, while too many or inconsistent constraints may mean no solution exists.

The standard approach to represent an optimization problem is as a minimization of a cost function subject to equality and inequality constraints. This is commonly referred to as a nonlinear programming problem (NLP). Key aspects of formulation include:

  • Standard Form : Inequality constraints in the standard model are always expressed as “\(\le\) type” constraints. A “\(\ge\) type” constraint \(G_j(x) \ge 0\) can be converted by multiplying by \(-1\) to get \(g_j(x) = -G_j(x) \le 0\). Maximization problems are converted into minimization problems by multiplying the objective function by \(-1\). Simple bounds on design variables, such as \(x_i \ge 0\) or \(x_{iL} \le x_i \le x_{iU}\), are generally assumed to be included within the general inequality constraints.

  • Single vs. Multi-objective : While the standard model focuses on a single objective function, problems with multiple objectives can be addressed by combining them into a single objective function using weighting factors, or by treating some objectives as constraints.

  • Iterative Process : Developing a proper formulation for practical problems is often an iterative process requiring several revisions before an acceptable model is finalized.

Examples of problem formulation include cantilever beams, insulated spherical tanks, sawmill operations, and minimum-weight tubular column designs.
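The standard-form transformations above can be sketched in a few lines of Python. The helper name `to_standard_form` and the profit example are my own illustrations, not from the text:

```python
# Sketch of the standard-form transformations: a maximization becomes a
# minimization by negating the objective, and a ">= type" constraint
# G(x) >= 0 becomes g(x) = -G(x) <= 0.

def to_standard_form(objective, ge_constraints, maximize=True):
    """Return (f, gs) for 'minimize f(x) subject to g_j(x) <= 0'."""
    # Maximization -> minimization: multiply the objective by -1.
    f = (lambda x: -objective(x)) if maximize else objective
    # ">= type" G_j(x) >= 0 -> "<= type" g_j(x) = -G_j(x) <= 0.
    gs = [lambda x, G=G: -G(x) for G in ge_constraints]
    return f, gs

# Illustrative example: maximize profit P(x) = 3x with capacity 10 - x >= 0.
f, gs = to_standard_form(lambda x: 3 * x, [lambda x: 10 - x])
# f(2) -> -6 (minimizing -P maximizes P); gs[0](4) -> -6 (i.e., 4 - 10).
```

Any solver that handles only the standard minimization form can then be applied to the wrapped functions directly.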

Graphical Optimization Methods

Graphical methods provide a visual approach to solving optimization problems and are applicable only to problems involving two design variables. The general procedure involves:

  1. Plotting all constraint functions.

  2. Identifying the feasible set, also known as the feasible region.

  3. Drawing contours of the objective function.

  4. Visually determining the optimum design.

Software like Mathematica and MATLAB can aid in this process:

  • 2D Parameter Space Visualization : The ‘ContourPlot’ command in Mathematica is useful for plotting functions and visualizing the parameter space.

  • Contour Plots and Feasible Regions : Constraints are often transformed to the standard “\(\le\)” form, which helps in identifying the infeasible region on the screen. The ‘ContourShading’ option can be used to visually shade regions. MATLAB also offers capabilities for plotting contours and identifying feasible regions.

  • Graphical Solution Techniques : The optimum solution report typically includes the optimal design variable values, the optimum objective function value, and a list of active and inactive constraints. A unique aspect of graphical solutions is the possibility of multiple optimum designs. This occurs when an active constraint is parallel to the cost function, leading to an infinite number of optimal solutions along a line segment. This is demonstrated in the minimum-weight tubular column design and rectangular beam design. For practical applications, non-negativity constraints for design variables should use a small minimum value rather than zero to avoid physical impossibilities.
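The same four steps can be mimicked numerically when plotting software is not at hand. The brute-force grid scan below is my own sketch of the graphical procedure (evaluate constraints over a grid, keep the feasible points, compare objective values), not a method from the text, and is practical only for two variables and coarse accuracy:

```python
# Crude numerical stand-in for the graphical method: scan a 2D grid over the
# design variables, keep feasible points, and report the best objective.
def grid_optimize(f, constraints, x_range, y_range, n=200):
    best = None
    for i in range(n + 1):
        x = x_range[0] + (x_range[1] - x_range[0]) * i / n
        for j in range(n + 1):
            y = y_range[0] + (y_range[1] - y_range[0]) * j / n
            # Feasible set: all constraints in standard g(x, y) <= 0 form.
            if all(g(x, y) <= 0 for g in constraints):
                val = f(x, y)
                if best is None or val < best[0]:
                    best = (val, x, y)
    return best  # (objective value, x, y) or None if nothing is feasible

# Illustrative example: minimize f = (x-2)^2 + (y-2)^2
# subject to x + y <= 3, x >= 0, y >= 0.
best = grid_optimize(lambda x, y: (x - 2) ** 2 + (y - 2) ** 2,
                     [lambda x, y: x + y - 3,
                      lambda x, y: -x,
                      lambda x, y: -y],
                     (0.0, 3.0), (0.0, 3.0))
# The optimum lies on the active constraint x + y = 3, near (1.5, 1.5).
```

Note how the result also identifies the active constraint: the best point sits on the boundary x + y = 3, exactly the information a graphical solution report would contain.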

Hand Calculation Methods

For simpler optimization problems, particularly those with a small number of variables and constraints (e.g., 2-3), analytical or calculus-based methods can be applied.

  • Calculus-based Optimization (Derivatives, Lagrange Multipliers) :

    • Optimality criteria methods identify the conditions a function must satisfy at its minimum.

    • The gradient vector of a function points in the direction of its maximum increase and is normal to the tangent plane at a given point.

    • The Hessian matrix involves the second partial derivatives of the function.

    • For equality-constrained problems, the Lagrange multiplier theorem provides necessary conditions for optimality. These conditions often result in a set of nonlinear equations that may require numerical solvers if not solvable analytically.

    • The Karush-Kuhn-Tucker (KKT) conditions extend the Lagrange multiplier theorem to include inequality constraints and represent the necessary conditions for general constrained optimization problems. These conditions can also be solved using numerical software like Excel or MATLAB.

  • When Analytical Solutions are Possible : Analytical solutions are feasible when the KKT conditions or other optimality conditions result in a solvable set of nonlinear equations, typically for problems with a limited number of variables and constraints.

  • Verification Techniques :

    • Local vs. Global Minimum : It is important to distinguish between a local minimum (smallest value in a neighborhood) and a global minimum (smallest value over the entire feasible region). Most numerical algorithms primarily converge to a local minimum because they use only local information during the search process.

    • Convexity : A problem is considered convex if all its constraint functions and the cost function are convex. Linear equality or inequality constraints always result in convex sets, whereas nonlinear equality constraints typically lead to non-convex sets. A significant property of convex problems is that any local minimum is also a global minimum.

    • Postoptimality or Sensitivity Analysis : This involves studying how the optimum solution changes when original problem parameters are varied. Lagrange multipliers provide insights into the sensitivity of the cost function to changes in constraint limits without requiring a full re-solution of the problem.
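A small worked example (my own illustration, not from the text) ties these ideas together: minimize \(f = x_1^2 + x_2^2\) subject to \(h = x_1 + x_2 - b = 0\). With \(L = f + v\,h\), the necessary conditions \(2x_1 + v = 0\), \(2x_2 + v = 0\), and \(h = 0\) give \(x_1 = x_2 = b/2\) and multiplier \(v = -b\) analytically; since \(f\) is convex and the constraint is linear, the problem is convex and this local minimum is also global:

```python
# Analytical solution of: minimize x1^2 + x2^2  s.t.  x1 + x2 = b.
def solve(b):
    x = b / 2.0                 # stationarity forces x1 = x2 = b/2
    f_star = 2.0 * x ** 2       # optimum cost b^2 / 2
    v = -2.0 * x                # Lagrange multiplier v = -b
    return f_star, v

# Sensitivity check: the multiplier predicts how the optimum cost changes
# with the constraint limit b (here df*/db = -v), without re-solving.
b, db = 1.0, 1e-6
f0, v = solve(b)
f1, _ = solve(b + db)
finite_diff = (f1 - f0) / db    # approximately 1.0, matching -v
```

The sign relation between the multiplier and the sensitivity depends on how the constraint and Lagrangian are written, which is exactly why solver reports (such as Excel Solver's) must be checked against the convention in use.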

Computational Optimization Tools

Excel-Based Optimization

Excel Solver is a versatile, general-purpose software add-in available within the Microsoft Office Suite. It is capable of solving a wide range of optimization problems, including linear and nonlinear programming, as well as systems of simultaneous equations. It serves as a valuable tool for students to tackle real-world optimization projects and to verify their homework solutions.

Setting Up Optimization Problems with Excel Solver

  1. Prepare an Excel Worksheet : Organize all problem data, formulas, and cell references logically within the worksheet.

  2. Invoke Solver : Access Solver from the “Data” tab. Here, you define the objective function cell, specify the design variables that Solver can change, and input all constraints.

  3. Define Objective and Constraints :

    • Set the objective function cell to either maximize or minimize.

    • Ensure all inequality constraints are transformed into the standard “\(\le\)” form.

  4. Solving Nonlinear Problems : Excel Solver can effectively solve complex constrained nonlinear programming problems. Practical examples include the optimum design of springs and plate girders.

Sensitivity Analysis with Excel Solver

Upon solving a problem, Solver generates an Answer Report and a Sensitivity Report. These reports are crucial for analyzing the optimization results:

  • The Sensitivity Report provides Lagrange multipliers for the constraints. Note that the sign convention for Lagrange multipliers in Solver is typically opposite to that used in academic texts, so their sign must be flipped to match. These multipliers are invaluable for understanding the sensitivity of the optimum solution to changes in constraint limits.

  • These reports help to understand the activity of constraints and the behavior of the optimal solution.

MATLAB Optimization

MATLAB is a powerful computational tool with extensive capabilities for solving engineering problems, including a dedicated Optimization Toolbox.

  • Graphical Optimization : MATLAB can be used for graphical solutions of two-variable optimization problems, similar to Mathematica.

  • Solving Systems of Nonlinear Equations : It is effective for solving sets of nonlinear equations, such as KKT optimality conditions, using commands like ‘fsolve’ within the Optimization Toolbox.

  • Optimization Toolbox Functions : The Optimization Toolbox provides algorithms to solve linear, quadratic, and nonlinear programming problems. It requires a separate installation in addition to the basic MATLAB program. Key functions include:

    • ‘fminbnd’: For minimizing a single-variable function within specified bounds.

    • ‘fminunc’: For multivariable unconstrained minimization problems (e.g., using BFGS or DFP methods).

    • ‘fminsearch’: For multivariable unconstrained minimization, utilizing the Nelder-Mead Simplex method, which is a direct search method and does not require derivatives.

    • ‘fmincon’: For general constrained optimization problems. It can use either numerical or analytical gradient calculations.

  • Algorithm Selection : Beyond standard gradient-based methods, MATLAB’s capabilities extend to nature-inspired search methods (e.g., Genetic Algorithms, Ant Colony Optimization, Differential Evolution), which are particularly useful for global optimization and problems with discrete or non-differentiable functions. These algorithms often draw inspiration from biological evolution and genetic operations.

  • Plotting and Visualization : MATLAB allows for plotting function contours and generating various visualizations of the optimization process. The generated graphs can also be edited and exported for reports.
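To make the single-variable case concrete, here is a rough analogue of ‘fminbnd’ in Python. MATLAB's implementation combines golden-section search with parabolic interpolation; this sketch of mine uses golden-section search alone and assumes the function is unimodal on the bracket:

```python
import math

# Golden-section search: minimize a single-variable unimodal f on [a, b]
# without using derivatives (the interval shrinks by ~0.618 per iteration).
def golden_section(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/golden ratio, ~0.618
    c = b - inv_phi * (b - a)                # two interior trial points
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# Illustrative example: f(x) = (x - 2)^2 + 1 has its minimizer at x = 2.
x_star = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

The sketch re-evaluates both trial points each pass for clarity; a production implementation would reuse one function value per iteration.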

Method Comparison and Selection

Choosing the right optimization method depends on the nature and complexity of the design problem.

When to Use Each Approach:

  • Graphical Method : Highly intuitive but limited to problems with only two design variables.

  • Analytical Solutions (Optimality Conditions) : Feasible for problems with a small number of variables and constraints (e.g., two or three). However, they often lead to systems of nonlinear equations that may still require numerical methods for their solution.

  • Numerical Methods (Search Methods) : These are essential for problems with many variables and constraints, as they can directly search for optimum points.

    • Derivative-Based (Gradient-Based) Methods : These methods (e.g., in ‘fminunc’ or ‘fmincon’) assume that all problem functions are continuous and at least twice continuously differentiable, and that accurate derivatives are available. They follow an iterative procedure where the design is updated based on search directions and step sizes.

    • Direct Search Methods : Methods like the Nelder-Mead Simplex (used by ‘fminsearch’) do not require derivatives of the problem functions. This makes them broadly applicable, relatively easy to program, and suitable for non-differentiable cost functions.

    • Nature-Inspired Methods : These are particularly well-suited for discrete variable optimization problems or global optimization problems, as they do not require derivatives.

  • Excel Solver : A user-friendly and accessible option for solving both linear and nonlinear programming problems, especially for quick analysis or smaller-scale problems.

  • MATLAB Optimization Toolbox : Offers powerful and flexible algorithms for a wide range of problem types, including linear, quadratic, unconstrained, and constrained nonlinear problems.

Computational Efficiency Considerations:

  • Scaling : Proper scaling of constraints and design variables can significantly improve the performance and convergence rate of optimization methods.

  • Initial Design Estimate : The initial design estimate has a substantial impact on the number of iterations required to reach the optimum. A good starting point, often derived from preliminary analyses, can lead to much faster convergence.

  • Quasi-Newton Methods : These approximate second-order information using only first-order information, which can reduce computational cost, especially for large-scale problems where computing exact second derivatives is expensive.

  • Implicit Functions : In practical applications, functions can be implicit (where analysis variables are not explicitly known in terms of design variables), which complicates derivative evaluation and can be time-consuming. Alternative explicit formulations that treat analysis variables as design variables and equilibrium equations as equality constraints can simplify gradient evaluations, though they may result in larger, sparse problems.
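The quasi-Newton idea from the efficiency notes above can be shown in one dimension. The secant-style sketch below is my own illustration (real quasi-Newton codes such as BFGS update a full Hessian approximation): it estimates the second derivative from successive first derivatives instead of computing it exactly:

```python
# 1D quasi-Newton (secant) iteration: find a stationary point of f given
# only its gradient, approximating f'' from two successive gradients.
def quasi_newton_1d(grad, x0, x1, tol=1e-10, max_iter=100):
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        h_approx = (g1 - g0) / (x1 - x0)   # secant estimate of f''
        x_new = x1 - g1 / h_approx         # Newton-like step, no exact f''
        if abs(x_new - x1) < tol:
            return x_new
        x0, g0 = x1, g1                    # slide the secant pair forward
        x1, g1 = x_new, grad(x_new)
    return x1

# Illustrative example: f(x) = (x - 3)^2, so grad(x) = 2*(x - 3);
# the stationary point is x = 3.
x_star = quasi_newton_1d(lambda x: 2.0 * (x - 3.0), 0.0, 1.0)
```

For this quadratic the secant estimate of the curvature is exact, so the iteration lands on the solution immediately; for general functions the approximation improves as the iterates converge.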

Validation Strategies:

  • Algorithm Convergence : It is important to ensure that the chosen optimization algorithm is proven to converge to a local minimum point, contributing to its reliability.

  • Local vs. Global Minima : Most numerical algorithms converge to a local minimum. For applications where a global minimum is critical, specialized global optimization methods are needed.

  • Problem Formulation Check : Always verify that the problem formulation and data are properly transferred to the optimization software.

  • Iterative Formulation Refinement : The development of an acceptable optimization formulation for a practical problem is an iterative process. Multiple adjustments to problem parameters and constraints are often necessary to refine the model and ensure its accuracy and solvability.

References

Arora, Jasbir S. 2017. Introduction to Optimum Design. 4th ed. Academic Press, Elsevier. https://www.elsevier.com/books/introduction-to-optimum-design/arora/978-0-12-800806-5.