Multidisciplinary design optimization

Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and multidisciplinary design analysis and optimization (MDAO).

MDO allows designers to incorporate all relevant disciplines simultaneously. The optimum of the simultaneous problem is superior to the design found by optimizing each discipline sequentially, since it can exploit the interactions between the disciplines. However, including all disciplines simultaneously significantly increases the complexity of the problem.
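
The difference can be illustrated with a minimal Python sketch in which a single coupling term links two one-variable "disciplines"; the quadratic objective and its coefficients are invented for the example:

    from scipy.optimize import minimize, minimize_scalar

    # Invented coupled objective: the x1*x2 term couples the two "disciplines".
    def f(x):
        x1, x2 = x
        return (x1 - 1.0) ** 2 + (x2 - 2.0) ** 2 + x1 * x2

    # Sequential (traditional) design: discipline 1 chooses x1 assuming x2 = 0,
    # then discipline 2 chooses x2 with x1 frozen.
    x1 = minimize_scalar(lambda v: f([v, 0.0])).x
    x2 = minimize_scalar(lambda v: f([x1, v])).x
    print("sequential:  ", f([x1, x2]))                      # 1.75

    # Simultaneous (MDO) design: optimize both variables jointly.
    print("simultaneous:", minimize(f, x0=[0.0, 0.0]).fun)   # 1.0

A single discipline-by-discipline pass stops at a worse design than the joint optimum because neither discipline can anticipate how its choice shifts the other's.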

These techniques have been used in a number of fields, including automobile design, naval architecture, electronics, architecture, computers, and electricity distribution. However, the largest number of applications has been in the field of aerospace engineering, such as aircraft and spacecraft design. For example, the proposed Boeing blended wing body (BWB) aircraft concept used MDO extensively in the conceptual and preliminary design stages. The disciplines considered in the BWB design are aerodynamics, structural analysis, propulsion, control theory, and economics.

History

Traditionally, engineering has been performed by teams, each with expertise in a specific discipline, such as aerodynamics or structures. Each team would use its members' experience and judgement to develop a workable design, usually sequentially. For example, the aerodynamics experts would outline the shape of the body, and the structural experts would be expected to fit their design within the specified shape. The goals of the teams were generally performance-related, such as maximum speed, minimum drag, or minimum structural weight.

Between 1970 and 1990, two major developments in the aircraft industry changed the approach of aircraft design engineers to their design problems. The first was computer-aided design, which allowed designers to quickly modify and analyse their designs. The second was changes in the procurement policy of most airlines and military organizations, particularly the military of the United States, from a performance-centred approach to one that emphasized lifecycle cost issues. This led to an increased concentration on economic factors and the attributes known as the "ilities" including manufacturability, reliability, maintainability, etc.

Since 1990, the techniques have expanded to other industries. Globalization has resulted in more distributed, decentralized design teams. The high-performance personal computer has largely replaced the centralized supercomputer, and the Internet and local area networks have facilitated the sharing of design information. Disciplinary design software in many disciplines (such as OptiStruct or NASTRAN, a finite element analysis program for structural design) has become very mature. In addition, many optimization algorithms, in particular the population-based algorithms, have advanced significantly.

Origins in structural optimization

Whereas optimization methods are nearly as old as calculus, dating back to Isaac Newton, Leonhard Euler, Daniel Bernoulli, and Joseph Louis Lagrange, who used them to solve problems such as the shape of the catenary curve, numerical optimization reached prominence in the digital age. Its systematic application to structural design dates to its advocacy by Schmit in 1960. [1] [2] The success of structural optimization in the 1970s motivated the emergence of multidisciplinary design optimization (MDO) in the 1980s. Jaroslaw Sobieski championed decomposition methods specifically designed for MDO applications. [3] The following synopsis focuses on optimization methods for MDO. First, the popular gradient-based methods used by the early structural optimization and MDO community are reviewed. Then those methods developed in the last dozen years are summarized.

Gradient-based methods

There were two schools of structural optimization practitioners using gradient-based methods during the 1960s and 1970s: optimality criteria and mathematical programming. The optimality criteria school derived recursive formulas based on the Karush–Kuhn–Tucker (KKT) necessary conditions for an optimal design. The KKT conditions were applied to classes of structural problems such as minimum weight design with constraints on stresses, displacements, buckling, or frequencies [Rozvany, Berke, Venkayya, Khot, et al.] to derive resizing expressions particular to each class. The mathematical programming school applied classical gradient-based methods to structural optimization problems. The method of usable feasible directions, Rosen's gradient projection (generalized reduced gradient) method, sequential unconstrained minimization techniques, sequential linear programming, and eventually sequential quadratic programming methods were common choices. Schittkowski et al. reviewed the methods in use by the early 1990s.
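
As a minimal sketch of the mathematical-programming approach, the following poses a two-bar minimum-weight sizing problem with stress constraints and solves it with SciPy's SLSQP, a sequential quadratic programming implementation; the lengths, loads, and material values are invented for the example:

    import numpy as np
    from scipy.optimize import minimize

    L = np.array([1.0, 1.5])     # bar lengths, m (illustrative)
    P = np.array([1e5, 2e5])     # axial loads, N (illustrative)
    sigma_allow = 250e6          # allowable stress, Pa
    rho = 7850.0                 # material density, kg/m^3

    weight = lambda A: rho * np.dot(L, A)          # objective: total mass

    # Stress constraints P/A <= sigma_allow, written as fun(A) >= 0
    # per SciPy's inequality convention.
    cons = [{"type": "ineq", "fun": lambda A: sigma_allow - P / A}]

    res = minimize(weight, x0=[1e-3, 1e-3], method="SLSQP",
                   bounds=[(1e-6, None)] * 2, constraints=cons)
    print(res.x, P / sigma_allow)   # both stress constraints become active

At the optimum each bar is fully stressed, which is exactly the condition the optimality criteria school built into its resizing rules.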

The gradient methods unique to the MDO community derive from the combination of optimality criteria with mathematical programming, first recognized in the seminal work of Fleury and Schmit, who constructed a framework of approximation concepts for structural optimization. They recognized that optimality criteria were so successful for stress and displacement constraints because that approach amounted to solving the dual problem for Lagrange multipliers using linear Taylor series approximations in the reciprocal design space. In combination with other techniques to improve efficiency, such as constraint deletion, regionalization, and design variable linking, they succeeded in uniting the work of both schools. This approach based on approximation concepts forms the basis of the optimization modules in modern structural design software such as Altair OptiStruct, ASTROS, MSC.Nastran, PHX ModelCenter, pSeven, Genesis, iSight, and I-DEAS.

Approximations for structural optimization were initiated with the reciprocal approximation of Schmit and Miura for stress and displacement response functions. Other intermediate variables were employed for plates. Combining linear and reciprocal variables, Starnes and Haftka developed a conservative approximation to improve buckling approximations. Fadel chose an appropriate intermediate design variable for each function based on a gradient-matching condition at the previous point. Vanderplaats initiated a second generation of high-quality approximations when he developed the force approximation as an intermediate response approximation to improve the approximation of stress constraints. Canfield developed a Rayleigh-quotient approximation to improve the accuracy of eigenvalue approximations. Barthelemy and Haftka published a comprehensive review of approximations in 1993.
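
The appeal of reciprocal variables can be seen for the stress in an axially loaded bar, sigma(A) = P/A, which is exactly linear in the reciprocal variable 1/A; the numbers in this sketch are illustrative:

    # Stress sigma(A) = P/A is exactly linear in y = 1/A, so a linear
    # Taylor series in the reciprocal variable reproduces it exactly,
    # while a linear series in A itself does not.
    P, A0 = 1e5, 2e-3
    sigma = lambda A: P / A
    dsig_dA = -P / A0**2                     # gradient at expansion point A0

    A = 3e-3                                 # trial design away from A0
    lin_direct = sigma(A0) + dsig_dA * (A - A0)
    lin_recip = sigma(A0) + (dsig_dA * -A0**2) * (1.0 / A - 1.0 / A0)
    print(sigma(A), lin_direct, lin_recip)   # reciprocal form is exact here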

Non-gradient-based methods

In recent years, non-gradient-based evolutionary methods, including genetic algorithms, simulated annealing, and ant colony algorithms, have been developed. At present, researchers are still working toward a consensus on the best modes and methods for complex problems such as impact damage, dynamic failure, and real-time analyses. For this purpose, researchers often employ multiobjective and multicriteria design methods.
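
A minimal simulated-annealing sketch on an invented multimodal function shows the core mechanism, a Metropolis acceptance test under a gradually cooled temperature:

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # Invented multimodal objective with many local minima.
        return x**2 + 10.0 * np.sin(3.0 * x)

    x, T = 4.0, 5.0                    # start point and initial temperature
    best_x, best_f = x, f(x)
    for _ in range(5000):
        cand = x + rng.normal(scale=0.5)              # random neighbour
        df = f(cand) - f(x)
        if df < 0 or rng.random() < np.exp(-df / T):
            x = cand                                  # Metropolis acceptance
        if f(x) < best_f:
            best_x, best_f = x, f(x)
        T *= 0.999                                    # geometric cooling
    print(best_x, best_f)

Early on, the high temperature lets the search accept uphill moves and escape local minima; as the temperature falls, the search settles into the best basin found.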

Recent MDO methods

MDO practitioners have investigated optimization methods in several broad areas in the last dozen years. These include decomposition methods, approximation methods, evolutionary algorithms, memetic algorithms, response surface methodology, reliability-based optimization, and multi-objective optimization approaches.

The exploration of decomposition methods has continued in the last dozen years with the development and comparison of a number of approaches, classified variously as hierarchic and non-hierarchic, or collaborative and non-collaborative. Approximation methods spanned a diverse set of approaches, including the development of approximations based on surrogate models (often referred to as metamodels), variable fidelity models, and trust region management strategies. The development of multipoint approximations blurred the distinction with response surface methods. Some of the most popular methods include Kriging and the moving least squares method.
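
For noiseless data and a zero-mean Gaussian process, the Kriging predictor reduces to a kernel-weighted interpolation of the samples; the sketch below (with an invented stand-in for an expensive analysis) computes the posterior-mean prediction:

    import numpy as np

    def rbf(A, B, ell=0.5):
        # Squared-exponential covariance between two sample sets.
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / ell**2)

    # Invented stand-in for an expensive disciplinary analysis.
    expensive = lambda X: np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 0] ** 2

    X = np.linspace(0.0, 2.0, 8).reshape(-1, 1)    # sampled design points
    y = expensive(X)

    K = rbf(X, X) + 1e-10 * np.eye(len(X))         # small nugget for stability
    alpha = np.linalg.solve(K, y)

    Xs = np.array([[0.7], [1.3]])                  # new designs to predict
    print(rbf(Xs, X) @ alpha, expensive(Xs))       # surrogate vs true values

Once fitted, the surrogate can be evaluated thousands of times inside the optimizer at negligible cost compared with the underlying analysis.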

Response surface methodology, developed extensively by the statistical community, received much attention in the MDO community in the last dozen years. A driving force for their use has been the development of massively parallel systems for high performance computing, which are naturally suited to distributing the function evaluations from multiple disciplines that are required for the construction of response surfaces. Distributed processing is particularly suited to the design process of complex systems in which analysis of different disciplines may be accomplished naturally on different computing platforms and even by different teams.

Evolutionary methods led the way in the exploration of non-gradient methods for MDO applications. They also have benefited from the availability of massively parallel high performance computers, since they inherently require many more function evaluations than gradient-based methods. Their primary benefit lies in their ability to handle discrete design variables and the potential to find globally optimal solutions.
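
The sketch below applies one such method, differential evolution, to the Rastrigin function, a standard multimodal test problem; in SciPy, passing workers=-1 would additionally spread the population's function evaluations across processes:

    import numpy as np
    from scipy.optimize import differential_evolution

    def rastrigin(x):
        # Many local minima; the global minimum is 0 at the origin.
        return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 4
    result = differential_evolution(rastrigin, bounds, seed=1)
    print(result.x, result.fun)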

Reliability-based optimization (RBO) is a growing area of interest in MDO. Like response surface methods and evolutionary algorithms, RBO benefits from parallel computation, because the numerical integration used to calculate the probability of failure requires many function evaluations. One of the first approaches employed approximation concepts to integrate the probability of failure. The classical first-order reliability method (FORM) and second-order reliability method (SORM) are still popular. Ramana Grandhi used normalized variables about the most probable point of failure, found with a two-point adaptive nonlinear approximation, to improve accuracy and efficiency. Southwest Research Institute has figured prominently in the development of RBO, implementing state-of-the-art reliability methods in commercial software. RBO has reached sufficient maturity to appear in commercial structural analysis programs such as Altair's OptiStruct and MSC's Nastran.
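
As a sketch of why so many evaluations are needed, a Monte Carlo estimate of the probability of failure for the textbook linear limit state g = R − S, with invented Gaussian resistance R and load S, can be checked against the FORM result, which happens to be exact in this linear-Gaussian case:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    mu_R, sd_R = 300.0, 30.0      # resistance statistics (illustrative)
    mu_S, sd_S = 200.0, 40.0      # load statistics (illustrative)

    # Monte Carlo: failure when the limit state g = R - S is negative.
    n = 10**6
    g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
    pf_mc = np.mean(g < 0)

    # FORM: Pf = Phi(-beta); exact here because g is linear and Gaussian.
    beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
    print(pf_mc, norm.cdf(-beta))  # both close to 0.0228

Each of the million samples stands in for a full disciplinary analysis in a real problem, which is why parallel evaluation is so valuable.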

Utility-based probability maximization was developed in response to some logical concerns (e.g., Blau's Dilemma) with reliability-based design optimization. [4] This approach focuses on maximizing the joint probability of both the objective function exceeding some value and of all the constraints being satisfied. When there is no objective function, utility-based probability maximization reduces to a probability-maximization problem. When there are no uncertainties in the constraints, it reduces to a constrained utility-maximization problem. (This second equivalence arises because the utility of a function can always be written as the probability of that function exceeding some random variable). Because it changes the constrained optimization problem associated with reliability-based optimization into an unconstrained optimization problem, it often leads to computationally more tractable problem formulations.
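
A toy sketch of the idea, with an invented noisy objective and constraint, estimates the joint probability by sampling and then searches over candidate designs without any explicit constraint handling:

    import numpy as np

    rng = np.random.default_rng(0)

    def joint_prob(x, target=1.0, n=20000):
        # Invented uncertain objective and constraint for a scalar design x.
        f = np.sin(x) + rng.normal(0.0, 0.3, n)    # objective samples
        g = x - 2.0 + rng.normal(0.0, 0.2, n)      # constraint g <= 0
        return np.mean((f >= target) & (g <= 0.0))

    xs = np.linspace(0.0, 3.0, 61)                 # candidate designs
    probs = [joint_prob(x) for x in xs]
    print(xs[int(np.argmax(probs))], max(probs))   # unconstrained search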

In the marketing field there is an extensive literature on the optimal design of multiattribute products and services, based on experimental analysis to estimate models of consumers' utility functions. These methods are known as conjoint analysis. Respondents are presented with alternative products, their preferences for the alternatives are measured on a variety of scales, and the utility function is estimated with different methods (ranging from regression and response surface methods to choice models). The best design is formulated after estimating the model. The experimental design is usually optimized to minimize the variance of the estimators. These methods are widely used in practice.

Problem formulation

Problem formulation is normally the most difficult part of the process. It is the selection of design variables, constraints, objectives, and models of the disciplines. A further consideration is the strength and breadth of the interdisciplinary coupling in the problem. [5]

Design variables

A design variable is a specification that is controllable from the point of view of the designer. For instance, the thickness of a structural member can be considered a design variable. Another might be the choice of material. Design variables can be continuous (such as a wing span), discrete (such as the number of ribs in a wing), or Boolean (such as whether to build a monoplane or a biplane). Design problems with continuous variables are normally solved more easily.

Design variables are often bounded, that is, they often have maximum and minimum values. Depending on the solution method, these bounds can be treated as constraints or separately.

Uncertainty is another important factor to account for. Uncertainty of this kind, often referred to as epistemic uncertainty, arises from a lack of knowledge or incomplete information. Because such a variable is essentially unknown, it can cause the failure of the system.

Constraints

A constraint is a condition that must be satisfied in order for the design to be feasible. An example of a constraint in aircraft design is that the lift generated by a wing must be equal to the weight of the aircraft. In addition to physical laws, constraints can reflect resource limitations, user requirements, or bounds on the validity of the analysis models. Constraints can be used explicitly by the solution algorithm or can be incorporated into the objective using Lagrange multipliers.

Objectives

An objective is a numerical value that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives. When using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multiobjective optimization, such as the calculation of a Pareto front.
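
The weighted-sum approach can be sketched with two invented one-variable objectives; each weight produces one point on the trade-off (Pareto) front:

    import numpy as np
    from scipy.optimize import minimize_scalar

    f1 = lambda x: (x - 1.0) ** 2     # first objective (illustrative)
    f2 = lambda x: (x + 1.0) ** 2     # second, competing objective

    # Each weight w turns the two objectives into a single one, whose
    # minimizer is one point on the Pareto front.
    for w in np.linspace(0.1, 0.9, 5):
        x = minimize_scalar(lambda v: w * f1(v) + (1.0 - w) * f2(v)).x
        print(f"w={w:.1f}  x={x:+.3f}  f1={f1(x):.3f}  f2={f2(x):.3f}")

Sweeping the weight traces out the front, making the trade-off between the two objectives explicit before a single design is chosen.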

Models

The designer must also choose models to relate the constraints and the objectives to the design variables. These models are dependent on the discipline involved. They may be empirical models, such as a regression analysis of aircraft prices, theoretical models, such as from computational fluid dynamics, or reduced-order models of either of these. In choosing the models the designer must trade off fidelity with analysis time.

The multidisciplinary nature of most design problems complicates model choice and implementation. Often several iterations are necessary between the disciplines in order to find the values of the objectives and constraints. As an example, the aerodynamic loads on a wing affect the structural deformation of the wing. The structural deformation in turn changes the shape of the wing and the aerodynamic loads. Therefore, in analysing a wing, the aerodynamic and structural analyses must be run a number of times in turn until the loads and deformation converge.
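
Such a coupled analysis is often implemented as a fixed-point (block Gauss–Seidel) iteration between the disciplines, as in this sketch; the linear load and deflection relations are invented placeholders for the real aerodynamic and structural solvers:

    # Invented linear stand-ins for the two disciplinary analyses.
    q0, k_aero = 100.0, 5.0          # baseline load and load per deflection
    flex = 0.004                     # deflection per unit load

    aero_load = lambda d: q0 + k_aero * d    # "aerodynamics"
    deflection = lambda L: flex * L          # "structures"

    d = 0.0
    for it in range(100):            # Gauss-Seidel multidisciplinary analysis
        L = aero_load(d)
        d_new = deflection(L)
        if abs(d_new - d) < 1e-12:
            break
        d = d_new
    print(it, L, d)                  # converged coupled loads and deflection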

Standard form

Once the design variables, constraints, objectives, and the relationships between them have been chosen, the problem can be expressed in the following form:

find $x$ that minimizes $f(x)$ subject to $g(x) \le 0$, $h(x) = 0$, and $x_{lb} \le x \le x_{ub}$,

where $f$ is an objective, $x$ is a vector of design variables, $g$ is a vector of inequality constraints, $h$ is a vector of equality constraints, and $x_{lb}$ and $x_{ub}$ are vectors of lower and upper bounds on the design variables. Maximization problems can be converted to minimization problems by multiplying the objective by −1. Constraints can be reversed in a similar manner. Equality constraints can be replaced by two inequality constraints.
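
In this form the problem maps directly onto general-purpose optimizers. The sketch below instantiates it on a small invented problem with SciPy, whose inequality convention is fun(x) >= 0, hence the sign flip on g:

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: x[0] ** 2 + x[1] ** 2             # objective
    g = lambda x: np.array([1.0 - x[0] - x[1]])     # inequality g(x) <= 0
    h = lambda x: np.array([x[0] - 2.0 * x[1]])     # equality   h(x) = 0

    cons = [{"type": "ineq", "fun": lambda x: -g(x)},   # SciPy wants >= 0
            {"type": "eq", "fun": h}]
    bounds = [(0.0, 10.0)] * 2                      # x_lb <= x <= x_ub

    res = minimize(f, x0=[1.0, 1.0], method="SLSQP",
                   bounds=bounds, constraints=cons)
    print(res.x, res.fun)                           # x = (2/3, 1/3)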

Problem solution

The problem is normally solved using appropriate techniques from the field of optimization. These include gradient-based algorithms, population-based algorithms, or others. Very simple problems can sometimes be expressed linearly; in that case the techniques of linear programming are applicable.

Gradient-based methods

Gradient-based methods use derivative information about the objectives and constraints to guide the search; sequential quadratic programming and interior-point methods are common examples. They converge quickly on smooth problems but require differentiable models.

Gradient-free methods

Gradient-free methods use only function values, which makes them applicable when derivatives are unavailable, unreliable, or expensive; examples include the Nelder–Mead simplex method and pattern search.

Population-based methods

Population-based methods evolve a set of candidate designs rather than a single point; genetic algorithms, particle swarm optimization, and differential evolution are representative examples.

Other methods

Other approaches include random and grid search, simulated annealing, and hybrid strategies that combine elements of the above.

Most of these techniques require large numbers of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take significant amounts of time for a single evaluation. The solution can therefore be extremely time-consuming. Many of the optimization techniques are adaptable to parallel computing. Much current research is focused on methods of decreasing the required time.

Also, no existing solution method is guaranteed to find the global optimum of a general problem (see No free lunch in search and optimization). Gradient-based methods find local optima with high reliability but are normally unable to escape a local optimum. Stochastic methods, like simulated annealing and genetic algorithms, will find a good solution with high probability, but very little can be said about the mathematical properties of the solution. It is not guaranteed to even be a local optimum. These methods often find a different design each time they are run.

See also

Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.

<span class="mw-page-title-main">Linear programming</span> Method to solve optimization problems

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming.

<span class="mw-page-title-main">Mathematical optimization</span> Study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

<span class="mw-page-title-main">Gradient descent</span> Optimization algorithm

Gradient descent is a method for unconstrained mathematical optimization. It is a first-order iterative algorithm for minimizing a differentiable multivariate function.

In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming.

<span class="mw-page-title-main">Optimal control</span> Mathematical way of attaining a desired output from a dynamic system

Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints are linear.

In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation of the extrema of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.

Topology optimization is a mathematical method that optimizes material layout within a given design space, for a given set of loads, boundary conditions and constraints with the goal of maximizing the performance of the system. Topology optimization is different from shape optimization and sizing optimization in the sense that the design can attain any shape within the design space, instead of dealing with predefined configurations.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

In mathematical optimization, constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.

Limited-memory BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize over unconstrained values of the real-vector where is a differentiable scalar function.

Multi-objective optimization or Pareto optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

The combination of quality control and genetic algorithms led to novel solutions of complex quality control design and optimization problems. Quality is the degree to which a set of inherent characteristics of an entity fulfils a need or expectation that is stated, generally implied, or obligatory. ISO 9000 defines quality control as "a part of quality management focused on fulfilling quality requirements". Genetic algorithms are search algorithms based on the mechanics of natural selection and natural genetics.

Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective, but the augmented Lagrangian method adds yet another term designed to mimic a Lagrange multiplier. The augmented Lagrangian is related to, but not identical with, the method of Lagrange multipliers.

<span class="mw-page-title-main">WORHP</span> Mathematical software library

WORHP, also referred to as eNLP by ESA, is a mathematical software library for numerically solving large-scale continuous nonlinear optimization problems.

SmartDO is multidisciplinary design optimization software based on the Direct Global Search technology developed and marketed by FEA-Opt Technology. SmartDO specializes in CAE-based optimization, involving tools such as FEA, CAD, CFD, and automatic control, with applications to various physical phenomena. It is both GUI- and scripting-driven and can be integrated with almost any kind of CAD/CAE or in-house code.

<span class="mw-page-title-main">OptiSLang</span>

optiSLang is a software platform for CAE-based sensitivity analysis, multidisciplinary optimization (MDO), and robustness evaluation. It was originally developed by Dynardo GmbH and provides a framework for numerical robust design optimization (RDO) and stochastic analysis by identifying the variables that contribute most to a predefined optimization goal. This also includes the evaluation of robustness, i.e., the sensitivity to scatter of design variables or to random fluctuations of parameters. In 2019, Dynardo GmbH was acquired by Ansys.

<span class="mw-page-title-main">Simulation-based optimization</span>

Simulation-based optimization integrates optimization techniques into simulation modeling and analysis. Because of the complexity of the simulation, the objective function may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques.

References

  1. Vanderplaats, G.N. (1987). "Numerical Optimization Techniques". In Mota Soares, C.A. (ed.). Computer Aided Optimal Design: Structural and Mechanical Systems. NATO ASI Series (Series F: Computer and Systems Sciences). Vol. 27. Berlin: Springer. pp. 197–239. doi:10.1007/978-3-642-83051-8_5. ISBN 978-3-642-83053-2. "The first formal statement of nonlinear programming (numerical optimization) applied to structural design was offered by Schmit in 1960."
  2. Schmit, L.A. (1960). "Structural Design by Systematic Synthesis". Proceedings, 2nd Conference on Electronic Computations. New York: ASCE: 105–122.
  3. Martins, Joaquim R. R. A.; Lambe, Andrew B. (2013). "Multidisciplinary design optimization: A survey of architectures". AIAA Journal. 51 (9): 2049–2075. Bibcode:2013AIAAJ..51.2049M. CiteSeerX 10.1.1.669.7076. doi:10.2514/1.J051895.
  4. Bordley, Robert F.; Pollock, Steven M. (September 2009). "A Decision Analytic Approach to Reliability-Based Design Optimization". Operations Research. 57 (5): 1262–1270. doi:10.1287/opre.1080.0661.
  5. Martins, Joaquim R. R. A.; Ning, Andrew (2021). Engineering Design Optimization. Cambridge University Press. ISBN 978-1108833417.