*Figure: An example hierarchical system.*
Analytical target cascading (ATC) is a hierarchical systems optimization method that works by decomposing a system into a hierarchy of subsystems and coordinating the optimization of subsystems so that the joint solution is consistent and optimal for the overall system.

The motivation and potential advantages of decomposing an optimization problem and solving it using ATC (rather than solving it without decomposition) are:

• Efficiency: The computational effort required to solve the problem is sometimes reduced using decomposition because the algorithms take advantage of the structure of the problem. Some methods for solving ATC hierarchies enable parallel computing, which can further increase efficiency.
• Robustness: When a large problem is broken into a set of smaller problems, the smaller problems are typically solved in a relatively low-dimensional space with relatively few constraints. This can improve the robustness of the algorithm in finding a solution, sometimes enabling problems to be solved that would otherwise be very difficult.
• Organization: In some cases, because of the organization of people involved in modeling and optimization, or simply for convenience, it may be easier to organize the problem as a collection of subsystems with well-defined interfaces rather than attempt to pose a single monolithic problem statement.

# Introduction

When designing complex systems, generally it is not possible or desirable to have a single decision-maker in charge of all decisions. Instead, such systems are routinely decomposed hierarchically into subsystems and components, and various design groups interact to coordinate their decisions and achieve a feasible and consistent system solution. For each system in such a hierarchy, target specifications are chosen for the subsystems below such that the system can meet targets set by the supersystem above. If targets cannot be met, negotiation and rebalancing are necessary to ensure that the final system solution is consistent and achieves system goals. Ford Motor Company refers to this process as target cascading, and the analogous model-based, computational process for such hierarchical systems has been termed analytical target cascading (ATC) (Kim et al., 2003). In ATC, top-level design targets are propagated to lower levels, which are optimized to meet the targets. The resulting responses are rebalanced at higher levels to achieve consistency. The optimal system solution is obtained through an iterative process that continues until target/response consistency is achieved globally.

ATC approaches this target-setting and matching process through formal mathematical decomposition methods, and so it has similarities to many of the multidisciplinary design optimization (MDO) methods that have been developed to coordinate complex analysis models from various disciplines during optimization, such as collaborative optimization (CO), concurrent subspace optimization (CSSO), and bi-level integrated system synthesis (BLISS). In particular, Allison et al. (2005) compare and contrast ATC and CO. Apart from the difference in initial motivation, the formulation of ATC also differs in that it is defined for an arbitrarily large hierarchy of subsystems, and formal convergence proofs ensure that the method will reach an optimal system solution under typical assumptions. More recently, methods for solving non-hierarchical quasiseparable, or dual block-angular, problems with proven convergence properties have also emerged (Haftka and Watson, 2005). This article focuses on hierarchical problems in the spirit of ATC; however, ATC works by translating a hierarchical problem into a quasiseparable problem through relaxation of target-response relationships between systems and their subsystems, and many of the methods posed for solving hierarchical ATC systems have been or could be used to approach general quasiseparable systems. ATC has been applied to complex systems such as automotive design (Kim et al., 2002), architectural design (Choudhary et al., 2005), and multidisciplinary product development (Michalek et al., 2005, 2006).

# Overview of ATC

## System Structure

*Figure: An example of ATC decomposition.*
ATC is applicable to problems with a hierarchical structure: the top-level design is a supersystem consisting of a number of systems, each of which may consist of its own subsystems. For example, an automobile may be composed of a powertrain, body, and chassis, and the powertrain may in turn be composed of an engine and a transmission. The model is general enough to account for any number of levels in the hierarchy (Michelena et al., 2003). The objective function for the overall system can be described as a sum of the objective functions of its elements, although typically the objective resides entirely at the top (system) level. Moreover, the subproblems are nearly separable except for a few linking variables. Specifically, a parent and a child are connected by a response variable, which represents the child's response to the design specification its parent imposes on it. This response variable may or may not be a variable in the original pre-decomposition formulation: it is typically the output of the subsystem simulator and an input to the system simulator, and it may be treated as an intermediate calculation of the original formulation. The effect of subsystem responses on system behavior is what prevents subsystems from being designed independently.
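The automobile example above can be represented as a small tree. The following sketch is purely illustrative, assuming nothing beyond the structure described here; the `Element` class and helper names are hypothetical and not from any ATC library.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One element in the ATC hierarchy (the j-th element at level i)."""
    name: str
    level: int
    children: list = field(default_factory=list)

def add_child(parent, child):
    parent.children.append(child)
    return child

# Build the automobile example: vehicle -> {powertrain, body, chassis},
# powertrain -> {engine, transmission}.
vehicle = Element("vehicle", 1)
powertrain = add_child(vehicle, Element("powertrain", 2))
add_child(vehicle, Element("body", 2))
add_child(vehicle, Element("chassis", 2))
add_child(powertrain, Element("engine", 3))
add_child(powertrain, Element("transmission", 3))

def elements_at_level(root, i):
    """Recover E_i, the set of elements at level i, by walking the tree."""
    found = [root] if root.level == i else []
    for child in root.children:
        found += elements_at_level(child, i)
    return found

print([e.name for e in elements_at_level(vehicle, 2)])
# → ['powertrain', 'body', 'chassis']
```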

## Notation

Different notations have been used to describe and define ATC, depending on the application (Kim et al., 2003; Michelena et al., 2003; Michalek and Papalambros, 2005; Tosserams et al., 2006). In this article, we adopt the notation of Tosserams et al. (2006) for simplicity. Consider a system that can be decomposed into N levels and M elements. The subscript ij denotes the jth element of the system at the ith level. $f_{ij}$ is the scalar objective function, and $\mathbf{g}_{ij} \leq \mathbf{0}$ and $\mathbf{h}_{ij} = \mathbf{0}$ are the inequality and equality constraints, respectively. Local variables of element j are denoted by $\mathbf{x}_{ij}$, and $\mathbf{r}_{ij}$ is the response of element j computed by the analysis model $\mathbf{a}_{ij}$. Generally speaking, this model is an engineering simulation or a set of equations predicting the behavior of the element. $\mathcal{E}_{i}$ is the set of elements at level i, and $\mathcal{C}_{ij}$ is the set of children of element j.

## Mathematical Formulation

### The All-in-One Problem

Under the assumed problem structure, and using the notation described above, the hierarchical problem before decomposition, also known as the all-in-one (AIO) formulation, can be written as:

$$
\begin{aligned}
\underset{\mathbf{x}_{ij}\ \forall j \in \mathcal{E}_{i},\ i=1,\dots,N}{\text{minimize}}\quad & f = \sum_{i=1}^{N}\sum_{j \in \mathcal{E}_{i}} f_{ij}(\overline{\mathbf{x}}_{ij}) \\
\text{subject to}\quad & \mathbf{g}_{ij}(\overline{\mathbf{x}}_{ij}) \leq \mathbf{0}, \quad \mathbf{h}_{ij}(\overline{\mathbf{x}}_{ij}) = \mathbf{0} \\
\text{where}\quad & \overline{\mathbf{x}}_{ij} = [\mathbf{x}_{ij},\ \mathbf{r}_{(i+1)k}\ \forall k \in \mathcal{C}_{ij}], \quad \mathbf{r}_{ij} = \mathbf{a}_{ij}(\overline{\mathbf{x}}_{ij}) \\
& \forall j \in \mathcal{E}_{i},\ i=1,\dots,N
\end{aligned}
$$

Note that the response $\mathbf{r}_{ij}$ of each element j depends on the responses of its children, which prevents the objective function and the constraint sets from being separable. In order to decouple the sets of variables governed by each element, a target variable $\mathbf{t}_{ij}$ is created for each shared response, and the consistency constraint $\mathbf{t}_{ij}-\mathbf{r}_{ij}=\mathbf{0}$ is added to ensure target/response consistency.

We rewrite the problem as:

$$
\begin{aligned}
\underset{\overline{\mathbf{x}}_{ij}\ \forall j \in \mathcal{E}_{i},\ i=1,\dots,N}{\text{minimize}}\quad & \sum_{i=1}^{N}\sum_{j \in \mathcal{E}_{i}} f_{ij}(\overline{\mathbf{x}}_{ij}) \\
\text{subject to}\quad & \mathbf{g}_{ij}(\overline{\mathbf{x}}_{ij}) \leq \mathbf{0}, \quad \mathbf{h}_{ij}(\overline{\mathbf{x}}_{ij}) = \mathbf{0} \\
& \mathbf{t}_{ij} - \mathbf{r}_{ij} = \mathbf{0} \\
\text{where}\quad & \overline{\mathbf{x}}_{ij} = [\mathbf{x}_{ij},\ \mathbf{t}_{(i+1)k}\ \forall k \in \mathcal{C}_{ij}], \quad \mathbf{r}_{ij} = \mathbf{a}_{ij}(\overline{\mathbf{x}}_{ij}) \\
& \forall j \in \mathcal{E}_{i},\ i=1,\dots,N
\end{aligned}
$$

Note that the problem is almost separable except for the consistency constraint $\mathbf{t}_{ij} - \mathbf{r}_{ij} = \mathbf{0}$.

### The Relaxed All-in-One Problem

In order to make the constraint sets separable, the consistency constraint can be relaxed using penalty functions or Lagrangian relaxation. In general, the problem can be relaxed via a consistency constraint relaxation function $\mathbf{\pi}$. Alternative methods for consistency constraint relaxation used in the literature include:

• Quadratic penalty functions (Kim et al., 2003; Michelena et al., 2003; Michalek and Papalambros, 2005)
• Lagrangian relaxation (Lassiter et al., 2005)
• Augmented Lagrangian relaxation (Tosserams et al., 2006; Kim et al., 2006; Li et al., 2007)

For a general $\mathbf{\pi}$, the resulting formulation is the relaxed AIO problem:

$$
\begin{aligned}
\underset{\overline{\mathbf{x}}_{ij}\ \forall j \in \mathcal{E}_{i},\ i=1,\dots,N}{\text{minimize}}\quad & \sum_{i=1}^{N}\sum_{j \in \mathcal{E}_{i}} f_{ij}(\overline{\mathbf{x}}_{ij}) + \sum_{i=2}^{N}\sum_{j \in \mathcal{E}_{i}} \pi(\mathbf{t}_{ij} - \mathbf{r}_{ij}) \\
\text{subject to}\quad & \mathbf{g}_{ij}(\overline{\mathbf{x}}_{ij}) \leq \mathbf{0}, \quad \mathbf{h}_{ij}(\overline{\mathbf{x}}_{ij}) = \mathbf{0} \\
\text{where}\quad & \overline{\mathbf{x}}_{ij} = [\mathbf{x}_{ij},\ \mathbf{t}_{(i+1)k}\ \forall k \in \mathcal{C}_{ij}], \quad \mathbf{r}_{ij} = \mathbf{a}_{ij}(\overline{\mathbf{x}}_{ij}) \\
& \forall j \in \mathcal{E}_{i},\ i=1,\dots,N
\end{aligned}
$$

### Individual Subsystems

For a general $\mathbf{\pi}$, consider only the subset of the decision variables that vary within element j to obtain the general subproblem corresponding to each element:

$$
\begin{aligned}
\underset{\overline{\mathbf{x}}_{ij}}{\text{minimize}}\quad & f_{ij}(\overline{\mathbf{x}}_{ij}) + \pi(\mathbf{t}_{ij} - \mathbf{r}_{ij}) + \sum_{k \in \mathcal{C}_{ij}} \pi(\mathbf{t}_{(i+1)k} - \mathbf{r}_{(i+1)k}) \\
\text{subject to}\quad & \mathbf{g}_{ij}(\overline{\mathbf{x}}_{ij}) \leq \mathbf{0}, \quad \mathbf{h}_{ij}(\overline{\mathbf{x}}_{ij}) = \mathbf{0} \\
\text{where}\quad & \overline{\mathbf{x}}_{ij} = [\mathbf{x}_{ij},\ \mathbf{t}_{(i+1)k}\ \forall k \in \mathcal{C}_{ij}], \quad \mathbf{r}_{ij} = \mathbf{a}_{ij}(\overline{\mathbf{x}}_{ij})
\end{aligned}
$$

Note that in the above formulation, the variables $\mathbf{t}_{ij}$ and $\mathbf{r}_{(i+1)k}$ for $k \in \mathcal{C}_{ij}$ are constants with respect to element j. The constraint sets are now separable, and depending on the consistency relaxation function $\mathbf{\pi}$, the subproblems may or may not be separable. If subproblems are separable, they can be solved in parallel. Otherwise, sequential computation of each subproblem is required.
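As a concrete illustration, a single element's relaxed subproblem can be written down and solved for a toy one-dimensional case. The quadratic penalty $\pi(d) = w\,d^2$, the local objective $f_{ij}(x) = x^2$, and the analysis model $a_{ij}(x) = x + 1$ below are illustrative assumptions standing in for a real engineering simulation; the target $t$ is held constant, as it is passed down by the parent.

```python
def analysis(x):
    """Toy analysis model a_ij: maps the local design x to a response r."""
    return x + 1.0

def subproblem_objective(x, t, w=10.0):
    """f_ij(x) + pi(t - r), with f_ij(x) = x**2 and r = analysis(x)."""
    r = analysis(x)
    return x**2 + w * (t - r)**2

def solve_subproblem(t, w=10.0, lr=0.01, iters=2000):
    """Minimize the penalized subproblem by plain gradient descent."""
    x = 0.0
    for _ in range(iters):
        # Analytic gradient of x**2 + w*(t - x - 1)**2 with respect to x.
        grad = 2.0 * x - 2.0 * w * (t - x - 1.0)
        x -= lr * grad
    return x

# With target t = 3, the closed-form minimizer is w*(t - 1)/(1 + w).
x_star = solve_subproblem(t=3.0)
print(round(x_star, 4))  # → 1.8182 (= 20/11 for w = 10)
```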

### Consistency Constraint Relaxation Functions



• Quadratic penalty functions (Kim et al., 2003; Michelena et al., 2003; Michalek and Papalambros, 2005): $\pi = w(\mathbf{t} - \mathbf{r})^{2}$. The relaxed objective is non-separable, and large weights $w$ cause ill-conditioning.
• Lagrangian relaxation (Lassiter et al., 2005): $\pi = \lambda^{\mathsf{T}}(\mathbf{t} - \mathbf{r})$. The objective is separable, enabling parallel computing, but duality gaps may exist and cause instability.
• Augmented Lagrangian relaxation (Tosserams et al., 2006; Kim et al., 2006; Li et al., 2007): $\pi = \lambda^{\mathsf{T}}(\mathbf{t} - \mathbf{r}) + w(\mathbf{t} - \mathbf{r})^{2}$. The quadratic term resolves the instability issues and the Lagrangian term overcomes the ill-conditioning.
• Augmented Lagrangian with alternating directions (Li et al., 2007): truncates the inner loop of the augmented Lagrangian method by solving the even levels together, followed by the odd levels, for a fixed number of iterations.
• Diagonal quadratic approximation (DQA) (Li et al., 2007): an effective technique to enable parallel computing; the non-separable term in the augmented Lagrangian approach is linearized using a first-order Taylor series expansion.
• Truncated diagonal quadratic approximation (TDQA) (Li et al., 2007): truncates the inner loop of DQA to a single iteration (or a few), greatly reducing computational cost.
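The first three relaxation functions in the list can be sketched element-wise for vector-valued $\mathbf{t}$ and $\mathbf{r}$. The sketch below is illustrative only; the weight and multiplier update rules, which run in an outer loop (e.g. a method-of-multipliers update), are omitted.

```python
def quadratic_penalty(t, r, w):
    # pi = sum_k w_k * (t_k - r_k)**2 : non-separable jointly in t and r,
    # and large w leads to ill-conditioning.
    return sum(wk * (tk - rk)**2 for wk, tk, rk in zip(w, t, r))

def lagrangian(t, r, lam):
    # pi = sum_k lam_k * (t_k - r_k) : separable, but duality gaps may exist.
    return sum(lk * (tk - rk) for lk, tk, rk in zip(lam, t, r))

def augmented_lagrangian(t, r, lam, w):
    # The Lagrangian term overcomes ill-conditioning; the quadratic term
    # resolves the instability caused by duality gaps.
    return lagrangian(t, r, lam) + quadratic_penalty(t, r, w)

d = augmented_lagrangian([1.0, 2.0], [0.5, 2.0], [1.0, 1.0], [4.0, 4.0])
print(d)  # → 1.5 (0.5 from the Lagrangian term, 1.0 from the quadratic term)
```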


## Solution Methods

It can be shown that convergence is guaranteed when the subproblems specified above are solved sequentially and iteratively, in any cyclic order. This algorithm is called block coordinate descent (BCD), and the convergence result applies for any general consistency constraint relaxation function $\mathbf{\pi}$ because the constraint sets are independent. In the literature, $\mathbf{\pi}$ has been implemented in three ways: as a quadratic penalty function, an ordinary Lagrangian function, or an augmented Lagrangian function. Neither the quadratic penalty nor the augmented Lagrangian approach yields separable subproblems, so block coordinate descent is required to achieve convergence, which limits efficiency. The ordinary Lagrangian approach does produce separable subproblems; however, the method is not robust when duality gaps exist (Li et al., 2007).
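The coordination loop can be illustrated on a toy two-level hierarchy with a quadratic penalty $\pi = w(t - r)^2$, where each block minimization happens to have a closed form. The parent objective $(t - 5)^2$, the child objective $x^2$, and the child analysis model $r = x + 1$ are illustrative assumptions. Note that with a finite weight $w$ the quadratic penalty leaves a residual inconsistency $t - r \neq 0$, which is one reason weight-update or Lagrangian schemes are used in practice.

```python
def child_step(t, w):
    """Child block: argmin_x x**2 + w*(t - (x + 1))**2, in closed form."""
    x = w * (t - 1.0) / (1.0 + w)
    return x, x + 1.0            # local design x and its response r

def parent_step(r, w):
    """Parent block: argmin_t (t - 5)**2 + w*(t - r)**2, in closed form."""
    return (5.0 + w * r) / (1.0 + w)

def coordinate(w=10.0, iters=100):
    """Block coordinate descent: alternate child and parent solves."""
    t = 0.0
    for _ in range(iters):
        x, r = child_step(t, w)   # solve the child block with t fixed
        t = parent_step(r, w)     # solve the parent block with r fixed
    return t, r, t - r            # the residual t - r shrinks as w grows

t, r, gap = coordinate()
print(round(t, 4), round(r, 4), round(gap, 4))
# → 3.0952 2.9048 0.1905  (t = 65/21, r = 61/21 at the fixed point)
```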

The best-performing methods reported use truncated inner loops to reduce computational time. The alternating direction method of multipliers approach (Tosserams et al., 2006) exploits the fact that in hierarchical systems, coupling exists only between elements at adjacent levels of the hierarchy. The truncated diagonal quadratic approximation approach (Li et al., 2007) linearizes the cross term of the augmented component, which is the only non-separable element in the augmented Lagrangian relaxation method, in order to make all subproblems independent and enable parallelization.

# References

• Allison, J., M. Kokkolaras, M. Zawislak, and P. Papalambros. “On the use of analytical target cascading and collaborative optimization for complex system design”. In Proceedings of the 6th World Congress on Structural and Multidisciplinary Optimization, Rio de Janeiro, Brazil, 2005.
• Bertsekas, D.P. Nonlinear Programming. Athena Scientific, Belmont, Massachusetts, 2nd edition, 2003.
• Bertsekas, D.P. and J.N. Tsitsiklis. Parallel and Distributed Computation. Prentice-Hall, 1989.
• Choudhary, R., A. Malkawi, and P.Y. Papalambros. “Analytic target cascading in simulation-based building design”. Automation in Construction, 14(4):551 – 568, 2005.
• Haftka, R.T. and L.T. Watson. “Multidisciplinary design optimization with quasiseparable subsystems”. Optimization and Engineering, 6(1):9 – 20, March 2005.
• Kim, H.M., N.F. Michelena, P.Y. Papalambros, and T. Jiang. “Target cascading in optimal system design”. Journal of Mechanical Design, 125(3):474 – 480, September 2003.
• Kim, H.M., M. Kokkolaras, L.S. Louca, G.J. Delagrammatikas, N.F. Michelena, Z.S. Filipi, P.Y. Papalambros, J.L. Stein, and D.N. Assanis. “Target cascading in vehicle redesign: a class VI truck study”. International Journal of Vehicle Design, 29(3):199 – 225, 2002.
• Kim, H.M., W. Chen, and M.M. Wiecek. “Lagrangian coordination for enhancing the convergence of analytical target cascading”. AIAA Journal. To appear.
• Lassiter, J.B., M.M. Wiecek, and K.R. Andrighetti. “Lagrangian coordination and analytical target cascading: Solving ATC-decomposed problems with lagrangian duality”. Optimization and Engineering, 6(3):361 – 381, September 2005.
• Li, Y., Z. Lu, and J.J. Michalek. “Diagonal quadratic approximation for parallelization of analytical target cascading”. In review, ASME Journal of Mechanical Design, 2006.
• Michalek, J.J., O. Ceryan, P.Y. Papalambros, and Y. Koren. “Balancing marketability and manufacturability in product line design decision-making”. ASME Journal of Mechanical Design, 2006.
• Michalek, J.J., F.M. Feinberg, and P.Y. Papalambros. “Linking marketing and engineering product design decisions via analytical target cascading”. Journal of Product Innovation Management, 22:42 – 62, 2005.
• Michalek, J.J., and P.Y. Papalambros. “An efficient weighting update method to achieve acceptable inconsistency deviation in analytical target cascading”. ASME Journal of Mechanical Design, 127(3):206 – 214, March 2005.
• Michalek, J.J. and P.Y. Papalambros. “Weights, norms, and notation in analytical target cascading”. Journal of Mechanical Design, 127(3):499 – 501, May 2005.
• Michelena, N., H. Park, and P.Y. Papalambros. “Convergence properties of analytical target cascading”. AIAA Journal, 41(5):897 – 905, 2003.
• Tosserams, S., L.F.P. Etman, and J.E. Rooda. “An augmented Lagrangian relaxation for analytical target cascading using the alternating directions method of multipliers”. Structural and Multidisciplinary Optimization, 31(3):176 – 189, March 2006.