Design structure matrix

[Figure: A sample DSM with 7 elements and 11 dependency marks.]

The design structure matrix (DSM; also referred to as dependency structure matrix, dependency structure method, dependency source matrix, problem solving matrix (PSM), incidence matrix, N2 matrix, interaction matrix, dependency map or design precedence matrix) is a simple, compact and visual representation of a system or project in the form of a square matrix. [1]

It is the equivalent of an adjacency matrix in graph theory, and is used in systems engineering and project management to model the structure of complex systems or processes, in order to perform system analysis, project planning and organization design. Don Steward coined the term "design structure matrix" in the 1960s, [2] using the matrices to solve mathematical systems of equations.

Overview

A design structure matrix lists all constituent subsystems/activities and the corresponding information exchange, interactions, and dependency patterns. For example, where the matrix elements represent activities, the matrix details what pieces of information are needed to start a particular activity, and shows where the information generated by that activity leads. In this way, one can quickly recognize which other activities are reliant upon information outputs generated by each activity.

The use of DSMs in both research and industrial practice increased greatly in the 1990s. DSMs have been applied in the building construction, real estate development, semiconductor, automotive, photographic, aerospace, telecom, small-scale manufacturing, factory equipment, and electronics industries, to name a few, as well as in many government agencies. [1]

The matrix representation has several strengths: it is compact, it scales to systems with many elements, and it makes patterns in the dependency data (such as feedback loops and clusters of coupled elements) visually apparent.

DSM analysis can also be used to manage the effects of a change. For example, if the specification for a component had to be changed, it would be possible to quickly identify all processes or activities which had been dependent on that specification, reducing the risk that work continues based on out-of-date information. [1]
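As a concrete illustration of this kind of change-impact analysis, the following minimal Python sketch traces all downstream dependents of a changed element in a binary DSM. The convention (a mark in row i, column j meaning element i receives input from element j, as described in the DSM structure section below) and the three-element example are assumptions for illustration, not a prescribed implementation.

```python
# A minimal sketch of change-impact analysis on a binary DSM, assuming the
# inputs-in-rows convention: dsm[i][j] == 1 means element i takes input
# from element j.
def affected_by(dsm, changed):
    """Return every element that directly or transitively uses `changed`'s output."""
    n = len(dsm)
    hit, frontier = set(), [changed]
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if dsm[i][j] and i not in hit:  # element i consumes j's output
                hit.add(i)
                frontier.append(i)
    return sorted(hit)

dsm = [
    [0, 0, 0],
    [1, 0, 0],  # element 1 uses the output of element 0
    [0, 1, 0],  # element 2 uses the output of element 1
]
print(affected_by(dsm, 0))  # [1, 2]: both must be rechecked if 0's spec changes
```

In effect this is just a reachability search on the dependency graph that the matrix encodes.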

DSM structure

A DSM is a square matrix representing the linkages between the elements of a system. The system elements are often labeled in the rows to the left of the matrix and/or in the columns above the matrix. These elements can represent, for example, product components, organization teams, or project activities.

The off-diagonal cells are used to indicate relationships between the elements. A marking of a cell indicates a directed link between two elements and can represent design relations or constraints between product components, communication between teams, information flow, or precedence relations between activities. In one convention, reading across a row reveals the inputs that the element in that row receives from other elements, and scanning a column reveals the outputs that the element in that column provides to other elements. For example, a marking in column A and row C indicates a link from A to C (output from A, input to C). Alternatively, the rows and columns may be switched (without a change of meaning). Both conventions may be found in the literature. [1]
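A minimal Python sketch of this convention, using four hypothetical elements A–D; the helper names and the marks themselves are invented for illustration:

```python
# Hypothetical 4-element system. A mark in row i, column j means
# "output from element j, input to element i" (link j -> i).
elements = ["A", "B", "C", "D"]

dsm = [
    [0, 0, 0, 0],  # A
    [1, 0, 0, 1],  # B receives inputs from A and D
    [1, 0, 0, 0],  # C receives input from A
    [0, 1, 0, 0],  # D receives input from B
]

def inputs_of(name):
    """Scan across a row: the inputs the row element receives."""
    i = elements.index(name)
    return [elements[j] for j in range(len(elements)) if dsm[i][j]]

def outputs_of(name):
    """Scan down a column: the outputs the column element provides."""
    j = elements.index(name)
    return [elements[i] for i in range(len(elements)) if dsm[i][j]]

print(inputs_of("B"))   # ['A', 'D']
print(outputs_of("A"))  # ['B', 'C']
```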

The cells along the diagonal are typically used to represent the system elements themselves. However, the diagonal cells can also be used to represent self-iterations (e.g., rework of code that did not pass its unit tests). Self-iterations are required when a matrix element represents a block of activities/subsystems that may be further detailed, allowing a hierarchical DSM structure. [4]

Two main categories of DSMs have been proposed: static and time-based. [5] Static DSMs represent systems in which all of the elements exist simultaneously, such as the components of a machine or the groups in an organization. A static DSM is equivalent to an N2 chart or an adjacency matrix. The markings in the off-diagonal cells are often largely symmetric about the diagonal (e.g., in an organizational DSM indicating interactions between teams, a mark from team C to team E is usually accompanied by a mark from team E to team C, indicating that the interaction is mutual). Static DSMs are usually analyzed with clustering algorithms.
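Because the interactions in a static DSM are typically mutual, near-symmetry can be checked directly. A small sketch (with an invented team DSM) that lists the element pairs whose marks are not mutual:

```python
# List the pairs (i, j) where the interaction is not mutual,
# i.e., where dsm[i][j] != dsm[j][i].
def asymmetric_pairs(dsm):
    n = len(dsm)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dsm[i][j] != dsm[j][i]]

team_dsm = [
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
]
print(asymmetric_pairs(team_dsm))  # []: all interactions here are mutual
```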

A time-based DSM is akin to a precedence diagram or the matrix representation of a directed graph. In time-based DSMs, the ordering of the rows and columns indicates a flow through time: earlier activities in a process appear in the upper left of the DSM and later activities appear in the lower right. Terms like "feedforward" and "feedback" become meaningful when referring to interfaces: with the convention used above (inputs listed in rows), a feedback mark is an above-diagonal mark, indicating that an earlier activity depends on the output of a later one. Time-based DSMs are typically analyzed using sequencing algorithms that reorder the matrix elements to minimize the number of feedback marks and bring them as close as possible to the diagonal. [1]
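The quantity that sequencing minimizes can be stated compactly. A sketch, assuming the inputs-in-rows convention above (so feedback marks sit above the diagonal):

```python
# dsm[i][j] == 1 means activity j feeds activity i, so after reordering by
# `order`, a mark above the diagonal (column later than row) is feedback.
def feedback_marks(dsm, order):
    """Count above-diagonal marks after reordering rows/columns by `order`."""
    n = len(order)
    return sum(
        dsm[order[i]][order[j]]
        for i in range(n)
        for j in range(i + 1, n)
    )

dsm = [
    [0, 1, 0],  # activity 0 depends on activity 1 (a feedback mark)
    [0, 0, 0],
    [1, 1, 0],
]
print(feedback_marks(dsm, [0, 1, 2]))  # 1 feedback mark
print(feedback_marks(dsm, [1, 0, 2]))  # 0: swapping activities 0 and 1 removes it
```

Swapping the first two activities eliminates the only feedback mark, which is exactly the kind of improvement a sequencing algorithm searches for.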

DSMs have been further categorized into component-based (architecture) DSMs and people-based (team-based or organization) DSMs, both considered static (representing coexisting elements), and activity-based (schedule) DSMs and parameter-based DSMs, which are time-based, as their ordering implies flow.

DSM marking

Initially, the off-diagonal cell markings indicated only the existence or non-existence of an interaction (link) between elements, using a symbol (or the figure '1'); such a matrix is called a binary DSM. Markings later developed to indicate quantitative relations (a numeric DSM, indicating the "strength" of the linkage) or statistical relations (a probability DSM, indicating, for example, the probability that new information will require reactivation of the linked activity). [citation needed]
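A brief sketch contrasting the three marking schemes for a single hypothetical link (a design activity feeding a test activity); the scales and values are illustrative assumptions:

```python
# Three ways the same cell (row "test", column "design") might be marked:
binary_mark = 1          # binary DSM: the link exists
numeric_mark = 0.8       # numeric DSM: strength of the linkage (assumed 0-1 scale)
probability_mark = 0.3   # probability DSM: chance new design information
                         # forces reactivation (rework) of the test activity
```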

DSM algorithms

DSM algorithms are used to reorder the matrix elements subject to some criteria. Static DSMs are usually analyzed with clustering algorithms, i.e., by reordering the matrix elements to group related elements together. Clustering results typically show groups (clusters) of tightly related elements, along with elements that are either not connected or connected to many other elements and therefore not part of any group. [1]
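As an illustration of the idea (not of any particular published algorithm), the following sketch groups elements into connected components of the symmetrized interaction graph; real clustering algorithms optimize richer criteria, such as cluster-size penalties and "bus" elements connected to many clusters:

```python
# Group elements into connected components, treating links as undirected.
# This is only a crude proxy for DSM clustering.
def clusters(dsm):
    n = len(dsm)
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, group = [start], []
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            group.append(i)
            for j in range(n):
                if dsm[i][j] or dsm[j][i]:  # any interaction in either direction
                    stack.append(j)
        groups.append(sorted(group))
    return groups

dsm = [
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
print(clusters(dsm))  # [[0, 1], [2, 3]]: two clusters of tightly related elements
```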

Time-based DSMs are typically analyzed using partitioning, tearing and sequencing algorithms. [1] [6]

Sequencing methods try to order the matrix elements such that no feedback marks remain. [1] In the case of coupled activities (activities that have cyclic links, e.g., activity A is linked to B, which is linked to C, which is linked to A), the result is a block-diagonal DSM (i.e., blocks or groups of coupled activities along the diagonal). Partitioning methods include path searching, the reachability matrix, triangulation algorithms, and the powers of the adjacency matrix.
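A sketch of partitioning via the reachability matrix mentioned above: activities are coupled when each can reach the other, and the resulting blocks are then ordered so that the remaining marks fall below the diagonal. The inputs-in-rows convention and the crude block-ordering heuristic are assumptions for illustration:

```python
# Partition a binary DSM into coupled blocks and order them.
# Assumes dsm[i][j] == 1 means activity j feeds activity i (edge j -> i).
def partition(dsm):
    n = len(dsm)
    # Warshall's algorithm: reach[i][j] is True if j can reach i.
    reach = [[bool(dsm[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    # Coupled blocks: sets of mutually reachable activities.
    blocks, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        block = [j for j in range(n) if reach[i][j] and reach[j][i]]
        assigned.update(block)
        blocks.append(block)
    # Order blocks by how much feeds into them from outside
    # (a crude topological key; real tools use proper level partitioning).
    def level(block):
        return sum(reach[i][j] for i in block for j in range(n) if j not in block)
    return sorted(blocks, key=level)

dsm = [
    [0, 0, 1, 0],  # activity 0 depends on 2
    [1, 0, 0, 0],  # activity 1 depends on 0
    [0, 1, 0, 0],  # activity 2 depends on 1 -> 0, 1, 2 are coupled
    [1, 0, 0, 0],  # activity 3 depends on 0
]
print(partition(dsm))  # [[0, 1, 2], [3]]: a coupled block, then activity 3
```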

Tearing is the removal of feedback marks (in a binary DSM) or the assignment of lower priority to them (in a numeric DSM). Tearing of a component-based DSM may imply modularization (the component design does not influence other components) or standardization (the component design neither influences nor is influenced by other components). [1] [7] After tearing, a partitioning algorithm is reapplied.
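A minimal sketch of tearing on a binary DSM, reusing the `partition` sketch above; choosing which mark to tear is the hard engineering decision and is simply given here:

```python
# Remove one feedback mark, then re-partition (using `partition` from the
# previous sketch). The torn copy leaves the original DSM unchanged.
def tear(dsm, row, col):
    torn = [r[:] for r in dsm]
    torn[row][col] = 0
    return torn

dsm = [
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]
# Tearing the mark at (0, 2) breaks the 0 -> 1 -> 2 -> 0 cycle:
print(partition(tear(dsm, 0, 2)))  # [[0], [1], [3], [2]]: no coupled block remains
```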

Minimizing feedback marks yields the best results for a binary DSM, but not always for a numeric or probability DSM. Sequencing algorithms (using optimization or genetic algorithms) typically try to minimize the number of feedback marks and to reorder coupled activities (those in a cyclic loop) so that the feedback marks lie close to the diagonal. Sometimes, however, the algorithm simply minimizes a criterion for which the minimum number of iterations is not the optimal result. [8]

Use and extensions

Interactions between various aspects (people, activities, and components) are modeled using additional (non-square) linkage matrices. The Multiple Domain Matrix (MDM) is an extension of the basic DSM structure. [9] An MDM includes several DSMs (ordered as block-diagonal matrices) that represent the relations between elements of the same domain, and corresponding Domain Mapping Matrices (DMMs) [10] that represent relations between elements of different domains.
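A sketch of the MDM block layout, assuming two hypothetical domains (two activities and two teams) and using NumPy only for the block assembly; the marks and team assignments are invented for illustration:

```python
import numpy as np

# Domain DSMs sit on the block diagonal; the DMM fills the off-diagonal blocks.
activity_dsm = np.array([[0, 1],
                         [0, 0]])
team_dsm = np.array([[0, 1],
                     [1, 0]])
dmm = np.array([[1, 0],   # activity 0 is handled by team 0
                [0, 1]])  # activity 1 is handled by team 1

mdm = np.block([
    [activity_dsm, dmm],   # activity rows: activity links, then team assignments
    [dmm.T, team_dsm],     # team rows: assignments transposed, then team links
])
print(mdm)
```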

The use of DSMs has been extended to visualize and optimize the otherwise invisible information flows and interactions associated with office work. This visualization allows the Lean Body of Knowledge to be applied to office and information-intensive flows. [11]

References

  1. S.D. Eppinger and T.R. Browning, Design Structure Matrix Methods and Applications, MIT Press, Cambridge, 2012.
  2. D.V. Steward, "The Design Structure System: A Method for Managing the Design of Complex Systems", IEEE Transactions on Engineering Management, 28(3):71-74, 1981.
  3. T.R. Browning, E. Fricke and H. Negele, "Key Concepts in Modeling Product Development Processes", Systems Engineering, 9(2):104-128, 2006.
  4. A. Karniel and Y. Reich, "Simulating Design Processes with Self-Iteration Activities Based on DSM Planning", Proceedings of the International Conference on Systems Engineering and Modeling (ICSEM'07), Haifa, 2007.
  5. T.R. Browning, "Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions", IEEE Transactions on Engineering Management, 48(3):292-306, 2001.
  6. A. Karniel and Y. Reich, "Design Process Planning Using DSM", in Managing the Dynamics of New Product Development Processes: A New Product Lifecycle Management Paradigm, Springer, 2011.
  7. Y. Sered and Y. Reich, "Standardization and Modularization Driven by Minimizing Overall Process Effort", Computer-Aided Design, 38(5):405-416, 2006.
  8. T.R. Browning, "Modeling Impacts of Process Architecture on Cost and Schedule Risk in Product Development", IEEE Transactions on Engineering Management, 49(4):428-442, 2002.
  9. M. Maurer, Structural Awareness in Complex Product Design, Dissertation, Technische Universität München, Germany, 2007.
  10. M. Danilovic and T.R. Browning, "Managing Complex Product Development Projects with Design Structure Matrices and Domain Mapping Matrices", International Journal of Project Management, 25(3):300-314, 2007.
  11. Far From the Factory: Lean for the Information Age, Productivity Press, New York, 2010, pp. 159-180. ISBN 978-1420094565.

Further reading