TOPSIS

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method. It was originally developed by Ching-Lai Hwang and Yoon in 1981, [1] with further developments by Yoon in 1987 [2] and by Hwang, Lai and Liu in 1993. [3] TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution (PIS) and the longest geometric distance from the negative ideal solution (NIS). A dedicated book on TOPSIS in the fuzzy context was published in 2021. [4]

Description

TOPSIS is a method of compensatory aggregation that compares a set of alternatives, normalising the scores for each criterion and calculating the geometric distance between each alternative and the ideal alternative, which has the best score on each criterion. The weights of the criteria in the TOPSIS method can be calculated using the Ordinal Priority Approach, the Analytic Hierarchy Process, or similar methods. An assumption of TOPSIS is that the criteria are monotonically increasing or decreasing. Normalisation is usually required, as the parameters or criteria are often of incongruous dimensions in multi-criteria problems. [5] [6] Compensatory methods such as TOPSIS allow trade-offs between criteria, where a poor result in one criterion can be negated by a good result in another criterion. This provides a more realistic form of modelling than non-compensatory methods, which include or exclude alternative solutions based on hard cut-offs. [7] An example of an application to nuclear power plants is provided in [8].
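As a toy illustration of the compensatory idea (made-up scores and weights, and a plain weighted sum rather than TOPSIS itself), an alternative that is weak on one criterion can still come out ahead if it is strong enough on another:

```python
# Scores already normalised to [0, 1]; equal weights for the two criteria
weights = [0.5, 0.5]            # cost criterion (inverted so higher is better), quality criterion
a = [0.3, 0.9]                  # poor on cost, very good on quality
b = [0.6, 0.5]                  # middling on both

score = lambda alt: sum(w * s for w, s in zip(weights, alt))
print(score(a), score(b))       # 0.6 vs 0.55: a's strong quality offsets its weak cost
```

A non-compensatory rule with a hard cut-off (for example, rejecting anything scoring below 0.4 on cost) would exclude alternative a outright.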

TOPSIS method

The TOPSIS process is carried out as follows:

Step 1
Create an evaluation matrix consisting of $m$ alternatives and $n$ criteria, with the intersection of each alternative and criterion given as $x_{ij}$; we therefore have a matrix $(x_{ij})_{m \times n}$.
Step 2
The matrix $(x_{ij})_{m \times n}$ is then normalised to form the matrix $R = (r_{ij})_{m \times n}$, using the normalisation method
$r_{ij} = \dfrac{x_{ij}}{\sqrt{\sum_{k=1}^{m} x_{kj}^{2}}}, \qquad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n.$
Step 3
Calculate the weighted normalised decision matrix
$t_{ij} = r_{ij} \cdot w_{j}, \qquad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n,$
where $w_{j} = W_{j} \big/ \sum_{k=1}^{n} W_{k}$, $j = 1, 2, \ldots, n$, so that $\sum_{j=1}^{n} w_{j} = 1$, and $W_{j}$ is the original weight given to the indicator $j$.
Step 4
Determine the worst alternative $(A_{w})$ and the best alternative $(A_{b})$:
$A_{w} = \{ \langle \max(t_{ij} \mid i = 1, \ldots, m) \mid j \in J_{-} \rangle, \langle \min(t_{ij} \mid i = 1, \ldots, m) \mid j \in J_{+} \rangle \} \equiv \{ t_{wj} \mid j = 1, \ldots, n \},$
$A_{b} = \{ \langle \min(t_{ij} \mid i = 1, \ldots, m) \mid j \in J_{-} \rangle, \langle \max(t_{ij} \mid i = 1, \ldots, m) \mid j \in J_{+} \rangle \} \equiv \{ t_{bj} \mid j = 1, \ldots, n \},$
where $J_{+}$ is the set of indices $j$ associated with the criteria having a positive impact, and $J_{-}$ is the set of indices $j$ associated with the criteria having a negative impact.
Step 5
Calculate the L2-distance between the target alternative $i$ and the worst condition $A_{w}$,
$d_{iw} = \sqrt{\sum_{j=1}^{n} (t_{ij} - t_{wj})^{2}}, \qquad i = 1, 2, \ldots, m,$
and the distance between the alternative $i$ and the best condition $A_{b}$,
$d_{ib} = \sqrt{\sum_{j=1}^{n} (t_{ij} - t_{bj})^{2}}, \qquad i = 1, 2, \ldots, m,$
where $d_{iw}$ and $d_{ib}$ are L2-norm distances from the target alternative $i$ to the worst and best conditions, respectively.
Step 6
Calculate the similarity to the worst condition:
$s_{iw} = \dfrac{d_{iw}}{d_{iw} + d_{ib}}, \qquad 0 \le s_{iw} \le 1, \quad i = 1, 2, \ldots, m.$
$s_{iw} = 1$ if and only if the alternative solution has the best condition; and
$s_{iw} = 0$ if and only if the alternative solution has the worst condition.
Step 7
Rank the alternatives according to $s_{iw}$ $(i = 1, 2, \ldots, m)$; a worked sketch of the full procedure in Python is given below.
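The following sketch shows Steps 2–7 in Python with NumPy. It is a minimal illustration, not code from the cited references: the decision matrix, weights, and criteria directions are made-up values, and vector normalisation is used as in Step 2.

```python
import numpy as np

def topsis(x, weights, benefit):
    """Return the TOPSIS closeness scores s_iw (larger is better).

    x       : (m, n) decision matrix, one row per alternative, one column per criterion
    weights : (n,) criterion weights W_j (rescaled internally to sum to 1)
    benefit : (n,) booleans, True for positive-impact criteria (J+), False for negative (J-)
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    # Step 2: vector normalisation of each column
    r = x / np.sqrt((x ** 2).sum(axis=0))

    # Step 3: weighted normalised decision matrix
    t = r * (w / w.sum())

    # Step 4: best (A_b) and worst (A_w) conditions per criterion
    a_best = np.where(benefit, t.max(axis=0), t.min(axis=0))
    a_worst = np.where(benefit, t.min(axis=0), t.max(axis=0))

    # Step 5: L2-distances to the best and worst conditions
    d_best = np.sqrt(((t - a_best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((t - a_worst) ** 2).sum(axis=1))

    # Step 6: similarity to the worst condition
    return d_worst / (d_worst + d_best)

# Illustrative data: 4 alternatives rated on price, quality, delivery time
scores = [[250, 16, 12],
          [200, 16, 8],
          [300, 32, 16],
          [275, 32, 8]]
weights = [0.35, 0.25, 0.40]
benefit = [False, True, False]      # price and delivery time are cost criteria

s = topsis(scores, weights, benefit)
print(np.argsort(s)[::-1])          # Step 7: alternative indices, best first
```

Whatever method is used to obtain the weights (Ordinal Priority Approach, Analytic Hierarchy Process, or direct elicitation), only the weights argument changes; the distance calculations are unaffected.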

Normalisation

Two methods of normalisation that have been used to deal with incongruous criteria dimensions are linear normalisation and vector normalisation.

Vector normalisation was incorporated with the original development of the TOPSIS method [1] and is the method used in Step 2 above:

$r_{ij} = \dfrac{x_{ij}}{\sqrt{\sum_{k=1}^{m} x_{kj}^{2}}}, \qquad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n.$

Linear normalisation instead divides each score by a fixed reference value for its criterion, such as the criterion's maximum or its range. In using vector normalisation, the non-linear relation between single-dimension scores and their normalised ratios should produce smoother trade-offs. [9]
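As a short illustrative comparison (made-up scores; the max-based form is just one common choice of linear normalisation):

```python
import numpy as np

x = np.array([[250., 16., 12.],
              [200., 16.,  8.],
              [300., 32., 16.],
              [275., 32.,  8.]])

# Vector normalisation (Step 2 above): divide each column by its Euclidean norm
r_vector = x / np.sqrt((x ** 2).sum(axis=0))

# Linear (max) normalisation: divide each column by its largest score
r_linear = x / x.max(axis=0)

print(r_vector.round(3))
print(r_linear.round(3))
```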

References

1. Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag.
2. Yoon, K. (1987). "A reconciliation among discrete compromise situations". Journal of the Operational Research Society. 38 (3): 277–286. doi:10.1057/jors.1987.44. S2CID 121379674.
3. Hwang, C.L.; Lai, Y.J.; Liu, T.Y. (1993). "A new approach for multiple objective decision making". Computers & Operations Research. 20 (8): 889–899. doi:10.1016/0305-0548(93)90109-v.
4. El Alaoui, M. (2021). Fuzzy TOPSIS: Logic, Approaches, and Case Studies. New York: CRC Press. doi:10.1201/9781003168416. ISBN 978-0-367-76748-8. S2CID 233525185.
5. Yoon, K.P.; Hwang, C. (1995). Multiple Attribute Decision Making: An Introduction. SAGE Publications.
6. Zavadskas, E.K.; Zakarevicius, A.; Antucheviciene, J. (2006). "Evaluation of Ranking Accuracy in Multi-Criteria Decisions". Informatica. 17 (4): 601–618. doi:10.15388/Informatica.2006.158.
7. Greene, R.; Devillers, R.; Luther, J.E.; Eddy, B.G. (2011). "GIS-based multi-criteria analysis". Geography Compass. 5 (6): 412–432. doi:10.1111/j.1749-8198.2011.00431.x.
8. Locatelli, Giorgio; Mancini, Mauro (2012). "A framework for the selection of the right nuclear power plant". International Journal of Production Research. 50 (17): 4753–4766. doi:10.1080/00207543.2012.657965. ISSN 0020-7543. S2CID 28137959.
9. Huang, I.B.; Keisler, J.; Linkov, I. (2011). "Multi-criteria decision analysis in environmental science: ten years of applications and trends". Science of the Total Environment. 409 (19): 3578–3594. Bibcode:2011ScTEn.409.3578H. doi:10.1016/j.scitotenv.2011.06.022. PMID 21764422.