Ordinal priority approach (OPA) is a multiple-criteria decision analysis method that aids in solving group decision-making problems based on preference relations.
Various methods have been proposed to solve multi-criteria decision-making problems. [1] Methods such as the analytic hierarchy process and the analytic network process are based on the pairwise comparison matrix. [2] The advantages and disadvantages of the pairwise comparison matrix were discussed by Munier and Hontoria in their book. [3] In recent years, the OPA method was proposed to solve multi-criteria decision-making problems using ordinal data instead of a pairwise comparison matrix. [4] The OPA method is a major part of Amin Mahmoudi's PhD thesis at Southeast University, China. [4]
This method uses a linear programming approach to compute the weights of experts, criteria, and alternatives simultaneously. [5] The main reason for using ordinal data in the OPA method is that ordinal judgments are easier to obtain, and more reliable, than the exact ratios required by many group decision-making methods involving human experts. [6]
In real-world situations, an expert might not have enough knowledge about a particular alternative or criterion. In this case, the input data of the problem are incomplete, and this must be reflected in the OPA linear program. To handle incomplete input data, the constraints related to the unranked criteria or alternatives are removed from the OPA linear-programming model. [7]
Various data normalization methods have been employed in multi-criteria decision-making in recent years. Palczewski and Sałabun showed that the choice of normalization method can change the final ranking produced by a multi-criteria decision-making method. [8] Javed and colleagues showed that a multiple-criteria decision-making problem can be solved while avoiding data normalization altogether. [9] Because preference relations need no normalization, the OPA method does not require data normalization. [10]
The OPA model is a linear programming model, which can be solved using a simplex algorithm. The steps of this method are as follows: [11]
Step 1: Identifying the experts and determining their preference based on working experience, educational qualifications, etc.
Step 2: Identifying the criteria and determining the preference of the criteria by each expert.
Step 3: Identifying the alternatives and determining the preference of the alternatives in each criterion by each expert.
Step 4: Constructing the following linear programming model and solving it with appropriate optimization software such as LINGO, GAMS, or MATLAB.
$$
\begin{aligned}
\max \quad & Z \\
\text{s.t.} \quad & Z \le i \left( j \left( r \left( W_{ijk}^{r} - W_{ijk}^{r+1} \right) \right) \right) \qquad \forall i, j, k \text{ and } r \\
& Z \le i \, j \, m \, W_{ijk}^{m} \qquad \forall i, j \text{ and } k \\
& \sum_{i=1}^{p} \sum_{j=1}^{n} \sum_{k=1}^{m} W_{ijk} = 1 \\
& W_{ijk} \ge 0 \qquad \forall i, j \text{ and } k
\end{aligned}
$$

where $Z$ is unrestricted in sign. In the above model, $i$ represents the rank of expert $i$, $j$ represents the rank of criterion $j$, $r$ represents the rank of alternative $k$, and $W_{ijk}^{r}$ represents the weight of alternative $k$ in criterion $j$ by expert $i$ at rank $r$. After solving the OPA linear programming model, the weight of each alternative is calculated by the following equation:

$$W_k = \sum_{i=1}^{p} \sum_{j=1}^{n} W_{ijk} \qquad \forall k$$

The weight of each criterion is calculated by the following equation:

$$W_j = \sum_{i=1}^{p} \sum_{k=1}^{m} W_{ijk} \qquad \forall j$$

And the weight of each expert is calculated by the following equation:

$$W_i = \sum_{j=1}^{n} \sum_{k=1}^{m} W_{ijk} \qquad \forall i$$

Here $p$, $n$, and $m$ denote the number of experts, criteria, and alternatives, respectively.
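The linear program above can be assembled mechanically from the ordinal inputs. The sketch below, written in Python around scipy.optimize.linprog, is one possible implementation under stated assumptions: the function name opa_weights, the input layout, and the HiGHS solver choice are illustrative, not prescribed by the method.

```python
# A minimal sketch of the OPA linear program, assuming SciPy is available.
# The function name, argument layout, and solver choice are illustrative.
from itertools import product

import numpy as np
from scipy.optimize import linprog


def opa_weights(expert_ranks, criterion_ranks, alternative_ranks):
    """Solve the OPA model and return the weight tensor W[i][j][k].

    expert_ranks[i]            : rank of expert i (1 = most important)
    criterion_ranks[i][j]      : rank of criterion j according to expert i
    alternative_ranks[i][j][k] : rank of alternative k in criterion j by expert i
    """
    p, n = len(expert_ranks), len(criterion_ranks[0])
    m = len(alternative_ranks[0][0])
    n_vars = 1 + p * n * m                      # variable 0 is Z, the rest are W_ijk

    def w(i, j, k):                             # position of W_ijk in the variable vector
        return 1 + (i * n + j) * m + k

    A_ub, b_ub = [], []
    for i, j in product(range(p), range(n)):
        mult = expert_ranks[i] * criterion_ranks[i][j]
        # alternatives sorted by the rank expert i gave them in criterion j
        order = sorted(range(m), key=lambda k: alternative_ranks[i][j][k])
        for r in range(m - 1):                  # Z <= i*j*r*(W^r - W^{r+1})
            row = np.zeros(n_vars)
            row[0] = 1.0
            row[w(i, j, order[r])] -= mult * (r + 1)
            row[w(i, j, order[r + 1])] += mult * (r + 1)
            A_ub.append(row)
            b_ub.append(0.0)
        row = np.zeros(n_vars)                  # Z <= i*j*m*W^m (last-ranked alternative)
        row[0] = 1.0
        row[w(i, j, order[-1])] -= mult * m
        A_ub.append(row)
        b_ub.append(0.0)

    A_eq = [np.concatenate(([0.0], np.ones(p * n * m)))]   # sum of all W_ijk equals 1
    cost = np.zeros(n_vars)
    cost[0] = -1.0                                          # maximize Z by minimizing -Z
    bounds = [(None, None)] + [(0, None)] * (p * n * m)     # Z free in sign, W_ijk >= 0

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return res.x[1:].reshape(p, n, m)
```

Summing the returned tensor over the appropriate axes yields the alternative, criterion, and expert weights defined by the three equations above.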
Suppose that we are investigating the purchase of a house. There are two experts in this decision problem, and two criteria, cost (c) and construction quality (q), for evaluating the houses. Three houses (h1, h2, h3) are under consideration. The first expert (x) has three years of working experience and the second expert (y) has two years of working experience.
Step 1: The first expert (x) has more experience than expert (y), hence x > y.
Step 2: The criteria and their preference are summarized in the following table:
Criteria | Rank by expert (x) | Rank by expert (y)
---|---|---
c | 1 | 2
q | 2 | 1
Step 3: The alternatives and their preference are summarized in the following table:
Alternatives | Expert (x): c | Expert (x): q | Expert (y): c | Expert (y): q
---|---|---|---|---
h1 | 1 | 2 | 1 | 3
h2 | 3 | 1 | 2 | 1
h3 | 2 | 3 | 3 | 2
Step 4: The OPA linear programming model is formed based on the input data as follows:
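Written out from the ranking tables, and instantiated from the general model above (a reconstruction, since the instantiated model is not reproduced here; the subscripts name the expert, criterion, and house), the linear program for this example is:

$$
\begin{aligned}
\max \quad & Z \\
\text{s.t.} \quad
& Z \le 1\cdot 1\cdot 1\,(W_{x,c,h_1}-W_{x,c,h_3}), \quad
  Z \le 1\cdot 1\cdot 2\,(W_{x,c,h_3}-W_{x,c,h_2}), \quad
  Z \le 1\cdot 1\cdot 3\,W_{x,c,h_2} \\
& Z \le 1\cdot 2\cdot 1\,(W_{x,q,h_2}-W_{x,q,h_1}), \quad
  Z \le 1\cdot 2\cdot 2\,(W_{x,q,h_1}-W_{x,q,h_3}), \quad
  Z \le 1\cdot 2\cdot 3\,W_{x,q,h_3} \\
& Z \le 2\cdot 2\cdot 1\,(W_{y,c,h_1}-W_{y,c,h_2}), \quad
  Z \le 2\cdot 2\cdot 2\,(W_{y,c,h_2}-W_{y,c,h_3}), \quad
  Z \le 2\cdot 2\cdot 3\,W_{y,c,h_3} \\
& Z \le 2\cdot 1\cdot 1\,(W_{y,q,h_2}-W_{y,q,h_3}), \quad
  Z \le 2\cdot 1\cdot 2\,(W_{y,q,h_3}-W_{y,q,h_1}), \quad
  Z \le 2\cdot 1\cdot 3\,W_{y,q,h_1} \\
& \sum_{i\in\{x,y\}}\sum_{j\in\{c,q\}}\sum_{k\in\{h_1,h_2,h_3\}} W_{ijk} = 1, \qquad W_{ijk}\ge 0
\end{aligned}
$$

Each multiplier is the product of the expert's rank, the rank that expert assigned to the criterion, and the rank position of the alternative, and the weight differences follow each expert's ordering of the houses under each criterion.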
After solving the above model using optimization software, the weights of the experts, criteria, and alternatives are obtained.
Therefore, House 1 (h1) is the best alternative. Moreover, the criterion cost (c) turns out to be more important than construction quality (q), and, based on the experts' weights, expert (x) has a higher impact on the final selection than expert (y).
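For illustration, the example data can be passed to the opa_weights sketch given earlier; the encoding below (axis order and variable names) is an assumed convention rather than part of the original example.

```python
# Encoding of the house-buying example (experts x, y; criteria c, q; houses h1-h3),
# taken from the ranking tables above.  Axis order: expert, criterion, alternative.
W = opa_weights(
    expert_ranks=[1, 2],                         # x is ranked first, y second
    criterion_ranks=[[1, 2],                     # expert x: c = 1, q = 2
                     [2, 1]],                    # expert y: c = 2, q = 1
    alternative_ranks=[[[1, 3, 2], [2, 1, 3]],   # expert x: ranks of h1, h2, h3 in c, then q
                       [[1, 2, 3], [3, 1, 2]]],  # expert y: ranks of h1, h2, h3 in c, then q
)
print("alternative weights:", W.sum(axis=(0, 1)))   # h1 should receive the largest weight
print("criterion weights:  ", W.sum(axis=(0, 2)))   # c should outweigh q
print("expert weights:     ", W.sum(axis=(1, 2)))   # x should outweigh y
```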
The applications of the OPA method in various fields of study are summarized as follows:
Agriculture, manufacturing, services
Construction industry
Energy and environment
Healthcare
Information technology
Transportation
Several extensions of the OPA method have been proposed in the literature, and non-profit software tools are available for solving MCDM problems with the OPA method.