A measurement result can be affected by two types of uncertainty. [1] The first is random uncertainty, which is due to noise in the process and in the measurement. The second is systematic uncertainty, which may be introduced by the measuring instrument. Systematic errors, once detected, can easily be compensated, since they usually remain constant throughout the measurement process as long as the measuring instrument and the measurement procedure are unchanged. While using the instrument, however, it cannot be accurately known whether a systematic error is present and, if it is, how large it is. Hence, systematic uncertainty can be regarded as a contribution of a fuzzy nature.
This systematic error can be approximately modeled based on our past data about the measuring instrument and the process.
Statistical methods can be used to calculate the total uncertainty from both the systematic and the random contributions in a measurement. [2] [3] [4] However, their computational complexity is very high, which makes them undesirable in practice.
L. A. Zadeh introduced the concepts of fuzzy variables and fuzzy sets. [5] [6] Fuzzy variables are based on possibility theory and are therefore possibility distributions. This makes them suitable for handling any type of uncertainty, i.e., both the systematic and the random contributions to the total uncertainty. [7] [8] [9]
A random-fuzzy variable (RFV) is a type-2 fuzzy variable, [10] defined using the mathematical theory of possibility, [5] [6] that represents the entire information associated with a measurement result. It has two membership functions: an internal possibility distribution and an external possibility distribution. The internal distribution represents the uncertainty contributions due to systematic effects, while the bounds of the RFV are due to the random contributions. The external distribution gives the uncertainty bounds arising from all contributions.
A random-fuzzy variable (RFV) is defined as a type-2 fuzzy variable that satisfies the following conditions: [11]

1. its internal membership function is a possibility distribution;
2. its external membership function is a possibility distribution;
3. both membership functions take the unitary value of possibility over the same interval.
An RFV can be seen in the figure: the external membership function is the distribution in blue and the internal membership function is the distribution in red. Both membership functions are possibility distributions, and both take the unitary value of possibility only in the rectangular part of the RFV, so all three conditions are satisfied.
If there are only systematic errors in the measurement, then the RFV simply becomes a fuzzy variable which consists of just the internal membership function. Similarly, if there is no systematic error, then the RFV becomes a fuzzy variable with just the random contributions and therefore, is just the possibility distribution of the random contributions.
A random-fuzzy variable can be constructed from an internal possibility distribution (r_internal) and a random possibility distribution (r_random).

r_random is the possibility distribution of the random contributions to the uncertainty. Any measuring instrument or process suffers from random error contributions due to intrinsic noise or other effects.
These contributions are completely random in nature, and, when several of them are combined, their distribution tends to a normal probability distribution according to the central limit theorem. [12]

However, random contributions can also follow other probability distributions, such as the uniform distribution, the gamma distribution and so on.
The probability distribution can be modeled from the measurement data. This probability distribution can then be converted into an equivalent possibility distribution using the maximally specific probability–possibility transformation. [13]
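For a symmetric unimodal probability distribution, the maximally specific probability–possibility transformation has the simple closed form π(x) = 1 − |2F(x) − 1|, where F is the cumulative distribution function. Below is a minimal Python sketch for a normal distribution; the function names are illustrative, not taken from a specific library.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal distribution, computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def possibility(x, mu=0.0, sigma=1.0):
    """Maximally specific probability-possibility transform for a
    symmetric unimodal pdf: pi(x) = 1 - |2F(x) - 1|."""
    return 1.0 - abs(2.0 * normal_cdf(x, mu, sigma) - 1.0)
```

The possibility equals 1 at the mode and decreases monotonically on either side; for the standard normal distribution, possibility(1.96) ≈ 0.05, matching the 95% confidence interval.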
Some common probability distributions and the corresponding possibility distributions can be seen in the figures.
r_internal is the internal distribution of the RFV, i.e., the possibility distribution of the systematic contribution to the total uncertainty. This distribution can be built from the information available about the measuring instrument and the process.
The widest possible distribution is the uniform, or rectangular, possibility distribution, in which every value in the specified interval is equally possible. It represents the state of total ignorance according to the theory of evidence, [14] i.e., a scenario with a maximum lack of information.
This distribution is used for the systematic error when we have absolutely no idea about the systematic error except that it belongs to a particular interval of values. This is quite common in measurements.
In certain cases, however, it may be known that some values have a higher or lower degree of belief than others. In this case, depending on the degrees of belief for the values, an appropriate possibility distribution can be constructed.
After modeling the random and the internal possibility distributions, the external membership function r_external of the RFV can be constructed as [15]

r_external(x) = sup over x′ of T_min[ r_internal(x′), r_random(x − x′ + x*) ]

where x* is the mode of r_random, i.e., the peak of its membership function, and T_min is the minimum triangular norm. [16]
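A minimal numeric sketch of this construction, assuming the sup-T_min composition form r_external(x) = sup over x′ of min(r_internal(x′), r_random(x − x′ + x*)), with both distributions sampled on a common grid. The grid, function names and sample distributions below are illustrative.

```python
import numpy as np

def external_membership(x, r_random, r_internal, x_star):
    """Compose sampled possibility distributions r_random and r_internal
    (both given on the grid x) into the external membership function,
    using the minimum t-norm; x_star is the mode of r_random."""
    r_ext = np.zeros_like(x)
    for i, xi in enumerate(x):
        # r_random evaluated at (xi - x' + x_star) for every grid point x'
        shifted = np.interp(xi - x + x_star, x, r_random, left=0.0, right=0.0)
        r_ext[i] = np.max(np.minimum(r_internal, shifted))
    return r_ext
```

With a triangular r_random centred on its mode and a rectangular r_internal, the result is a trapezoidal external membership function whose flat top coincides with the rectangle.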
An RFV can also be built from the internal and random distributions by considering the α-cuts of the two possibility distributions (PDs).
An α-cut of a fuzzy variable F can be defined as [17] [18]

F^α = { x | μ_F(x) ≥ α },  for 0 < α ≤ 1

So, essentially, an α-cut is the set of values for which the membership function of the fuzzy variable is greater than or equal to α. This gives the upper and lower bounds of the fuzzy variable F for each α-cut.
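For a membership function sampled at discrete points, the α-cut bounds can be read off directly. A minimal sketch (the function and variable names are illustrative):

```python
def alpha_cut(x, mu, alpha):
    """Lower and upper bounds of the alpha-cut of a fuzzy variable whose
    membership values mu are sampled at the points x."""
    pts = [xi for xi, m in zip(x, mu) if m >= alpha]
    return min(pts), max(pts)
```

For a triangular membership function sampled as x = [0, 1, 2, 3, 4], mu = [0, 0.5, 1, 0.5, 0], alpha_cut(x, mu, 0.5) returns (1, 3), and alpha_cut(x, mu, 1.0) returns (2, 2).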
The α-cut of an RFV, however, has four specific bounds and is given by [x_1^α, x_2^α, x_3^α, x_4^α]. [11] x_1^α and x_4^α are the lower and upper bounds, respectively, of the external membership function (r_external), which is a fuzzy variable on its own, while x_2^α and x_3^α are the lower and upper bounds, respectively, of the internal membership function (r_internal), which is also a fuzzy variable on its own.
To build the RFV, consider the α-cuts of the two PDs, r_random and r_internal, at the same value of α. This gives the lower and upper bounds of the two α-cuts: let them be [a^α, b^α] for the random distribution and [c^α, d^α] for the internal distribution. [a^α, b^α] can be further divided into the two sub-intervals [a^α, x*] and [x*, b^α], where x* is the mode of the fuzzy variable. Then, the α-cut of the RFV at the same value of α can be defined as [11]

x_1^α = c^α − (x* − a^α),  x_2^α = c^α,  x_3^α = d^α,  x_4^α = d^α + (b^α − x*)
Using the above equations, the α-cuts are calculated for every value of α, which gives the final plot of the RFV.
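The construction above can be sketched as follows. The bound formulas attach the half-widths of the random α-cut on either side of its mode x* to the bounds of the internal α-cut; this is an assumption consistent with the verbal description, and the exact definitions are given in [11].

```python
def rfv_alpha_cut(rand_cut, int_cut, x_star):
    """Four bounds of the RFV alpha-cut, built from the alpha-cuts of
    the random and internal possibility distributions at the same alpha."""
    a, b = rand_cut            # alpha-cut of r_random
    c, d = int_cut             # alpha-cut of r_internal
    x1 = c - (x_star - a)      # lower bound of the external MF
    x4 = d + (b - x_star)      # upper bound of the external MF
    return x1, c, d, x4        # (x1, x2, x3, x4)
```

For example, a random α-cut (−1, 2) with mode 0 combined with an internal α-cut (4, 6) gives the RFV α-cut (3, 4, 6, 8).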
A random-fuzzy variable gives a complete picture of the random and systematic contributions to the total uncertainty from its α-cuts at any confidence level, since the confidence level is simply 1 − α. [17] [18]
An example of the construction of the corresponding external membership function (r_external) and of the RFV from a random PD and an internal PD can be seen in the following figure.