EigenTrust

The EigenTrust algorithm is a reputation management algorithm for peer-to-peer networks, developed by Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina. [1] The algorithm assigns each peer in the network a unique global trust value based on the peer's history of uploads, and thus aims to reduce the number of inauthentic files in a P2P network. It has been cited by approximately 3,853 other articles according to Google Scholar. [2]

Overview

Peer-to-peer systems available today (such as Gnutella) are open, often anonymous, and lack accountability. A user with malicious intent can therefore introduce resources into the peer-to-peer network that are inauthentic, corrupted, or malicious (malware), which reflects poorly on the credibility of current peer-to-peer systems. A research team from Stanford University proposed a reputation management system in which each peer has a unique global trust value based on its history of uploads. Any peer requesting resources can consult the trust value of a peer and avoid downloading files from untrusted peers.

Algorithm

The EigenTrust algorithm is based on the notion of transitive trust: if a peer i trusts a peer j, it will also tend to trust the peers trusted by j. Each peer i calculates a local trust value s_ij for every peer j that has provided it with authentic or fake downloads, based on the satisfactory and unsatisfactory transactions it has had with that peer:

$s_{ij} = \operatorname{sat}(i, j) - \operatorname{unsat}(i, j)$

where sat(i, j) refers to the number of satisfactory responses that peer i has received from peer j, and unsat(i, j) refers to the number of unsatisfactory responses that peer i has received from peer j. For example, a peer that has had 10 satisfactory and 3 unsatisfactory transactions with peer j assigns it the local trust value s_ij = 7.

The local trust value is normalized to prevent malicious peers from assigning arbitrarily high local trust values to colluding malicious peers and arbitrarily low local trust values to good peers. The normalized local trust value $c_{ij}$ is then

$c_{ij} = \frac{\max(s_{ij}, 0)}{\sum_{j} \max(s_{ij}, 0)}$
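These two steps can be sketched in a few lines of Python. This is a minimal illustration, not the paper's reference implementation: the function names and the NumPy matrix representation are assumptions, and the uniform fallback for peers with no positive local trust values stands in for the paper's distribution over pre-trusted peers.

```python
import numpy as np

def local_trust(sat: np.ndarray, unsat: np.ndarray) -> np.ndarray:
    """Raw local trust values: s_ij = sat(i, j) - unsat(i, j)."""
    return sat - unsat

def normalize_local_trust(s: np.ndarray) -> np.ndarray:
    """Normalized local trust: c_ij = max(s_ij, 0) / sum_j max(s_ij, 0)."""
    clipped = np.maximum(s, 0.0)          # ignore negative local trust
    row_sums = clipped.sum(axis=1, keepdims=True)
    m = s.shape[0]
    # If a row sums to zero, fall back to a uniform distribution
    # (the paper uses a distribution over pre-trusted peers here).
    safe = np.where(row_sums > 0, row_sums, 1.0)
    return np.where(row_sums > 0, clipped / safe, 1.0 / m)
```

If sat and unsat are m × m matrices of transaction counts, normalize_local_trust(local_trust(sat, unsat)) yields the matrix C of normalized local trust values used below.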

The local trust values are aggregated, either at a central location or in a distributed manner, to create a trust vector for the whole network. Based on the idea of transitive trust, a peer i asks the other peers it knows to report the trust value of a peer k and weights their responses by the trust it places in them:

$t_{ik} = \sum_{j} c_{ij} c_{jk}$

where $t_{ik}$ represents the trust that peer i places in peer k based on the reports of its acquaintances.

If we assume that a peer knew the $c_{ij}$ values for the whole network in the form of a matrix $C = [c_{ij}]$, then the trust vector $\vec{t_i}$ that collects the trust values $t_{ik}$ is given by

$\vec{t_i} = C^T \vec{c_i}$

where $\vec{c_i}$ is the vector of peer i's normalized local trust values.

In the equation shown above, if C is assumed to be aperiodic and strongly connected, repeated multiplication by $C^T$, that is,

$\vec{t_i} = (C^T)^x \vec{c_i}$

converges to a stable value for a sufficiently large number of iterations x.

For a large value of x, the trust vector $\vec{t_i}$ converges to the same vector for every peer in the network. This vector $\vec{t}$ is the left principal eigenvector of the matrix C. Since $\vec{t}$ is the same for all peers in the network, it represents the global trust value.
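As a quick numerical sanity check (a sketch using an arbitrary row-stochastic matrix, not data from the paper), one can verify that repeated multiplication by $C^T$ converges to the left principal eigenvector of C:

```python
import numpy as np

# Arbitrary normalized local trust matrix (rows sum to 1).
C = np.array([[0.0, 0.2, 0.8],
              [0.1, 0.0, 0.9],
              [0.3, 0.7, 0.0]])

# Left principal eigenvector of C = principal eigenvector of C^T,
# normalized so its entries sum to 1.
vals, vecs = np.linalg.eig(C.T)
v = np.real(vecs[:, np.argmax(np.real(vals))])
v /= v.sum()

# Power iteration: start from a uniform vector, multiply by C^T repeatedly.
t = np.full(3, 1.0 / 3)
for _ in range(100):
    t = C.T @ t

print(np.allclose(t, v))  # True: both give the same global trust vector
```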

Based on the results above, a simple centralized algorithm for computing trust values can be written. Note that we assume all the local trust values for the whole network are available in the matrix C. We also note that if the iteration above converges, we can replace the initial vector $\vec{c_i}$ by a vector $\vec{e}$, the m-vector representing a uniform probability distribution over all m peers. The basic EigenTrust algorithm is shown below:

$\vec{t}^{(0)} = \vec{e}$
repeat
    $\vec{t}^{(k+1)} = C^T \vec{t}^{(k)}$
    $\delta = \lVert \vec{t}^{(k+1)} - \vec{t}^{(k)} \rVert$
until $\delta < \epsilon$
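In Python, the centralized iteration above can be rendered as follows. This is a sketch under the stated assumptions (C already normalized so each row sums to 1, and a convergence tolerance eps chosen by the caller), not production code:

```python
import numpy as np

def eigentrust(C: np.ndarray, eps: float = 1e-10, max_iter: int = 1000) -> np.ndarray:
    """Basic centralized EigenTrust: power iteration on C^T.

    C is the m x m matrix of normalized local trust values c_ij
    (each row sums to 1). Returns the global trust vector t.
    """
    m = C.shape[0]
    t = np.full(m, 1.0 / m)       # t^(0) = e, uniform over all m peers
    for _ in range(max_iter):
        t_next = C.T @ t          # t^(k+1) = C^T t^(k)
        delta = np.linalg.norm(t_next - t)
        t = t_next
        if delta < eps:           # until delta < eps
            break
    return t
```

Because each row of C sums to 1, every multiplication by $C^T$ preserves the total trust mass, so t remains a probability distribution over the peers throughout the iteration.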

References

  1. Kamvar, S. D.; Schlosser, M. T.; Garcia-Molina, H. (2003). "The EigenTrust algorithm for reputation management in P2P networks". Proceedings of the 12th International Conference on World Wide Web: 640–651. doi:10.1145/775152.775242. ISBN 1-58113-680-3. S2CID 3102087. Retrieved 5 July 2015.
  2. "Google Scholar". Retrieved 5 July 2015.